Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted Language Models

Large language models (LLMs) are increasingly adapted for task-specific deployment in real-world decision systems. Several prior works have investigated the bias transfer hypothesis (BTH) by studying how the fine-tuning adaptation strategy affects model fairness, finding that fairness in pre-trained masked language models has limited effect on the fairness of those models after fine-tuning. In this work, we expand the study of BTH to causal models under prompt adaptations, as prompting is an accessible and compute-efficient way to deploy…
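
For concreteness, below is a minimal sketch of what prompt adaptation looks like in practice, assuming the Hugging Face transformers API with "gpt2" as a hypothetical stand-in checkpoint; the paper's actual models, tasks, and prompts are not shown in this excerpt.

```python
# Minimal sketch of prompt adaptation for a causal LM (assumed setup:
# Hugging Face transformers, "gpt2" as a placeholder checkpoint).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Task-specificity comes from the prompt alone: no gradient updates are
# performed, so the pre-trained weights (and whatever bias they encode)
# are carried into deployment unchanged.
prompt = (
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: The battery life is excellent.\n"
    "Sentiment:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=3,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
# Print only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```

Unlike fine-tuning, which updates model parameters on task data, this strategy leaves the pre-trained model intact, which is why the question of whether pre-training bias transfers to the adapted behavior is posed differently here.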