
Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted Language Models

Large language models (LLMs) are increasingly being adapted for task specificity before deployment in real-world decision systems. Several previous works have investigated the bias transfer hypothesis (BTH) by studying the effect of the fine-tuning adaptation strategy on model fairness, finding that fairness in pre-trained masked language models has limited effect on the fairness of those models once adapted via fine-tuning. In this work, we expand the study of BTH to causal models under prompt adaptations, as prompting is an accessible and compute-efficient way to deploy…
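One common way to test a bias transfer hypothesis like the one described above is to score the same model before and after adaptation on a shared set of probes, then check whether the two sets of bias scores remain correlated. The following is a minimal illustrative sketch, not the paper's method: the per-prompt bias scores are hypothetical placeholder numbers standing in for, e.g., log-likelihood differences between gendered continuations.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-probe gender-bias scores (e.g., difference in model
# log-likelihood between "he" and "she" continuations for each prompt).
pretrained_bias = [0.42, -0.10, 0.55, 0.08, -0.31]  # base causal model
prompted_bias   = [0.35, -0.05, 0.60, 0.12, -0.25]  # same model + task prompt

# Under BTH, bias "transfers" if the scores stay strongly correlated
# after the prompt adaptation.
r = pearson(pretrained_bias, prompted_bias)
print(f"correlation between base and prompt-adapted bias: {r:.2f}")
```

With real models, the score lists would come from evaluating a fairness benchmark on the pre-trained and prompt-adapted variants; the correlation step itself is unchanged.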
