Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted Language Models

*Equal Contributors
Large language models (LLMs) are increasingly being adapted for task specificity before deployment in real-world decision systems. Several prior works have investigated the bias transfer hypothesis (BTH) by studying how the fine-tuning adaptation strategy affects model fairness, finding that fairness in pre-trained masked language models has limited effect on the fairness of those models once fine-tuned. In this work, we expand the study of BTH to causal models under prompt adaptations, as prompting is an accessible and compute-efficient way to deploy…