
Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted Language Models

*Equal Contributors
Large language models (LLMs) are increasingly being adapted for task-specific deployment in real-world decision systems. Several prior works have investigated the bias transfer hypothesis (BTH) by studying the effect of the fine-tuning adaptation strategy on model fairness, finding that fairness in pre-trained masked language models has limited effect on the fairness of models adapted via fine-tuning. In this work, we expand the study of BTH to causal models under prompt adaptations, as prompting is an accessible and compute-efficient way to deploy…
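To make the setup concrete, a bias-transfer study of this kind typically scores gendered variants of templated sentences under both the pre-trained and the prompt-adapted model and compares the resulting gaps. The sketch below is hypothetical and illustrative only: `bias_gap`, the templates, and the placeholder scores are assumptions standing in for a real model's sentence log-likelihoods, not the paper's actual method or data.

```python
# Hypothetical sketch: a simple "bias gap" metric over gendered
# sentence templates. In a real study, score_fn would be a model's
# sentence log-likelihood; here it is a hard-coded placeholder.

def bias_gap(score_fn, templates, groups=("he", "she")):
    """Mean absolute score difference between group-substituted templates."""
    diffs = []
    for t in templates:
        a = score_fn(t.format(pronoun=groups[0]))
        b = score_fn(t.format(pronoun=groups[1]))
        diffs.append(abs(a - b))
    return sum(diffs) / len(diffs)

templates = [
    "{pronoun} is a nurse.",
    "{pronoun} is an engineer.",
]

# Placeholder scores standing in for model log-likelihoods (invented numbers).
toy_scores = {
    "he is a nurse.": -2.0, "she is a nurse.": -1.5,
    "he is an engineer.": -1.2, "she is an engineer.": -2.1,
}

gap = bias_gap(lambda s: toy_scores[s], templates)
print(round(gap, 2))  # 0.7
```

Under the bias transfer hypothesis, one would compute this gap twice, once with the pre-trained model's scorer and once with the prompt-adapted model's scorer, and ask how strongly the two correlate.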