
Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted Language Models

Large language models (LLMs) are increasingly being adapted for task-specific deployment in real-world decision systems. Several prior works have investigated the bias transfer hypothesis (BTH) by studying how the fine-tuning adaptation strategy affects model fairness, finding that fairness in pre-trained masked language models has limited effect on the fairness of those models once adapted via fine-tuning. In this work, we expand the study of BTH to causal models under prompt adaptations, as prompting is an accessible and compute-efficient way to deploy…

Recent Posts

Understanding RAG Part VII: Vector Databases & Indexing Strategies

Be sure to check out the previous articles in this series: …


Mastering Time Series Forecasting: From ARIMA to LSTM

Time series forecasting is a statistical technique used to analyze historical data points and predict…


Gemini Robotics brings AI into the physical world

Introducing Gemini Robotics and Gemini Robotics-ER, AI models designed for robots to understand, act and…


Exploring creative possibilities: A visual guide to Amazon Nova Canvas

Compelling AI-generated images start with well-crafted prompts. In this follow-up to our Amazon Nova Canvas…


Announcing Gemma 3 on Vertex AI

Today, we’re sharing that the new Gemma 3 model is available on Vertex AI Model Garden,…


Google’s native multimodal AI image generation in Gemini 2.0 Flash impresses with fast edits, style transfers

It enables developers to create illustrations, refine images through conversation, and generate detailed visuals.
