
Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted Language Models

Large language models (LLMs) are increasingly being adapted to achieve task specificity for deployment in real-world decision systems. Several previous works have investigated the bias transfer hypothesis (BTH) by studying the effect of the fine-tuning adaptation strategy on model fairness, finding that fairness in pre-trained masked language models has limited effect on the fairness of models adapted using fine-tuning. In this work, we expand the study of BTH to causal models under prompt adaptations, as prompting is an accessible and compute-efficient way to deploy…