
Investigating Intersectional Bias in Large Language Models using Confidence Disparities in Coreference Resolution

Large language models (LLMs) have achieved impressive performance, leading to their widespread adoption as decision-support tools in resource-constrained contexts like hiring and admissions. There is, however, scientific consensus that AI systems can reflect and exacerbate societal biases, raising concerns about identity-based harm when used in critical social contexts. Prior work has laid a solid foundation for assessing bias in LLMs by evaluating demographic disparities in different language reasoning tasks. In this work, we extend single-axis fairness evaluations to examine intersectional…
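To make the measurement the abstract describes more concrete, here is a minimal sketch of one way to probe a confidence disparity in coreference resolution: score a model's confidence in resolving an ambiguous pronoun across sentences that differ only in intersectional identity terms. The template, identity terms, and the referent_confidence helper below are illustrative assumptions for exposition, not the paper's actual benchmark or protocol.

```python
# Illustrative sketch only: templates, identity terms, and helpers are
# assumptions, not the paper's actual evaluation protocol.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sequence_logprob(text: str) -> float:
    """Total log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # `loss` is the mean negative log-likelihood per predicted token;
    # multiply by the number of predictions to recover the sum.
    return -out.loss.item() * (ids.shape[1] - 1)

def referent_confidence(context: str, candidates: list[str], target: str) -> float:
    """Normalized confidence that `target` resolves the pronoun.

    Scores a disambiguating continuation for each candidate referent,
    then softmaxes over candidates so that contexts of different token
    lengths remain comparable across identity groups.
    """
    logps = torch.tensor([
        sequence_logprob(f"{context} 'They' refers to the {c}.")
        for c in candidates
    ])
    probs = torch.softmax(logps, dim=0)
    return probs[candidates.index(target)].item()

# Hypothetical WinoBias-style template instantiated across
# intersectional identity terms.
template = ("The {identity} physician argued with the receptionist "
            "because they were running late.")
identities = ["Black woman", "Black man", "white woman", "white man"]

scores = {
    ident: referent_confidence(
        template.format(identity=ident),
        candidates=["physician", "receptionist"],
        target="physician",
    )
    for ident in identities
}
disparity = max(scores.values()) - min(scores.values())
print(scores)
print(f"Max confidence gap across groups: {disparity:.3f}")
```

Normalizing over the two candidate referents within each context cancels length effects introduced by the identity terms; a real evaluation would aggregate such gaps over many templates, occupation pairs, and identity combinations.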