Investigating Intersectional Bias in Large Language Models using Confidence Disparities in Coreference Resolution

Large language models (LLMs) have achieved impressive performance, leading to their widespread adoption as decision-support tools in resource-constrained contexts like hiring and admissions. There is, however, scientific consensus that AI systems can reflect and exacerbate societal biases, raising concerns about identity-based harm when used in critical social contexts. Prior work has laid a solid foundation for assessing bias in LLMs by evaluating demographic disparities in different language reasoning tasks. In this work, we extend single-axis fairness evaluations to examine intersectional…
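The core measurement the abstract describes — comparing model confidence across demographic groups on a coreference task — can be sketched minimally. The grouping labels, scores, and the max–min gap metric below are illustrative assumptions, not the paper's actual protocol:

```python
# Hypothetical sketch: a confidence disparity across demographic
# groups on coreference examples. Groups and scores are made up.
from collections import defaultdict

def confidence_disparity(records):
    """records: iterable of (group, confidence) pairs, confidence in [0, 1].
    Returns the gap between the highest and lowest per-group means."""
    by_group = defaultdict(list)
    for group, conf in records:
        by_group[group].append(conf)
    means = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(means.values()) - min(means.values())

records = [
    ("group_A", 0.92), ("group_A", 0.88),  # mean 0.90
    ("group_B", 0.71), ("group_B", 0.65),  # mean 0.68
]
print(round(confidence_disparity(records), 2))  # 0.22
```

An intersectional variant would use compound group keys (e.g. tuples of identity attributes) rather than a single axis, so disparities invisible in any one attribute can surface.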
