ChatGPT is biased against resumes with credentials that imply a disability—but it can improve

While seeking research internships last year, University of Washington graduate student Kate Glazko noticed recruiters posting online that they’d used OpenAI’s ChatGPT and other artificial intelligence tools to summarize resumes and rank candidates. Automated screening has been commonplace in hiring for decades. Yet Glazko, a doctoral student in the UW’s Paul G. Allen School of Computer Science & Engineering, studies how generative AI can replicate and amplify real-world biases—such as those against disabled people. How might such a system, she wondered, rank resumes that implied someone had a disability?
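The workflow Glazko saw recruiters describe is simple to reproduce: paste a job description and a set of resumes into a chat model and ask it to rank the candidates. The sketch below shows one way to do that with the OpenAI Python client; the model name, prompt wording, and resume texts are assumptions for illustration, not the study's actual materials.

```python
# A minimal sketch of the resume-ranking workflow described above, using the
# OpenAI Python client (openai>=1.0). The model name, prompt wording, and
# resume texts are illustrative assumptions, not details from the UW study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JOB_DESCRIPTION = "Research internship in human-computer interaction."

# Hypothetical resumes: identical except that candidate_b lists a
# disability-related credential, mirroring the comparison the study ran.
resumes = {
    "candidate_a": "M.S. in Computer Science; publications in HCI venues.",
    "candidate_b": "M.S. in Computer Science; publications in HCI venues; "
                   "recipient of a disability leadership award.",
}

prompt = (
    f"Job description:\n{JOB_DESCRIPTION}\n\n"
    + "\n\n".join(f"Resume for {name}:\n{text}"
                  for name, text in resumes.items())
    + "\n\nRank the candidates from strongest to weakest and briefly explain why."
)

# Plain ranking, as a recruiter might run it.
ranked = client.chat.completions.create(
    model="gpt-4o",  # assumed model, for illustration only
    messages=[{"role": "user", "content": prompt}],
)
print(ranked.choices[0].message.content)

# The "but it can improve" half of the headline: the researchers report that
# customizing the model with written instructions reduced the bias. A system
# message is one way to approximate that kind of customization via the API.
mitigated = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Evaluate candidates on qualifications only. Do not "
                    "penalize disability-related credentials such as "
                    "accessibility awards or advocacy experience."},
        {"role": "user", "content": prompt},
    ],
)
print(mitigated.choices[0].message.content)
```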