Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared with talking to a friend, family member, or stranger. The study compared three conditions (familiar-human-, unfamiliar-human-, and voice-assistant-directed speech), measuring speech rate and pitch variation in each. Analysis of the recordings showed two consistent adjustments when speakers addressed voice technology rather than another person: a slower rate of speech and less pitch variation.
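The study's own analysis pipeline isn't published here, but as a rough illustration of what "pitch variation" means in this kind of comparison, the sketch below estimates per-frame fundamental frequency (f0) with a simple autocorrelation method and reports the spread of f0 in semitones. The synthetic "friend-directed" and "device-directed" pitch contours are invented for illustration only and are not the study's data.

```python
import numpy as np

SR = 16000      # sample rate (Hz)
FRAME = 1024    # analysis frame length (samples)

def estimate_f0(frame, sr=SR, fmin=75, fmax=300):
    """Crude autocorrelation pitch estimate for a single frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)  # plausible lag range for speech f0
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def synth(f0_contour, sr=SR, frame=FRAME):
    """Synthesize a tone whose pitch follows f0_contour (one f0 per frame)."""
    chunks, phase = [], 0.0
    for f0 in f0_contour:
        t = np.arange(frame) / sr
        chunks.append(np.sin(2 * np.pi * f0 * t + phase))
        phase += 2 * np.pi * f0 * frame / sr  # keep phase continuous
    return np.concatenate(chunks)

def pitch_variation(signal, sr=SR, frame=FRAME):
    """Std. dev. of frame-wise f0, in semitones relative to the mean f0."""
    f0s = np.array([estimate_f0(signal[i:i + frame])
                    for i in range(0, len(signal) - frame + 1, frame)])
    semitones = 12 * np.log2(f0s / f0s.mean())
    return semitones.std()

# Hypothetical contours: a wider pitch sweep when talking to a friend,
# a flatter one when talking to a device (illustrative values, not study data).
sweep = np.sin(np.linspace(0, 4 * np.pi, 32))
friend_var = pitch_variation(synth(180 + 40 * sweep))
device_var = pitch_variation(synth(180 + 10 * sweep))
print(f"friend-directed pitch variation: {friend_var:.2f} semitones")
print(f"device-directed pitch variation: {device_var:.2f} semitones")
```

A flatter contour yields a smaller semitone spread, which is the direction of the effect the study reports for voice-assistant-directed speech.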
AI Generated Robotic Content
