
Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared to talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar-human-directed, unfamiliar-human-directed, and voice-assistant-directed speech. Analysis of the recordings showed that speakers made two consistent adjustments when addressing voice technology rather than another person: a slower rate of speech and less pitch variation.
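The two measures the study compared can be approximated in a few lines: speech rate as words per second, and pitch variation as the standard deviation of fundamental frequency (F0) over voiced frames. A minimal sketch, with illustrative made-up numbers (the function names and sample values are assumptions, not from the study):

```python
import statistics

def speech_rate(word_count: int, duration_seconds: float) -> float:
    """Words per second: a simple proxy for speech rate."""
    return word_count / duration_seconds

def pitch_variation(f0_hz: list[float]) -> float:
    """Standard deviation of F0 in Hz over voiced frames:
    a common proxy for pitch variation."""
    return statistics.stdev(f0_hz)

# Hypothetical pitch tracks: assistant-directed speech is slower and flatter.
friend = {"words": 30, "seconds": 10, "f0": [180, 220, 160, 240, 200]}
assistant = {"words": 20, "seconds": 10, "f0": [190, 200, 195, 205, 198]}

print(speech_rate(friend["words"], friend["seconds"]))        # 3.0 words/s
print(speech_rate(assistant["words"], assistant["seconds"]))  # 2.0 words/s
print(round(pitch_variation(friend["f0"]), 1))
print(round(pitch_variation(assistant["f0"]), 1))
```

In a real pipeline, the word count would come from a transcript and the F0 track from a pitch extractor such as an autocorrelation-based estimator; the comparison across conditions is then a straightforward difference in these two summary statistics.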
Published by AI Generated Robotic Content