Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English (AAE). Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared to talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar-human-directed, unfamiliar-human-directed, and voice-assistant-directed speech. Analysis of the recordings showed two consistent adjustments when speakers addressed voice technology rather than another person: a slower rate of speech and less pitch variation.
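The two measures compared in the study can be sketched in a few lines. This is a minimal illustration, not the researchers' actual pipeline: it assumes a word count and utterance duration are already available for speech rate, and a per-frame F0 (fundamental frequency) track in Hz for pitch variation, with 0 marking unvoiced frames. All numeric values below are made up for the example.

```python
import statistics

def speech_rate(n_words: int, duration_s: float) -> float:
    """Speech rate as words per second over the utterance."""
    return n_words / duration_s

def pitch_variation(f0_hz: list[float]) -> float:
    """Standard deviation of F0 over voiced frames (Hz),
    a simple proxy for pitch variation."""
    voiced = [f for f in f0_hz if f > 0]  # drop unvoiced frames (F0 = 0)
    return statistics.stdev(voiced)

# Hypothetical utterances: assistant-directed vs. friend-directed speech.
assistant_rate = speech_rate(n_words=18, duration_s=10.0)          # slower
friend_rate    = speech_rate(n_words=26, duration_s=10.0)

assistant_f0_sd = pitch_variation([110, 0, 115, 112, 0, 118, 111])  # flatter
friend_f0_sd    = pitch_variation([105, 0, 140, 95, 0, 160, 120])
```

With the made-up numbers above, the assistant-directed utterance comes out both slower (fewer words per second) and flatter (lower F0 standard deviation), mirroring the pattern the study reports.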