Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English (AAE). Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant compared with talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar-human-, unfamiliar-human-, and voice-assistant-directed speech. Analysis of the recordings showed two consistent adjustments when speakers addressed voice technology rather than another person: a slower speech rate and less pitch variation.
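The two measures the study compares can be illustrated with a minimal sketch. The function names, the syllable counts, and the F0 (fundamental frequency) values below are hypothetical placeholders, not the study's data or method; speech rate is approximated as syllables per second and pitch variation as the standard deviation of voiced F0 estimates:

```python
from statistics import stdev

def speech_rate(num_syllables: int, duration_s: float) -> float:
    """Syllables per second: a common proxy for speaking rate."""
    return num_syllables / duration_s

def pitch_variation(f0_hz: list) -> float:
    """Standard deviation of voiced F0 estimates (Hz), a simple
    measure of how much the pitch varies."""
    return stdev(f0_hz)

# Hypothetical numbers illustrating the reported pattern:
# device-directed speech is slower and has flatter pitch.
friend = {
    "rate": speech_rate(52, 10.0),                       # 5.2 syll/s
    "pitch_sd": pitch_variation([110, 145, 170, 125, 160]),
}
device = {
    "rate": speech_rate(40, 10.0),                       # 4.0 syll/s
    "pitch_sd": pitch_variation([118, 124, 130, 121, 127]),
}
print(friend, device)
```

In the study's terms, a lower `rate` and a smaller `pitch_sd` in the device condition would correspond to the slower, flatter speech the researchers observed.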
Published by AI Generated Robotic Content