Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared with talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar-human-, unfamiliar-human-, and voice-assistant-directed speech. Analysis of the recordings showed two consistent adjustments when speakers addressed voice technology rather than another person: a slower rate of speech and less pitch variation.
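The kind of comparison described above can be sketched in a few lines. The sketch below uses entirely made-up per-recording numbers (not the study's data) to show how per-condition speech rate (syllables per second) and pitch variability (standard deviation of F0) might be summarized and compared; the condition names and values are illustrative assumptions.

```python
from statistics import mean

# Hypothetical per-recording measurements, NOT from the study:
# "rate" = speech rate in syllables/second; "pitch_sd" = SD of F0 in Hz,
# assumed precomputed per recording by a pitch tracker.
human_directed = {
    "rate": [5.1, 4.8, 5.3, 5.0],
    "pitch_sd": [28.0, 31.5, 26.2, 29.8],
}
device_directed = {
    "rate": [4.2, 4.0, 4.4, 4.1],
    "pitch_sd": [18.5, 20.1, 17.9, 19.4],
}

def summarize(condition):
    """Return (mean speech rate, mean pitch SD) for one condition."""
    return mean(condition["rate"]), mean(condition["pitch_sd"])

h_rate, h_sd = summarize(human_directed)
d_rate, d_sd = summarize(device_directed)

# The pattern reported in the study: device-directed speech is slower
# and shows less pitch variation than human-directed speech.
print(f"speech rate:  human {h_rate:.2f} vs device {d_rate:.2f} syll/s")
print(f"pitch SD:     human {h_sd:.1f} vs device {d_sd:.1f} Hz")
```

In a real analysis, the per-recording values would come from forced alignment (for syllable counts) and a pitch tracker (for F0), followed by a statistical test across speakers rather than a simple mean comparison.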
Published by AI Generated Robotic Content