Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is individuals who speak African American English (AAE). Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared with talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar-human-directed, unfamiliar-human-directed, and voice-assistant-directed speech. Analysis of the recordings showed two consistent adjustments when speakers addressed voice technology rather than another person: a slower speech rate and less pitch variation.
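The two measures the study compares can be illustrated with a minimal sketch. All numbers and function names below are hypothetical illustrations, not the study's data or analysis code: speech rate is words per second over an utterance, and pitch variation is the spread of the fundamental-frequency (F0) contour.

```python
import statistics

def speech_rate(num_words, duration_s):
    """Words per second over an utterance."""
    return num_words / duration_s

def pitch_variation(f0_hz):
    """Sample standard deviation of the F0 contour, in Hz."""
    return statistics.stdev(f0_hz)

# Hypothetical F0 contours (Hz): wider swings toward a friend,
# a compressed range toward a voice assistant.
friend_f0 = [180, 210, 165, 230, 175, 220]
device_f0 = [185, 195, 180, 200, 190, 198]

# Hypothetical timings: same 12 words, spoken more slowly to the device.
friend_rate = speech_rate(num_words=12, duration_s=4.0)
device_rate = speech_rate(num_words=12, duration_s=5.5)

# The pattern the study reports: slower rate, less pitch variation
# in device-directed speech.
assert device_rate < friend_rate
assert pitch_variation(device_f0) < pitch_variation(friend_f0)
```

In practice F0 contours would be extracted from recordings with a pitch tracker and word counts from transcripts; the point here is only what "slower rate" and "less pitch variation" mean as measurements.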