Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English (AAE). Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant versus talking to a friend, family member, or stranger. The study compared three conditions (familiar human-, unfamiliar human-, and voice assistant-directed speech) on two measures: speech rate and pitch variation. Analysis of the recordings showed that speakers made two consistent adjustments when addressing voice technology compared to addressing another person: they spoke more slowly and with less pitch variation.
Published by
AI Generated Robotic Content
