Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant versus a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar-human-, unfamiliar-human-, and voice-assistant-directed speech. Analysis of the recordings showed that speakers made two consistent adjustments when addressing voice technology rather than another person: a slower rate of speech with less pitch variation.
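As an illustrative sketch only (not the study's actual analysis pipeline), the two measures being compared, speech rate and pitch variation, could be computed from transcribed, pitch-tracked recordings roughly like this; the syllable counts, durations, and F0 samples below are hypothetical:

```python
import statistics

def speech_rate(num_syllables: int, duration_s: float) -> float:
    """Syllables per second over the utterance."""
    return num_syllables / duration_s

def pitch_variation(f0_hz: list[float]) -> float:
    """Standard deviation of voiced F0 samples, in Hz."""
    return statistics.stdev(f0_hz)

# Hypothetical measurements for one speaker in two conditions.
friend_directed = {"syllables": 42, "duration": 10.0,
                   "f0": [180.0, 210.0, 160.0, 230.0, 150.0, 200.0]}
assistant_directed = {"syllables": 42, "duration": 13.0,
                      "f0": [185.0, 195.0, 180.0, 190.0, 188.0, 192.0]}

# The pattern the study reports: slower rate and less pitch
# variation when addressing the voice assistant.
assert (speech_rate(assistant_directed["syllables"], assistant_directed["duration"])
        < speech_rate(friend_directed["syllables"], friend_directed["duration"]))
assert (pitch_variation(assistant_directed["f0"])
        < pitch_variation(friend_directed["f0"]))
```

In practice the F0 track would come from a pitch tracker and the syllable timing from forced alignment; the comparison itself reduces to exactly these two per-utterance statistics.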
AI Generated Robotic Content
