Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English (AAE). Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared with talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: speech directed at a familiar human, at an unfamiliar human, and at a voice assistant. Analysis of the recordings showed that speakers made two consistent adjustments when addressing voice technology rather than another person: they spoke more slowly and with less pitch variation.
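The two measures the study compared can be sketched in a few lines. This is a minimal illustration, not the researchers' pipeline: the recordings, word counts, durations, and f0 (pitch) values below are invented for demonstration, and pitch variation is approximated here as the standard deviation of an f0 track, one common proxy.

```python
from statistics import stdev

# Hypothetical per-condition measurements (illustrative values only):
# total words spoken, recording duration in seconds, and a sampled
# f0 (fundamental frequency) track in Hz.
recordings = {
    "familiar_human":   {"words": 52, "seconds": 18.0, "f0": [190, 230, 175, 250, 205]},
    "unfamiliar_human": {"words": 50, "seconds": 19.5, "f0": [185, 220, 180, 240, 200]},
    "voice_assistant":  {"words": 48, "seconds": 23.0, "f0": [195, 205, 190, 210, 200]},
}

def speech_rate(rec):
    """Words per second: a simple speech-rate measure."""
    return rec["words"] / rec["seconds"]

def pitch_variation(rec):
    """Standard deviation of the f0 track, a rough proxy for pitch variation."""
    return stdev(rec["f0"])

for condition, rec in recordings.items():
    print(f"{condition}: {speech_rate(rec):.2f} words/s, "
          f"f0 sd = {pitch_variation(rec):.1f} Hz")
```

With these toy numbers, the voice-assistant condition shows both a lower speech rate and a smaller f0 standard deviation than the human-directed conditions, mirroring the pattern the study reports.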
AI Generated Robotic Content

