Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is individuals who speak African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared to talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar-human-directed, unfamiliar-human-directed, and voice-assistant-directed speech. Analysis of the recordings showed that speakers made two consistent adjustments when talking to voice technology rather than to another person: a slower rate of speech and less pitch variation.
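The two measures the study compared can be sketched simply: speech rate as syllables per second, and pitch variation as the spread of the fundamental frequency (F0) over an utterance. A minimal illustration, using hypothetical per-utterance numbers (the function names, values, and Hz-based variation measure are assumptions for illustration, not the study's actual pipeline):

```python
import statistics

def speech_rate(syllables: int, duration_s: float) -> float:
    """Speech rate in syllables per second."""
    return syllables / duration_s

def pitch_variation(f0_hz: list) -> float:
    """Pitch variation as the sample standard deviation of F0 (Hz)."""
    return statistics.stdev(f0_hz)

# Hypothetical measurements: the same phrase addressed to a person
# vs. a voice assistant (values invented for illustration).
human_directed = {"syllables": 22, "duration_s": 5.0,
                  "f0_hz": [180, 210, 150, 230, 165, 205]}
device_directed = {"syllables": 22, "duration_s": 6.5,
                   "f0_hz": [185, 195, 180, 200, 188, 192]}

for label, utt in (("human", human_directed), ("device", device_directed)):
    rate = speech_rate(utt["syllables"], utt["duration_s"])
    sd = pitch_variation(utt["f0_hz"])
    print(f"{label}: {rate:.2f} syl/s, pitch sd {sd:.1f} Hz")
```

With these toy numbers, the device-directed utterance shows both of the reported adjustments: a lower syllables-per-second rate and a smaller F0 standard deviation.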
Published by
AI Generated Robotic Content
