Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English (AAE). Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant versus talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar human-, unfamiliar human-, and voice assistant-directed speech. Analysis of the recordings showed that speakers made two consistent adjustments when addressing voice technology rather than another person: a slower rate of speech and less pitch variation.
AI Generated Robotic Content
