Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared to talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar-human-directed, unfamiliar-human-directed, and voice-assistant-directed speech. Analysis of the recordings showed that speakers made two consistent adjustments when addressing voice technology rather than another person: a slower rate of speech and less pitch variation.
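The kind of comparison described above can be sketched with a few lines of Python. This is not the study's actual analysis pipeline; the data below are made up for illustration, and `summarize` is a hypothetical helper that computes the two measures the study compared: mean speech rate and pitch variability (standard deviation of pitch) per condition.

```python
from statistics import mean, stdev

# Illustrative (made-up) per-recording measurements for each condition:
# (speech rate in syllables/second, pitch samples in Hz over the clip).
recordings = {
    "familiar_human":   [(5.1, [180, 220, 150, 240]), (4.9, [170, 230, 160, 250])],
    "unfamiliar_human": [(4.8, [175, 215, 160, 235]), (4.7, [180, 210, 165, 225])],
    "voice_assistant":  [(4.1, [190, 205, 185, 210]), (4.0, [195, 200, 190, 205])],
}

def summarize(clips):
    """Mean speech rate and mean pitch variability (std dev) for one condition."""
    rates = [rate for rate, _ in clips]
    pitch_sds = [stdev(pitch) for _, pitch in clips]
    return mean(rates), mean(pitch_sds)

for condition, clips in recordings.items():
    rate, pitch_sd = summarize(clips)
    print(f"{condition}: {rate:.2f} syll/s, pitch SD {pitch_sd:.1f} Hz")
```

With numbers shaped like the study's finding, the voice-assistant condition shows both a lower mean speech rate and a smaller pitch standard deviation than the human-directed conditions.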
AI Generated Robotic Content
