Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is individuals who speak African American English (AAE). Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared with talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar-human-, unfamiliar-human-, and voice-assistant-directed speech. Analysis of the recordings showed that speakers made two consistent adjustments when addressing voice technology rather than another person: a slower speech rate and less pitch variation.
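The two measures the study compares can be sketched in a few lines. The sketch below is a minimal illustration, not the researchers' actual analysis pipeline: it assumes speech rate is approximated as words per second and pitch variation as the standard deviation of log-pitch in semitones, and the frame-level pitch values and word counts are hypothetical numbers chosen only to show the reported direction of the effect.

```python
import math
import statistics

def speech_rate(word_count: int, duration_s: float) -> float:
    """Words per second -- a simple proxy for speech rate."""
    return word_count / duration_s

def pitch_variation(f0_hz: list[float]) -> float:
    """Standard deviation of pitch in semitones (log scale), a
    common way to measure pitch variation independent of register."""
    semitones = [12 * math.log2(f) for f in f0_hz if f > 0]
    return statistics.stdev(semitones)

# Hypothetical measurements for one speaker in two conditions:
# flatter, slower speech toward the assistant; livelier toward a friend.
assistant = {
    "rate": speech_rate(42, 20.0),                       # fewer words in 20 s
    "pitch_var": pitch_variation([110, 112, 111, 115, 113]),  # narrow f0 range
}
friend = {
    "rate": speech_rate(55, 20.0),
    "pitch_var": pitch_variation([105, 130, 118, 150, 98]),   # wide f0 range
}

# The study's finding corresponds to both values being lower
# in the assistant-directed condition.
assert assistant["rate"] < friend["rate"]
assert assistant["pitch_var"] < friend["pitch_var"]
```

Real analyses would extract f0 with a pitch tracker and measure rate in syllables per second, but the comparison logic is the same.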
AI Generated Robotic Content
