Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is individuals who speak African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared with talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: speech directed at a familiar human, at an unfamiliar human, and at a voice assistant. Analysis of the recordings showed that speakers made two consistent adjustments when addressing voice technology rather than another person: they spoke more slowly and with less pitch variation.
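To make the two measures concrete, here is a minimal sketch of how speech rate and pitch variation could be compared across addressee conditions. This is not the study's actual analysis pipeline; the condition names, measurements, and data below are invented for illustration.

```python
# Illustrative sketch only: comparing speech rate (words/sec) and pitch
# variation (std. dev. of F0 in Hz) across addressee conditions.
# All measurements below are made up for demonstration.
from statistics import mean, stdev

# Hypothetical per-recording data: (condition, word_count, duration_sec, f0_track_hz)
recordings = [
    ("familiar_human",  42, 12.0, [180, 210, 165, 230, 175]),
    ("familiar_human",  38, 10.5, [190, 205, 160, 240, 170]),
    ("voice_assistant", 40, 14.0, [185, 195, 180, 200, 190]),
    ("voice_assistant", 36, 13.5, [188, 192, 183, 198, 186]),
]

def summarize(condition):
    """Return (mean speech rate in words/sec, mean F0 standard deviation in Hz)."""
    rows = [r for r in recordings if r[0] == condition]
    rate = mean(words / secs for _, words, secs, _ in rows)
    pitch_var = mean(stdev(f0) for *_, f0 in rows)
    return rate, pitch_var

for cond in ("familiar_human", "voice_assistant"):
    rate, pv = summarize(cond)
    print(f"{cond}: {rate:.2f} words/s, F0 sd {pv:.1f} Hz")
```

With the toy data above, the voice-assistant condition shows a lower speech rate and a smaller F0 standard deviation, matching the direction of the reported findings.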
Published by AI Generated Robotic Content