
Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant versus a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar-human-, unfamiliar-human-, and voice-assistant-directed speech. Analysis of the recordings showed two consistent adjustments when speakers addressed voice technology rather than another person: a slower speech rate and less pitch variation.
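The two measures the study compares can be illustrated with a toy analysis. Below is a minimal Python sketch, where the recordings, field names, and all numeric values are made-up assumptions for illustration, not the study's data; speech rate is approximated as words per second and pitch variation as the standard deviation of the fundamental-frequency (F0) track:

```python
import statistics

# Hypothetical per-recording measurements (illustrative only, not study data).
recordings = [
    {"condition": "voice_assistant", "words": 42, "duration_s": 18.0,
     "f0_hz": [118, 122, 120, 119, 121]},
    {"condition": "familiar_human", "words": 48, "duration_s": 15.0,
     "f0_hz": [110, 135, 98, 142, 120]},
]

def speech_rate(rec):
    """Speech rate as words per second for one recording."""
    return rec["words"] / rec["duration_s"]

def pitch_variation(rec):
    """Pitch variation as the standard deviation of the F0 track, in Hz."""
    return statistics.stdev(rec["f0_hz"])

for rec in recordings:
    print(f'{rec["condition"]}: '
          f'{speech_rate(rec):.2f} words/s, '
          f'{pitch_variation(rec):.2f} Hz F0 spread')
```

With these toy numbers, the assistant-directed recording shows a slower rate and a flatter pitch contour than the human-directed one, mirroring the direction of the reported effect.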
AI Generated Robotic Content
