Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is individuals who speak African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant versus talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar human-, unfamiliar human-, and voice assistant-directed speech. Analysis of the recordings showed two consistent adjustments when speakers addressed voice technology rather than another person: a slower rate of speech and less pitch variation. A minimal sketch of that kind of acoustic measurement is given below.
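The article does not describe the study's analysis pipeline, so the following is only an illustrative sketch of how speech rate and pitch variation might be measured per recording. It assumes a word count is available for each utterance (e.g., from a transcript) and uses librosa's pYIN pitch tracker; the function name and file paths are hypothetical.

```python
# Illustrative sketch, not the study's actual method.
import numpy as np
import librosa


def speech_rate_and_pitch_variation(wav_path: str, n_words: int):
    """Return (words per second, pitch standard deviation in Hz) for one recording."""
    y, sr = librosa.load(wav_path, sr=None)  # keep the native sample rate
    duration_s = len(y) / sr

    # Rough speech-rate proxy: words spoken divided by total duration.
    # (A fuller analysis would trim silences or use forced alignment.)
    rate = n_words / duration_s

    # Pitch track via probabilistic YIN; unvoiced frames come back as NaN.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C6"),
        sr=sr,
    )
    pitch_sd = float(np.nanstd(f0))  # variation across voiced frames only

    return rate, pitch_sd


# Usage: compare the same speaker across conditions (hypothetical file names).
# for condition, path in {"friend": "friend.wav", "assistant": "assistant.wav"}.items():
#     print(condition, speech_rate_and_pitch_variation(path, n_words=25))
```

Lower words-per-second and a smaller pitch standard deviation in the assistant-directed condition would correspond to the slower, flatter speech the study reports.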