Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared with talking to a friend, family member, or stranger. The study compared speech rate and pitch variation across three conditions: familiar-human-directed, unfamiliar-human-directed, and voice-assistant-directed speech. Analysis of the recordings showed that speakers made two consistent adjustments when talking to voice technology rather than to another person: a slower rate of speech and less pitch variation.
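The two measures the study compared are straightforward to compute from a recording: speech rate as words per second, and pitch variation as the spread of fundamental frequency (F0) estimates over the voiced frames. A minimal sketch, assuming a transcript word count, the recording duration, and per-frame F0 values are already available (all names and numbers here are illustrative, not taken from the study):

```python
from statistics import pstdev

def speech_rate(word_count: int, duration_s: float) -> float:
    """Words per second over the recording."""
    return word_count / duration_s

def pitch_variation(f0_hz: list) -> float:
    """Standard deviation of voiced-frame F0 estimates, in Hz."""
    voiced = [f for f in f0_hz if f > 0]  # 0 Hz conventionally marks unvoiced frames
    return pstdev(voiced)

# Hypothetical measurements for one utterance spoken to a friend
# versus the same utterance spoken to a voice assistant.
friend = {"words": 24, "duration": 6.0, "f0": [210, 180, 250, 160, 230, 0, 195]}
assistant = {"words": 24, "duration": 8.0, "f0": [200, 195, 205, 190, 198, 0, 202]}

for label, rec in [("friend", friend), ("assistant", assistant)]:
    print(label,
          round(speech_rate(rec["words"], rec["duration"]), 2), "words/s,",
          round(pitch_variation(rec["f0"]), 1), "Hz F0 SD")
```

With these made-up numbers, the assistant-directed utterance comes out both slower and flatter, which is the direction of the adjustment the study reports.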
Published by
AI Generated Robotic Content
