Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant versus a friend, a family member, or a stranger. The study compared speech rate and pitch variation across three conditions: familiar-human-, unfamiliar-human-, and voice-assistant-directed speech. Analysis of the recordings showed two consistent adjustments when participants addressed voice technology rather than another person: a slower speech rate and less pitch variation.
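The article doesn't include the researchers' analysis code, but the two measures are straightforward to estimate from a recording. Below is a minimal sketch, assuming the librosa library and a word count taken from a transcript (both assumptions, not details from the study): speech rate as words per second, and pitch variation as the standard deviation of the fundamental frequency over voiced frames.

```python
import numpy as np
import librosa

def rate_and_pitch_variation(wav_path, n_words):
    """Estimate speech rate (words/sec) and pitch variation (std of F0, Hz).

    Assumes n_words comes from a transcript of the recording.
    """
    y, sr = librosa.load(wav_path, sr=None)
    duration = len(y) / sr
    speech_rate = n_words / duration

    # Fundamental frequency via probabilistic YIN; unvoiced frames return NaN.
    f0, voiced_flag, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C6"),
        sr=sr,
    )
    pitch_sd = np.nanstd(f0)  # spread of F0 across voiced frames only
    return speech_rate, pitch_sd
```

Comparing these two numbers for the same speaker across the three conditions is one simple way to quantify the "slower and flatter" adjustment the study reports.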
Published by
AI Generated Robotic Content

Recent Posts

Radial Attention: O(n log n) Sparse Attention with Energy Decay for Long Video Generation

We just released RadialAttention, a sparse attention mechanism with O(n log n) computational complexity for long video…
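The teaser is truncated, and the snippet below is not the RadialAttention pattern itself; it only sketches the general family of O(n log n) sparse attention it belongs to: a mask that is dense near the diagonal and keeps only exponentially strided positions farther out.

```python
import numpy as np

def log_sparse_mask(n, window=4):
    """Boolean attention mask with O(n log n) nonzeros: dense within a local
    window, power-of-two strides at distance. Illustrative only -- not the
    actual RadialAttention pattern."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    dist = np.abs(i - j)
    local = dist <= window
    # distant positions kept only at power-of-two offsets
    pow2 = (dist > 0) & ((dist & (dist - 1)) == 0)
    return local | pow2
```

Each row keeps roughly 2*window + 2*log2(n) entries, so the total work scales as O(n log n) instead of the O(n^2) of dense attention.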

Mixture of Experts Architecture in Transformer Models

This post covers three main areas: • Why Mixture of Experts is Needed in Transformers…
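The linked post's own code isn't reproduced here; as a minimal sketch of the core idea, the layer below (PyTorch is an assumed framework) routes each token to the single highest-scoring expert and scales the expert's output by the router's gate value.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Minimal top-1 mixture-of-experts feed-forward block (illustrative)."""

    def __init__(self, d_model=64, d_ff=256, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                       # x: (tokens, d_model)
        gates = F.softmax(self.router(x), dim=-1)
        top_gate, top_idx = gates.max(dim=-1)   # top-1 routing per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = top_idx == e                  # tokens routed to expert e
            if sel.any():
                out[sel] = top_gate[sel].unsqueeze(1) * expert(x[sel])
        return out
```

Because each token activates only one expert's feed-forward weights, parameter count grows with the number of experts while per-token compute stays roughly constant.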

Your First Local LLM API Project in Python Step-By-Step

Interested in leveraging a large language model (LLM) API locally on your machine using Python…
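As a rough sketch of what such a project can look like, assuming an OpenAI-compatible server is already running locally (for example via Ollama or llama.cpp; the URL and model tag below are placeholders for whatever you run):

```python
import requests

# Placeholder endpoint and model tag -- adjust to your local server.
API_URL = "http://localhost:11434/v1/chat/completions"
MODEL = "llama3"

def ask(prompt: str) -> str:
    """Send one chat turn to the local LLM server and return its reply."""
    resp = requests.post(
        API_URL,
        json={"model": MODEL,
              "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize mixture-of-experts in one sentence."))
```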

Use Amazon SageMaker Unified Studio to build complex AI workflows using Amazon Bedrock Flows

Organizations face the challenge of managing data, multiple artificial intelligence and machine learning (AI/ML) tools,…

Capital One builds agentic AI modeled after its own org chart to supercharge auto sales

Capital One's head of AI foundations explained at VB Transform how the bank patterned…

A Pro-Russia Disinformation Campaign Is Using Free AI Tools to Fuel a ‘Content Explosion’

Consumer-grade AI tools have supercharged Russian-aligned disinformation as pictures, videos, QR codes, and fake websites…
