Categories: AI/ML News

Machine listening: Making speech recognition systems more inclusive

One group commonly misunderstood by voice technology is speakers of African American English, or AAE. Researchers designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared to talking to a friend, family member, or stranger. The study compared three conditions — familiar human-directed, unfamiliar human-directed, and voice assistant-directed speech — using two acoustic measures: speech rate and pitch variation. Analysis of the recordings showed that speakers made two consistent adjustments when addressing voice technology rather than another person: they spoke more slowly and with less pitch variation.
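The two measures named above are straightforward to compute once an utterance has been transcribed and pitch-tracked. The sketch below is purely illustrative, with hypothetical word counts and F0 (fundamental frequency) samples standing in for real recordings; it is not the study's analysis pipeline, just a minimal demonstration of speech rate (words per second) and pitch variation (standard deviation of voiced F0).

```python
import statistics

def speech_rate(num_words: int, duration_s: float) -> float:
    """Speech rate as words per second over an utterance."""
    return num_words / duration_s

def pitch_variation(f0_hz: list[float]) -> float:
    """Pitch variation as the standard deviation of voiced F0 samples (Hz)."""
    return statistics.stdev(f0_hz)

# Hypothetical 4-second utterances: one imagined as human-directed,
# one as voice-assistant-directed (values invented for illustration).
human_directed = {"words": 14, "f0": [190, 220, 175, 240, 205, 160, 230]}
device_directed = {"words": 10, "f0": [200, 210, 195, 205, 198, 202, 207]}

# Device-directed speech in this toy example is slower and flatter,
# mirroring the pattern the study reports.
print(speech_rate(human_directed["words"], 4.0))
print(speech_rate(device_directed["words"], 4.0))
print(pitch_variation(human_directed["f0"]))
print(pitch_variation(device_directed["f0"]))
```

In practice, F0 contours would come from a pitch tracker and word timings from forced alignment, but the comparison across conditions reduces to statistics like these.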
AI Generated Robotic Content
