Speech AI Year in Review

Almost anywhere you looked, AI-based speech technologies continued to blossom in 2022, from increased interest measured in Google Trends, to surprising medical advances that suggest speech patterns can help detect some illnesses, to the variety of digital services and devices that users control with their voices. 

At Google Cloud, we spent 2022 making the best of Google’s speech AI and natural language technologies available to our customers, who are leveraging these technologies for use cases that range from robots that can help foster healthy childhood development to customer service improvements based on data from phone calls, voicemails, and other speech interactions.

We expect speech AI technologies and related advancements to significantly impact business and the world in the coming years, as Andrew Moore, Google Cloud’s General Manager for Cloud AI & Industry Solutions, has explored. To make sure you head into 2023 with all the latest news, below are some of our most noteworthy Speech AI announcements from the past year:

Visual interface for the Speech-to-Text (STT) API

In February, we announced a visual user interface for our STT API, which supports over 70 languages in 120 different local variants. The STT API lets developers convert speech into text by harnessing Google’s years of research in automatic speech recognition and transcription technology—and with the visual interface, the API is that much more intuitive, helping more developers tap this technology for their projects more easily. We celebrated the fifth anniversary of this API in April, noting that it processes over 1 billion minutes of speech each month, enough to transcribe all U.S. Presidential inauguration speeches in history over 1 million times.
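
For developers getting started, a minimal transcription request with the STT Python client library looks roughly like the sketch below. The Cloud Storage URI, encoding, and language code are placeholders to adapt to your own audio.

```python
# Minimal sketch: transcribing a short audio file with the Speech-to-Text API
# via the Python client library. The gs:// path below is a placeholder.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)
audio = speech.RecognitionAudio(uri="gs://your-bucket/your-audio.wav")  # placeholder

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Print the top transcription hypothesis for each recognized segment.
    print(result.alternatives[0].transcript)
```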

Support for custom voices in the Text-to-Speech (TTS) API

In March, we announced the general availability of Custom Voice in our TTS API, which lets customers create natural, human-like speech from text. Custom Voice lets customers train voice models with their own audio recordings, so they can offer users unique experiences. Customers simply submit audio recordings directly through the TTS API, and the process includes guidance to ensure high-quality models are created.
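
A synthesis request with a custom voice has the same shape as one with a default voice. The sketch below uses the TTS Python client; the custom voice model name is hypothetical, and referencing a trained model through `CustomVoiceParams` reflects our understanding of the v1 client rather than a guaranteed recipe.

```python
# Hedged sketch: synthesizing audio with the Text-to-Speech Python client.
# The model resource name below is hypothetical; selecting a trained custom
# voice via CustomVoiceParams is an assumption about the v1 client surface.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

synthesis_input = texttospeech.SynthesisInput(text="Welcome back!")
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    custom_voice=texttospeech.CustomVoiceParams(
        model="projects/your-project/locations/us-central1/models/your-custom-voice"  # hypothetical
    ),
)
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)
with open("output.mp3", "wb") as out:
    out.write(response.audio_content)
```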

Improved STT API models

In April, we launched our newest models for the STT API, based on a new approach that uses a single neural network (as opposed to separate acoustic, pronunciation, and language models) and combines a transformer model with convolution layers. The result is significantly improved accuracy across dozens of the languages and dialects that the STT API supports. In December, we added the latest models for more languages, including Bulgarian, Swedish, Romanian, Tamil, and Bengali, bringing the total number of languages covered by the latest models to over 45. See the full list here.
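
Switching to the newer models is a configuration change rather than a new API. A rough sketch is below; the `latest_long` model identifier and the Swedish language code are examples based on our reading of the STT documentation, so check the current model list for your project.

```python
# Sketch: requesting one of the newer "latest" models in a RecognitionConfig.
# The model identifier and language code are examples, not guaranteed values.
from google.cloud import speech

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="sv-SE",   # Swedish, one of the newly added languages
    model="latest_long",     # assumed identifier for the latest long-form model
)
```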

Large language models (LLMs) for the Natural Language (NL) API

In the fall, we updated the NL API with a new model for Content Classification based on Google’s groundbreaking research on LLMs, which includes projects like LaMDA, PaLM, and T5. Thanks to both the integration of cutting-edge language modeling approaches and an updated and expanded training data set, Content Classification now supports over 1,000 labels in 11 languages, including Chinese, Dutch, French, German, Italian, Japanese, Korean, Portuguese, Russian, and Spanish.
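
Content Classification is exposed through the NL API’s `classify_text` method. The sketch below uses the Python client; whether the newer LLM-based model is the default or must be requested explicitly may depend on your client version, so treat this as a minimal starting point.

```python
# Minimal sketch: classifying a snippet of text with the Natural Language API
# Python client. The sample text is arbitrary.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content="The team shipped a new speech recognition model this quarter.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.classify_text(request={"document": document})
for category in response.categories:
    # Each category comes back with a label name and a confidence score.
    print(f"{category.name}: {category.confidence:.2f}")
```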

Text-to-Speech Neural2

At Google Cloud Next ‘22, we announced the availability of our next generation of TTS voices, Neural2. These voices build on the PnG NAT technology Google created, which we use to power our Custom Voice offering. Neural2 voices bring the same improvements customers see from PnG NAT in Custom Voice to our default voices. In December, we made Neural2 generally available, and default voices are now available in English, French, Spanish, Italian, German, Portuguese, and Japanese. See the full list here.
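
Using a Neural2 default voice is just a matter of the voice name in the synthesis request. The sketch below assumes a voice name of the form `en-US-Neural2-*`; the specific name is an example, and the available names vary by language.

```python
# Sketch: synthesizing with a Neural2 default voice. "en-US-Neural2-C" is an
# example voice name; consult the published voice list for current names.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Hello from a Neural2 voice."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US", name="en-US-Neural2-C"
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)
with open("neural2_sample.mp3", "wb") as out:
    out.write(response.audio_content)
```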

Speech services even without a network connection via Speech On-Device

At Google Cloud Next ‘22, we made Speech On-Device generally available, eliminating the frustration of trying to access voice services without a network connection, such as when driving far from coverage or when network outages occur. Toyota is already making use of Speech On-Device, as Ryan Wheeler, Vice President of Machine Learning at Toyota Connected North America, discussed in a Google Cloud Next ‘22 session.

We look forward to continuing to bring Google’s most innovative and impactful research to our cloud services in 2023—but in the meantime, to learn more about using Google Cloud speech AI products, check out this guide, these codelabs, and our Responsible AI page.
