Almost anywhere you looked, AI-based speech technologies continued to blossom in 2022, from increased interest measured in Google Trends, to surprising medical advances that suggest speech patterns can help detect some illnesses, to the variety of digital services and devices that users control with their voices.
At Google Cloud, we spent 2022 making the best of Google’s speech AI and natural language technologies available to our customers, who are leveraging these technologies for use cases that range from robots that can help foster healthy childhood development, to customer service improvements based on data from phone calls, voicemails, and other speech interactions.
We expect speech AI technologies and related advancements to significantly impact business and the world in coming years, as Andrew Moore, Google Cloud’s General Manager for Cloud AI & Industry Solutions, has explored. To make sure you head into 2023 with all the latest news, below are some of our most noteworthy Speech AI announcements from the last year:
In February, we announced a visual user interface for our STT API, which supports over 70 languages in 120 different local variants. The STT API lets developers convert speech into text by harnessing Google’s years of research in automatic speech recognition and transcription technology. With the visual interface, the API is that much more intuitive, making it easier for developers to tap this technology for their projects. We celebrated the fifth anniversary of this API in April, noting that the API processes over 1 billion minutes of speech each month, enough to transcribe all U.S. Presidential inauguration speeches in history over 1 million times.
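For developers getting started, a transcription request with the Python client library can be as short as the sketch below. This is a minimal illustration, assuming the google-cloud-speech library is installed and authenticated; the Cloud Storage path and audio settings are hypothetical placeholders.

```python
# A minimal sketch of a Speech-to-Text transcription request, assuming the
# google-cloud-speech client library is installed and authenticated.
# gs://my-bucket/sample.wav is a hypothetical 16 kHz LINEAR16 recording.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/sample.wav")

# Synchronous recognition is suitable for short clips (roughly under a minute).
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```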
In March, we announced the general availability of Custom Voice in our TTS API, which lets customers create natural, human-like speech from text. Custom Voice lets customers train voice models with their own audio recordings, so they can offer users unique experiences. Customers simply submit audio recordings directly through the TTS API, which includes guidance to ensure high-quality models are created.
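As a rough illustration, a basic synthesis request with the Python client library looks like the following sketch, assuming the google-cloud-texttospeech library is installed. The voice shown here is a prebuilt voice; a trained Custom Voice model would instead be referenced through the voice selection once it has been created.

```python
# A minimal sketch of text-to-speech synthesis, assuming the
# google-cloud-texttospeech client library is installed and authenticated.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

synthesis_input = texttospeech.SynthesisInput(
    text="Hello from Google Cloud Text-to-Speech."
)
# A prebuilt voice is used here; a trained Custom Voice model would be
# referenced through the voice selection once it has been created.
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    name="en-US-Wavenet-D",
)
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)
with open("output.mp3", "wb") as out:
    out.write(response.audio_content)
```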
In April, we launched our newest models for the STT API, based on a new approach that uses a single neural network — as opposed to separate models for acoustic, pronunciation, and language training — and combines a transformer model with convolution layers. The result is significantly improved accuracy across dozens of the languages and dialects that the STT API supports. In December, we added the latest models for more languages, including Bulgarian, Swedish, Romanian, Tamil, and Bengali, bringing the total number of languages covered by the latest models to over 45. See the full list here.
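Requesting one of these newer models is a matter of setting the model field on the recognition config, as in the sketch below. The recording path is a hypothetical placeholder, and language availability for the latest models should be checked against the published list.

```python
# A minimal sketch of requesting a "latest" Speech-to-Text model by setting
# the model field; gs://my-bucket/interview.wav is a hypothetical recording.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    language_code="sv-SE",   # Swedish, one of the newly added languages
    model="latest_long",     # request the latest long-form model
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/interview.wav")

# Longer audio is transcribed asynchronously via a long-running operation.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=300)
for result in response.results:
    print(result.alternatives[0].transcript)
```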
In the fall, we updated the NL API with a new model for Content Classification based on Google’s groundbreaking research on large language models (LLMs), which includes projects like LaMDA, PaLM, and T5. Thanks to both the integration of cutting-edge language modeling approaches and an updated and expanded training data set, Content Classification supports over 1,000 labels and 11 languages: Chinese, Dutch, English, French, German, Italian, Japanese, Korean, Portuguese, Russian, and Spanish.
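As a sketch, classifying a piece of text with the Python client library looks like the following, assuming the google-cloud-language library is installed; which underlying classification model the call uses can depend on the client library version and request options.

```python
# A minimal sketch of Content Classification with the Natural Language API,
# assuming the google-cloud-language client library is installed.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content="Google Cloud offers speech recognition and text-to-speech services.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.classify_text(document=document)
for category in response.categories:
    print(f"{category.name}: {category.confidence:.2f}")
```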
At Google Cloud Next ‘22, we announced the availability of our next generation of TTS voices, Neural2. These voices build on PnG NAT, a technology created by Google that we use to power our Custom Voice offering, so Neural2 brings the same quality improvements customers see from PnG NAT in Custom Voice to our default voices. In December, we made Neural2 generally available and now have default voices available in English, French, Spanish, Italian, German, Portuguese, and Japanese. See the full list here.
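Switching an application to a Neural2 voice only changes the voice selection relative to the TTS sketch above. The voice name used here is one of the published Neural2 names, but it should be verified against the current voice list.

```python
# A minimal sketch of selecting a Neural2 default voice; "en-US-Neural2-A"
# should be checked against the current voice list in the documentation.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="This voice is powered by Neural2."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Neural2-A",
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)
with open("neural2_sample.mp3", "wb") as out:
    out.write(response.audio_content)
```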
At Google Cloud Next ‘22, we made Speech On-Device generally available, eliminating the frustration of trying to access voice services without a network connection, such as when driving far from coverage or when network outages occur. Toyota is already making use of Speech On-Device as Ryan Wheeler — Vice President, Machine Learning at Toyota Connected North America — discussed in a Google Cloud Next ‘22 session.
We look forward to continuing to bring Google’s most innovative and impactful research to our cloud services in 2023—but in the meantime, to learn more about using Google Cloud speech AI products, check out this guide, these codelabs, and our Responsible AI page.