We’ve trained and are open-sourcing a neural net called Whisper that approaches human-level robustness and accuracy on English speech recognition.
Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. We show that the use of such a large and diverse dataset leads to improved robustness to accents, background noise and technical language. Moreover, it enables transcription in multiple languages, as well as translation from those languages into English. We are open-sourcing models and inference code to serve as a foundation for building useful applications and for further research on robust speech processing.
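As a quick illustration, here is a minimal sketch of transcription with the released inference code, published as the `openai-whisper` Python package; the model size and file name below are placeholder choices, not prescribed by the post.

```python
import whisper  # pip install openai-whisper

# "base" is one of several released model sizes; larger ones trade speed for accuracy.
model = whisper.load_model("base")

# transcribe() handles chunking, language detection, and decoding internally.
result = model.transcribe("audio.mp3")  # placeholder file name
print(result["text"])
```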
The Whisper architecture is a simple end-to-end approach, implemented as an encoder-decoder Transformer. Input audio is split into 30-second chunks, converted into a log-Mel spectrogram, and then passed into an encoder. A decoder is trained to predict the corresponding text caption, intermixed with special tokens that direct the single model to perform tasks such as language identification, phrase-level timestamps, multilingual speech transcription, and to-English speech translation.
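To make those steps concrete, the released package also exposes a lower-level API that mirrors this pipeline. The sketch below follows the published inference code; the audio path is a placeholder.

```python
import whisper

model = whisper.load_model("base")

# Load the input and pad/trim it to the 30-second window the encoder expects.
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# Convert the chunk into a log-Mel spectrogram for the encoder.
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# Language identification, driven by the decoder's special tokens.
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# Decode the text caption; the default options request transcription.
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)
print(result.text)
```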
Other existing approaches frequently use smaller, more closely paired audio-text training datasets, or use broad but unsupervised audio pretraining. Because Whisper was trained on a large and diverse dataset and was not fine-tuned to any specific one, it does not beat models that specialize in LibriSpeech performance, a famously competitive benchmark in speech recognition. However, when we measure Whisper’s zero-shot performance across many diverse datasets we find it is much more robust and makes 50% fewer errors than those models.
About a third of Whisper’s audio dataset is non-English, and it is alternately given the task of transcribing in the original language or translating to English. We find this approach is particularly effective at learning speech-to-text translation and, in the zero-shot setting, outperforms the supervised SOTA on CoVoST2 to-English translation.
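As a sketch of how the translation task is selected at inference time, the same `transcribe()` call accepts a task option; the model size and file name here are again placeholders.

```python
import whisper

model = whisper.load_model("medium")

# task="translate" asks the single model to emit English text directly,
# instead of transcribing in the source language (the default, task="transcribe").
result = model.transcribe("french_speech.mp3", task="translate")
print(result["text"])  # English translation of the spoken audio
```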
We hope Whisper’s high accuracy and ease of use will allow developers to add voice interfaces to a much wider set of applications. Check out the paper, model card, and code to learn more details and to try out Whisper.