
Neural Transducer Training: Reduced Memory Consumption with Sample-wise Computation

The neural transducer is an end-to-end model for automatic speech recognition (ASR). While the model is well-suited for streaming ASR, the training process remains challenging. During training, the memory requirements may quickly exceed the capacity of state-of-the-art GPUs, limiting batch size and sequence lengths. In this work, we analyze the time and space complexity of a typical transducer training setup. We propose a memory-efficient training method that computes the transducer loss and gradients sample by sample. We present optimizations to increase the efficiency and parallelism of the…
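To make the sample-by-sample idea concrete, here is a minimal NumPy sketch of the standard transducer (RNN-T) forward algorithm computed for one sample at a time, with the batch loss accumulated in a loop. The function names and the assumption that per-sample log-probabilities over the `(T, U+1, V)` output lattice are already available are illustrative choices, not the paper's implementation; the paper's method additionally streams gradients and applies parallelism optimizations not shown here.

```python
import numpy as np

def logsumexp2(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == -np.inf and b == -np.inf:
        return -np.inf
    m = max(a, b)
    return m + np.log(np.exp(a - m) + np.exp(b - m))

def transducer_nll(log_probs, target, blank=0):
    """Negative log-likelihood of one sample via the RNN-T forward recursion.

    log_probs: array of shape (T, U+1, V) with joint-network log-probabilities
               at each point of the output lattice (hypothetical input format).
    target:    label sequence of length U.
    """
    T, U_plus_1, V = log_probs.shape
    U = len(target)
    assert U_plus_1 == U + 1

    # alpha[t, u] = log-probability of emitting the first u labels
    # while consuming the first t+1 encoder frames.
    alpha = np.full((T, U + 1), -np.inf)
    alpha[0, 0] = 0.0
    for t in range(T):
        for u in range(U + 1):
            if t > 0:  # blank transition from the previous frame
                alpha[t, u] = logsumexp2(
                    alpha[t, u], alpha[t - 1, u] + log_probs[t - 1, u, blank])
            if u > 0:  # label transition within the same frame
                alpha[t, u] = logsumexp2(
                    alpha[t, u], alpha[t, u - 1] + log_probs[t, u - 1, target[u - 1]])
    # Terminate with a final blank emission.
    return -(alpha[T - 1, U] + log_probs[T - 1, U, blank])

def batch_loss_samplewise(log_probs_list, targets, blank=0):
    """Accumulate the batch loss one sample at a time, so only a single
    (T, U+1, V) lattice is live in memory at any moment, instead of a
    padded (B, T_max, U_max+1, V) tensor for the whole batch."""
    total = 0.0
    for lp, y in zip(log_probs_list, targets):
        total += transducer_nll(lp, y, blank)
    return total / len(targets)
```

For example, with `T = 2`, one target label, and uniform log-probabilities `log(0.5)` over a two-symbol vocabulary, `transducer_nll` returns `2 * log(2)`, i.e. the total probability of the two valid alignment paths, `0.25 + 0.25 = 0.5`.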
