
Neural Transducer Training: Reduced Memory Consumption with Sample-wise Computation

The neural transducer is an end-to-end model for automatic speech recognition (ASR). While the model is well-suited for streaming ASR, the training process remains challenging. During training, the memory requirements may quickly exceed the capacity of state-of-the-art GPUs, limiting batch size and sequence lengths. In this work, we analyze the time and space complexity of a typical transducer training setup. We propose a memory-efficient training method that computes the transducer loss and gradients sample by sample. We present optimizations to increase the efficiency and parallelism of the…
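To make the sample-wise idea concrete, here is a minimal sketch, assuming a PyTorch setup with torchaudio's rnnt_loss. The names joiner, enc_out, and pred_out are hypothetical placeholders, and the joiner is assumed to broadcast one sample's encoder and predictor outputs into a (1, t, u+1, V) logit tensor. This illustrates the general memory-saving pattern, not the paper's exact algorithm: the loss and its gradient are computed one sample at a time, gradients are accumulated at the detached encoder and predictor outputs, and a single backward pass then runs through both networks.

```python
import torch
import torchaudio.functional as TAF

def samplewise_rnnt_loss(joiner, enc_out, pred_out, targets,
                         enc_lens, target_lens, blank_id):
    """Transducer loss and gradients computed sample by sample.

    enc_out: (B, T, H) encoder output; pred_out: (B, U+1, H) predictor
    output; targets: (B, U) label ids. `joiner` is assumed to map one
    sample's (1, t, H) and (1, u+1, H) inputs to (1, t, u+1, V) logits.
    """
    B = enc_out.size(0)
    # Detach so each per-sample backward stops at the encoder/predictor
    # outputs; their gradients accumulate here across loop iterations.
    enc_d = enc_out.detach().requires_grad_(True)
    pred_d = pred_out.detach().requires_grad_(True)
    total = 0.0
    for b in range(B):
        t, u = int(enc_lens[b]), int(target_lens[b])
        # Joint network on one sample's valid time/label range only, so the
        # peak joint tensor is (1, t, u+1, V) instead of (B, T, U+1, V).
        logits = joiner(enc_d[b:b + 1, :t], pred_d[b:b + 1, :u + 1])
        loss = TAF.rnnt_loss(
            logits,
            targets[b:b + 1, :u].int(),
            enc_lens[b:b + 1].int(),
            target_lens[b:b + 1].int(),
            blank=blank_id,
        ) / B
        loss.backward()          # frees this sample's joiner graph
        total += loss.item()
    # Single backward pass through the encoder and predictor networks,
    # driven by the accumulated output gradients.
    torch.autograd.backward([enc_out, pred_out], [enc_d.grad, pred_d.grad])
    return total
```

In this sketch, the peak memory of the joint tensor drops from O(B·T·U·V) to O(t·u·V) per sample, at the cost of a Python-level loop over the batch; the efficiency and parallelism optimizations the abstract mentions presumably target exactly that serialization overhead.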