
How to Scale Your EMA

*=Equal Contributors
Preserving training dynamics across batch sizes is an important tool for practical machine learning, as it enables trading off batch size against wall-clock time. This trade-off is typically enabled by a scaling rule; for example, in stochastic gradient descent, one should scale the learning rate linearly with the batch size. Another important machine learning tool is the model EMA, a functional copy of a target model whose parameters move toward those of the target according to an Exponential Moving Average (EMA) at a rate parameterized by a momentum…
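The two ingredients named in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's method: `scale_lr_linearly` implements the standard linear learning-rate scaling rule for SGD, and `ema_update` is the generic EMA parameter update; the function names and example values are illustrative assumptions.

```python
def scale_lr_linearly(base_lr, base_batch, new_batch):
    """Linear scaling rule for SGD: learning rate grows with batch size."""
    return base_lr * (new_batch / base_batch)

def ema_update(ema_params, model_params, momentum):
    """One EMA step: move EMA parameters toward the target model's
    parameters at a rate set by the momentum."""
    return [momentum * e + (1.0 - momentum) * p
            for e, p in zip(ema_params, model_params)]

# Example: scale the batch size 4x, so the learning rate scales 4x.
lr = scale_lr_linearly(0.1, base_batch=256, new_batch=1024)

# Example: with momentum 0.9, the EMA moves 10% of the way
# toward the target model's parameters each step.
ema = ema_update([0.0], [1.0], momentum=0.9)
```

With momentum close to 1, the EMA changes slowly per step, which is why changing the batch size (and hence the number of steps per epoch) alters EMA dynamics unless the momentum is also adjusted.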