Categories: FAANG

Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling

Large language models are trained on massive scrapes of the web, which are often unstructured, noisy, and poorly phrased. Current scaling laws show that learning from such data requires an abundance of both compute and data, which grows with the size of the model being trained. This is infeasible both because of the large compute costs and duration associated with pre-training, and the impending scarcity of high-quality data on the web. In this work, we propose Web Rephrase Augmented Pre-training (WRAP) that uses an off-the-shelf instruction-tuned model prompted to paraphrase documents on the…
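The core step described above is prompting an off-the-shelf instruction-tuned model to paraphrase raw web documents before pre-training. Below is a minimal sketch of what that rephrasing step might look like, assuming a Hugging Face instruction-tuned model; the model choice, prompt wording, and generation settings are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of a WRAP-style document rephrasing step (illustrative, not the paper's code).
from transformers import pipeline

# Assumed off-the-shelf instruction-tuned model; any capable instruct model could be used.
rephraser = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",
)

# Hypothetical paraphrasing prompt; the actual prompt wording is an assumption.
PROMPT = (
    "Paraphrase the following web document in clear, high-quality English, "
    "preserving all factual content:\n\n{doc}\n\nParaphrase:"
)

def rephrase(doc: str, max_new_tokens: int = 512) -> str:
    """Return a paraphrased version of a single web document."""
    out = rephraser(
        PROMPT.format(doc=doc),
        max_new_tokens=max_new_tokens,
        do_sample=False,
        return_full_text=False,
    )
    return out[0]["generated_text"].strip()

if __name__ == "__main__":
    noisy_doc = "u can get cheap flights if u book early lol, srsly check tuesdays"
    print(rephrase(noisy_doc))
```

In a WRAP-style pipeline, the synthetic paraphrases produced this way are mixed with the original web text for pre-training rather than replacing it outright.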