
The Super Weight in Large Language Models

Recent works have shown a surprising result: a small fraction of Large Language Model (LLM) parameter outliers are disproportionately important to the quality of the model. LLMs contain billions of parameters, so these small fractions, such as 0.01%, translate to hundreds of thousands of parameters. In this work, we present an even more surprising finding: Pruning as few as a single parameter can destroy an LLM’s ability to generate text — increasing perplexity by 3 orders of magnitude and reducing zero-shot accuracy to guessing. We propose a data-free method for identifying such parameters…
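The abstract only gestures at how such a "super weight" could be found without data. As a concrete illustration, the sketch below shows one plausible single-forward-pass approach: hook each decoder layer's MLP down-projection and look for an extreme, co-occurring spike in its input and output activations, whose channel indices give candidate weight coordinates. This is a minimal sketch, not the paper's reference implementation; the model name, the `mlp.down_proj` attribute path (LLaMA-style Hugging Face models), the spike threshold, and the prompt are all illustrative assumptions.

```python
# Sketch: locate candidate "super weight" coordinates in one forward pass by
# watching for extreme activation spikes at each MLP down-projection.
# Assumptions: a LLaMA-style Hugging Face model exposing model.model.layers[i].mlp.down_proj;
# the threshold (50) and prompt are placeholders, not values from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical choice of model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

candidates = []

def make_hook(layer_idx):
    def hook(module, inputs, output):
        x = inputs[0]  # down_proj input:  (batch, seq, intermediate_size)
        y = output     # down_proj output: (batch, seq, hidden_size)
        # A single weight W[row, col] can only produce a huge output value if
        # input channel `col` and output channel `row` spike at the same token.
        in_mag, col = x.abs().max(dim=-1)   # strongest input channel per token
        out_mag, row = y.abs().max(dim=-1)  # strongest output channel per token
        if in_mag.max() > 50 and out_mag.max() > 50:  # illustrative threshold
            t = out_mag.argmax()  # token position of the output spike
            candidates.append((layer_idx,
                               row.flatten()[t].item(),
                               col.flatten()[t].item()))
    return hook

handles = [layer.mlp.down_proj.register_forward_hook(make_hook(i))
           for i, layer in enumerate(model.model.layers)]

with torch.no_grad():
    model(**tok("The quick brown fox", return_tensors="pt"))

for h in handles:
    h.remove()
print("candidate (layer, out_row, in_col) coordinates:", candidates)
```

Hooking the down-projection keeps the whole search to a single data-free forward pass. To check a candidate against the headline claim, one could then zero that single entry, e.g. `model.model.layers[l].mlp.down_proj.weight.data[row, col] = 0`, and re-measure perplexity and zero-shot accuracy.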
