
The “Super Weight”: How Even a Single Parameter Can Determine a Large Language Model’s Behavior

A recent paper from Apple researchers, “The Super Weight in Large Language Models,” reveals that an extremely small subset of an LLM’s parameters (in some cases, a single parameter) can exert a disproportionate influence on the model’s overall behavior (see Figure 1). The work highlights the critical role of these “super weights” and their corresponding “super activations,” offering new insight into LLM architecture and new avenues for efficient model compression. The paper provides full technical details and experimental results; in this post, we provide a high-level overview of the key…
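The core claim can be illustrated with a toy sketch (this is not the paper’s actual detection procedure, which operates on real LLM activations; the layer size, weight values, and planted outlier below are all hypothetical): when a weight matrix is mostly small values with one large outlier, zeroing just that one entry changes the layer’s output far more than zeroing any typical entry.

```python
import numpy as np

# Toy illustration (hypothetical values, not the paper's method):
# a linear layer with mostly small weights plus one planted outlier.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.02, size=(8, 8))  # typical small weights
W[3, 5] = 4.0                           # one planted "super weight"
x = np.ones(8)                          # fixed input for reproducibility

def ablate(W, i, j):
    """Return a copy of W with the single weight (i, j) zeroed."""
    W2 = W.copy()
    W2[i, j] = 0.0
    return W2

y = W @ x
# Relative output change from removing one weight out of 64.
super_change = np.linalg.norm(y - ablate(W, 3, 5) @ x) / np.linalg.norm(y)
normal_change = np.linalg.norm(y - ablate(W, 0, 0) @ x) / np.linalg.norm(y)

print(f"zeroing the planted super weight: {super_change:.2f} relative change")
print(f"zeroing a typical weight:         {normal_change:.4f} relative change")
```

In this toy setup the outlier dominates the output norm, so ablating it causes a near-total change while ablating an ordinary weight is barely measurable, which is the intuition behind why a single parameter can matter so much.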
AI Generated Robotic Content
