
The Super Weight in Large Language Models

Recent works have shown a surprising result: a small fraction of Large Language Model (LLM) parameter outliers is disproportionately important to model quality. LLMs contain billions of parameters, so even a fraction as small as 0.01% translates to hundreds of thousands of parameters. In this work, we present an even more surprising finding: pruning as few as a single parameter can destroy an LLM’s ability to generate text, increasing perplexity by three orders of magnitude and reducing zero-shot accuracy to guessing. We propose a data-free method for identifying such parameters…
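The abstract is truncated, but the finding suggests a simple experiment: scan the model for the largest activation spike flowing through the MLP down-projections, zero the single weight connecting the spiking input channel to the spiking output channel, and watch generation quality collapse. Below is a minimal sketch of that idea in PyTorch with Hugging Face transformers. The model name, the probe prompt, the spike threshold logic, and the focus on layers named `mlp.down_proj` are illustrative assumptions for a Llama-style architecture, not necessarily the paper's exact procedure.

```python
# Illustrative sketch (not the paper's confirmed method): a data-free scan for
# a "super weight" candidate via activation spikes at the MLP down-projections.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumption: any HF causal LM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

spikes = {}  # layer name -> (in peak, in channel, out peak, out channel)

def make_hook(name):
    def hook(module, inputs, output):
        x = inputs[0].detach().float()   # (batch, seq, d_in)
        y = output.detach().float()      # (batch, seq, d_out)
        xin = x.abs().amax(dim=(0, 1))   # per-input-channel peak magnitude
        yout = y.abs().amax(dim=(0, 1))  # per-output-channel peak magnitude
        spikes[name] = (xin.max().item(), int(xin.argmax()),
                        yout.max().item(), int(yout.argmax()))
    return hook

# Hook every down-projection; the name suffix assumes a Llama-style layout.
handles = [m.register_forward_hook(make_hook(n))
           for n, m in model.named_modules() if n.endswith("mlp.down_proj")]

with torch.no_grad():
    ids = tok("The quick brown fox jumps over the lazy dog.",
              return_tensors="pt").input_ids
    model(ids)  # a single forward pass on one prompt: data-free in spirit
for h in handles:
    h.remove()

# Candidate = the layer with the largest output spike; the spiking output and
# input channels index one scalar in its weight matrix, W[row, col].
name, (_, col, peak, row) = max(spikes.items(), key=lambda kv: kv[1][2])
print(f"candidate super weight: {name}.weight[{row}, {col}] (peak {peak:.1f})")

# Zeroing that one scalar should, per the abstract, raise perplexity by
# roughly three orders of magnitude and drop zero-shot accuracy to guessing.
with torch.no_grad():
    model.get_submodule(name).weight[row, col] = 0.0
```

From there, running the same perplexity or zero-shot evaluation before and after the single-weight ablation would reproduce the kind of collapse the abstract describes.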