The Super Weight in Large Language Models

Recent work has shown a surprising result: a small fraction of outlier parameters in Large Language Models (LLMs) is disproportionately important to model quality. LLMs contain billions of parameters, so even a tiny fraction such as 0.01% still amounts to hundreds of thousands of parameters. In this work, we present an even more surprising finding: pruning as little as a single parameter can destroy an LLM's ability to generate text, increasing perplexity by three orders of magnitude and reducing zero-shot accuracy to chance-level guessing. We propose a data-free method for identifying such parameters…
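To make the single-parameter pruning claim concrete, here is a minimal sketch of how one might zero out one scalar weight and compare perplexity before and after. The model name, layer index, and weight coordinates below are illustrative placeholders, not the paper's identified super-weight locations; a small model like GPT-2 is used only to show the measurement procedure.

```python
# Minimal sketch: prune exactly one scalar parameter and measure the
# change in perplexity. Layer and coordinates are hypothetical, chosen
# only to illustrate the procedure described in the abstract.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any small causal LM works for the demo
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "The quick brown fox jumps over the lazy dog."
ids = tok(text, return_tensors="pt").input_ids

def perplexity(m):
    # Perplexity = exp(mean cross-entropy loss) on the given text.
    with torch.no_grad():
        loss = m(ids, labels=ids).loss
    return torch.exp(loss).item()

print("baseline ppl:", perplexity(model))

# Zero out a single weight (layer index and coordinates are illustrative).
w = model.transformer.h[2].mlp.c_proj.weight  # hypothetical target layer
row, col = 0, 0                               # hypothetical coordinates
with torch.no_grad():
    w[row, col] = 0.0

print("pruned ppl:  ", perplexity(model))
```

For an arbitrary coordinate, the effect is typically negligible; the paper's point is that specific "super weights" exist whose removal alone produces the catastrophic degradation described above.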