Combining Compressions for Multiplicative Size Scaling on Natural Language Tasks

Quantization, knowledge distillation, and magnitude pruning are among the most popular methods for neural network compression in NLP. Independently, these methods reduce model size and can accelerate inference, but their relative benefit and combinatorial interactions have not been rigorously studied. For each of the eight possible subsets of these techniques, we compare accuracy vs. model size tradeoffs across six BERT architecture sizes and eight GLUE tasks. We find that quantization and distillation consistently provide greater benefit than pruning. Surprisingly, except for the pair of…
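
As a rough illustration of how two of these techniques can be stacked (a minimal sketch, not the paper's actual pipeline), the example below applies magnitude pruning followed by post-training dynamic quantization to a placeholder PyTorch classifier. The toy model, the 50% sparsity level, and the int8 dtype are assumptions chosen for the example.

# Minimal sketch: stacking magnitude pruning with post-training dynamic
# quantization in PyTorch. The model and the 50% sparsity level are
# arbitrary placeholders, not the paper's experimental setup.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Placeholder classifier head; a real experiment would load a pretrained
# BERT-style transformer instead.
model = nn.Sequential(
    nn.Linear(768, 768),
    nn.ReLU(),
    nn.Linear(768, 2),
)

# 1) Magnitude pruning: zero out the 50% smallest-magnitude weights in each
#    Linear layer, then make the pruning permanent.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")

# 2) Post-training dynamic quantization: replace Linear layers with int8
#    dynamically quantized equivalents for inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The pruned + quantized model can then be evaluated on a downstream task
# to measure the accuracy vs. model size tradeoff.
with torch.no_grad():
    logits = quantized(torch.randn(1, 768))
print(logits.shape)  # torch.Size([1, 2])
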