ExpertLens: Activation Steering Features Are Highly Interpretable

This paper was accepted at the Workshop on Unifying Representations in Neural Models (UniReps) at NeurIPS 2025.
Activation steering methods in large language models (LLMs) have emerged as an effective way to perform targeted updates to enhance generated language without requiring large amounts of adaptation data. We ask whether the features discovered by activation steering methods are interpretable. We identify neurons responsible for specific concepts (e.g., “cat”) using the “finding experts” method from research on activation steering and show that the ExpertLens, i.e., inspection of these…