
Interpreting CLIP: Insights on the Robustness to ImageNet Distribution Shifts

What distinguishes robust models from non-robust ones? For ImageNet distribution shifts, differences in robustness have been shown to trace back predominantly to differences in training data, but it is not yet known what this translates to in terms of what the model has learned. In this work, we bridge this gap by probing the representation spaces of 16 robust zero-shot CLIP vision encoders with various backbones (ResNets and ViTs) and pretraining sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M, and DataComp), and comparing them to the representation spaces of less…
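To make the probing setup concrete: one common way to compare representation spaces across vision encoders is linear Centered Kernel Alignment (CKA). The sketch below is a minimal illustration, not the paper's exact protocol; it assumes the open_clip library, and the specific pretraining tags (laion400m_e32, laion2b_s34b_b79k) are published open_clip weights chosen here for illustration.

```python
import torch
import open_clip

# Load two CLIP vision encoders that share a backbone (ViT-B/32)
# but differ in pretraining data; tags follow open_clip's naming.
model_a, _, preprocess_a = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion400m_e32")
model_b, _, preprocess_b = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
model_a.eval()
model_b.eval()

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> float:
    """Linear CKA between two (n_samples, n_features) representation
    matrices: ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F) after centering."""
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    cross = (x.T @ y).norm() ** 2
    return (cross / ((x.T @ x).norm() * (y.T @ y).norm())).item()

@torch.no_grad()
def embed(model, preprocess, images):
    """Encode a list of PIL images into L2-normalized CLIP embeddings."""
    batch = torch.stack([preprocess(img) for img in images])
    feats = model.encode_image(batch)
    return feats / feats.norm(dim=-1, keepdim=True)

# Usage sketch, given `images`, a list of PIL images from a probe set:
# sim = linear_cka(embed(model_a, preprocess_a, images),
#                  embed(model_b, preprocess_b, images))
# print(f"CKA similarity between representation spaces: {sim:.3f}")
```

A high CKA score indicates the two encoders organize the probe images similarly despite different pretraining data; comparing such scores between robust and less robust models is one way to localize where their learned representations diverge.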
