
Interpreting CLIP: Insights on the Robustness to ImageNet Distribution Shifts

What distinguishes robust models from non-robust ones? While it has been shown that, for ImageNet distribution shifts, differences in robustness can be traced back predominantly to differences in training data, it is not yet known how this translates into what the models have actually learned. In this work, we bridge this gap by probing the representation spaces of 16 robust zero-shot CLIP vision encoders with various backbones (ResNets and ViTs) and pretraining sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M, and DataComp), and comparing them to the representation spaces of less…
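The probing setup described above rests on zero-shot CLIP classification: the vision encoder maps an image into a shared image-text embedding space, and the prediction is the class whose text prompt embedding has the highest cosine similarity. Below is a minimal sketch of how such image representations can be extracted and used for zero-shot prediction with the open_clip library; the backbone name, pretrained tag, image path, and class list are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: zero-shot CLIP classification with open_clip.
# Assumptions (not from the paper): ViT-B-32 backbone, a LAION-2B checkpoint tag,
# a local image file, and a toy class list.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"  # LAION-2B is one of the pretraining sets mentioned above
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

# Encode an image into the shared embedding space (the representation being probed).
image = preprocess(Image.open("example.jpg")).unsqueeze(0)   # illustrative path
class_names = ["dog", "cat", "car"]                          # illustrative classes
text = tokenizer([f"a photo of a {c}" for c in class_names])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Cosine similarity between image and prompt embeddings yields zero-shot logits.
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(class_names, probs.squeeze(0).tolist())))
```

Swapping the backbone and pretrained tag (e.g. a ResNet trained on a different pretraining set) changes only the two arguments to create_model_and_transforms, which is what makes comparing representation spaces across the 16 encoders straightforward in practice.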
