
Interpreting CLIP: Insights on the Robustness to ImageNet Distribution Shifts

What distinguishes robust models from non-robust ones? For ImageNet distribution shifts, differences in robustness have been traced back predominantly to differences in training data; however, it is not yet known how those data differences translate into what the model actually learns. In this work, we bridge this gap by probing the representation spaces of 16 robust zero-shot CLIP vision encoders with various backbones (ResNets and ViTs) and pretraining sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M, and DataComp), and comparing them to the representation spaces of less…
