Interpreting CLIP: Insights on the Robustness to ImageNet Distribution Shifts

What distinguishes robust models from non-robust ones? For ImageNet distribution shifts, it has been shown that differences in robustness can be traced back predominantly to differences in training data; however, it is not yet known what those differences translate to in terms of what the model has learned. In this work, we bridge this gap by probing the representation spaces of 16 robust zero-shot CLIP vision encoders with various backbones (ResNets and ViTs) and pretraining sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M, and DataComp), and comparing them to the representation spaces of less…
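The probing methodology described above starts from image embeddings produced by pretrained CLIP vision encoders. As a minimal sketch of that first step (not the authors' actual probing code), the snippet below loads one backbone/pretraining-set combination via the open_clip library and extracts a normalized image representation; the model tag and image path are illustrative assumptions.

```python
# Minimal sketch: extracting a representation from a zero-shot CLIP
# vision encoder. Assumes `pip install open_clip_torch` and a local
# image "example.jpg" (both illustrative, not from the paper).
import torch
import open_clip
from PIL import Image

# 'ViT-B-32' pretrained on LAION-2B is one of many backbone /
# pretraining-set combinations of the kind the paper compares.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
with torch.no_grad():
    # encode_image returns the vision encoder's output embedding,
    # i.e., a point in the representation space being probed.
    features = model.encode_image(image)
    features = features / features.norm(dim=-1, keepdim=True)

print(features.shape)  # e.g. torch.Size([1, 512]) for ViT-B-32
```

Repeating this extraction across encoders with different backbones and pretraining sets yields the per-model representation spaces that can then be probed and compared.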