
Interpreting CLIP: Insights on the Robustness to ImageNet Distribution Shifts

What distinguishes robust models from non-robust ones? While it has been shown that differences in robustness to ImageNet distribution shifts can be traced back predominantly to differences in training data, it is not yet known what this translates to in terms of what the model has learned. In this work, we bridge this gap by probing the representation spaces of 16 robust zero-shot CLIP vision encoders with various backbones (ResNets and ViTs) and pretraining sets (OpenAI, LAION-400M, LAION-2B, YFCC15M, CC12M, and DataComp), and comparing them to the representation spaces of less…
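To make the probing setup concrete, here is a minimal sketch (not the authors' code) of how representations can be extracted from a zero-shot CLIP vision encoder using the open_clip library. The model name and pretraining tag are illustrative picks from the pretraining sets mentioned above.

```python
# Minimal sketch: extracting image representations from a zero-shot CLIP
# vision encoder for representation-space probing. Assumes the open_clip
# and Pillow packages; model/pretraining tags are illustrative.
import torch
import open_clip
from PIL import Image

# Load a ViT-B/32 encoder pretrained on LAION-2B; other backbones
# (e.g., ResNets) and pretraining sets swap in via these two arguments.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()

@torch.no_grad()
def embed_images(paths):
    """Return L2-normalized image embeddings, one row per input image."""
    batch = torch.stack(
        [preprocess(Image.open(p).convert("RGB")) for p in paths]
    )
    feats = model.encode_image(batch)
    return feats / feats.norm(dim=-1, keepdim=True)

# Embeddings collected this way, for ImageNet and its distribution-shifted
# variants, form the representation spaces such probing studies compare.
# reps = embed_images(["img1.jpg", "img2.jpg"])
```

Comparing matrices of such embeddings across encoders (for example, via similarity analyses between models trained on different datasets) is one standard way to contrast the representation spaces of robust and less robust models.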
