Interpreting CLIP: Insights on the Robustness to ImageNet Distribution Shifts
What distinguishes robust models from non-robust ones? While it has been shown that, for ImageNet distribution shifts, differences in robustness can be traced back predominantly to differences in training data, it is so far unknown what this corresponds to in terms of what the model has learned. In this work, we bridge this …