
Proxy-FDA: Proxy-Based Feature Distribution Alignment for Fine-Tuning Vision Foundation Models Without Forgetting

Vision foundation models pre-trained on massive data encode rich representations of real-world concepts, which can be adapted to downstream tasks by fine-tuning. However, fine-tuning a foundation model on one task often causes concept forgetting on other tasks. Recent robust fine-tuning methods aim to mitigate this forgetting of prior knowledge without hurting fine-tuning performance. Knowledge is often preserved by matching the original and fine-tuned model weights or feature pairs. However, such point-wise matching can be too strong, without explicit awareness of the…
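To make the contrast concrete, here is a minimal sketch of the point-wise feature matching baseline the abstract argues against: fine-tune on the downstream task while penalizing the per-sample L2 distance between features of a frozen copy of the pre-trained backbone and the fine-tuned backbone. This is not the Proxy-FDA method itself; the names (backbone, head, lambda_reg, loader) and the specific loss are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def fine_tune_with_feature_matching(backbone, head, loader, lambda_reg=1.0,
                                    epochs=1, lr=1e-4, device="cuda"):
    # Frozen copy of the pre-trained backbone serves as the reference model.
    frozen = copy.deepcopy(backbone).to(device).eval()
    for p in frozen.parameters():
        p.requires_grad_(False)

    backbone.to(device).train()
    head.to(device).train()
    opt = torch.optim.AdamW(list(backbone.parameters()) + list(head.parameters()), lr=lr)

    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            feats = backbone(images)                 # fine-tuned features
            with torch.no_grad():
                ref_feats = frozen(images)           # original (pre-trained) features
            task_loss = F.cross_entropy(head(feats), labels)
            # Point-wise matching: each fine-tuned feature is pulled toward its
            # original counterpart, with no awareness of the overall feature distribution.
            match_loss = F.mse_loss(feats, ref_feats)
            loss = task_loss + lambda_reg * match_loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return backbone, head
```

Distribution-level alignment methods such as the one described here replace this rigid per-pair penalty with a looser constraint on the feature distribution as a whole, which is what the abstract motivates.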