4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities

Current multimodal and multitask foundation models like 4M or UnifiedIO show promising results, but in practice their out-of-the-box abilities to accept diverse inputs and perform diverse tasks are limited by the (usually rather small) number of modalities and tasks they are trained on. In this paper, we significantly expand upon the capabilities of 4M by training it on tens of highly diverse modalities and by performing co-training on large-scale multimodal datasets and text corpora. This includes training on several semantic and geometric modalities, feature maps from…
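The core idea behind this any-to-any capability is that every modality is first converted into discrete tokens and a single transformer is then trained with masked modeling over the mixed token streams, so any subset of modalities can condition the prediction of any other. The sketch below illustrates that scheme in simplified form; the vocabulary size, model dimensions, toy tokenization, and the use of index 0 as a mask token are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of 4M-style any-to-any masked modeling over tokenized
# modalities. Sizes, names, and the toy "tokenizer" are assumptions.
import torch
import torch.nn as nn

VOCAB_SIZE = 1024                          # shared discrete-token vocabulary (assumption)
D_MODEL = 256
MODALITIES = ["rgb", "depth", "caption"]   # stand-ins for the paper's tens of modalities


class AnyToAnyMaskedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.token_emb = nn.Embedding(VOCAB_SIZE, D_MODEL)
        # Each modality gets a learned embedding so the model knows
        # which stream a given token came from.
        self.modality_emb = nn.Embedding(len(MODALITIES), D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, tokens, modality_ids):
        x = self.token_emb(tokens) + self.modality_emb(modality_ids)
        return self.head(self.encoder(x))


# Toy batch: every modality is assumed to be already tokenized into
# discrete indices (e.g., by a per-modality VQ tokenizer).
B, T = 2, 8
tokens = torch.randint(1, VOCAB_SIZE, (B, T * len(MODALITIES)))
modality_ids = torch.arange(len(MODALITIES)).repeat_interleave(T).expand(B, -1)

# Mask a random subset of tokens across all modalities; training the model
# to reconstruct them is what lets any subset of modalities be used to
# predict any other at inference time.
mask = torch.rand(tokens.shape) < 0.5
inputs = tokens.masked_fill(mask, 0)       # index 0 reserved as [MASK] (assumption)

model = AnyToAnyMaskedModel()
logits = model(inputs, modality_ids)
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])  # loss on masked positions only
print(f"masked-prediction loss: {loss.item():.3f}")
```

At inference, the same trained model can be prompted with the tokens of whichever modalities are available (masking everything else), which is what makes the model "any-to-any" rather than tied to fixed input-output pairs.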