RT-2: New model translates vision and language into action

Introducing Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities. This work builds upon Robotic Transformer 1 (RT-1), a model trained on multi-task demonstrations that can learn combinations of tasks and objects seen in the robotic data. RT-2 shows improved generalisation capabilities and semantic and visual understanding beyond the robotic data it was exposed to. This includes interpreting commands not present in the robot training data and responding to them with rudimentary reasoning, such as reasoning about object categories or high-level descriptions.
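
The idea that makes a VLA model possible is representing robot actions in the same token space as text, so a single transformer can emit actions the way it emits words: each dimension of an action (end-effector deltas, gripper state, termination flag) is discretized into one of 256 integer bins and written out as a string of tokens. Below is a minimal sketch of that action tokenisation. The 256-bin discretisation follows the RT-2 paper's description; the value ranges, action layout, and helper names are illustrative assumptions, not the released implementation.

import numpy as np

# Sketch only: bin count follows the RT-2 paper; ranges and names are assumed.
NUM_BINS = 256

def action_to_tokens(action, low=-1.0, high=1.0):
    """Discretize each continuous action dimension into one of 256 integer bins
    and render the result as a whitespace-separated token string."""
    clipped = np.clip(action, low, high)
    bins = np.round((clipped - low) / (high - low) * (NUM_BINS - 1)).astype(int)
    return " ".join(str(b) for b in bins)

def tokens_to_action(token_str, low=-1.0, high=1.0):
    """Invert the discretization: map integer tokens back to continuous values."""
    bins = np.array([int(t) for t in token_str.split()])
    return low + bins / (NUM_BINS - 1) * (high - low)

# Example: a hypothetical 7-D action (xyz delta, rotation delta, gripper).
action = np.array([0.03, -0.12, 0.05, 0.0, 0.1, -0.2, 1.0])
tokens = action_to_tokens(action)        # e.g. "131 112 134 127 140 102 255"
recovered = tokens_to_action(tokens)     # approximate round-trip of `action`

Because actions are just token strings under this scheme, decoding a robot command from the model is the same operation as decoding text, which is what lets web-scale language pretraining transfer to control.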
