RT-2: New model translates vision and language into action

Introducing Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities. This work builds on Robotic Transformer 1 (RT-1), a model trained on multi-task demonstrations that can learn combinations of tasks and objects seen in the robotic data. RT-2 shows improved generalisation, as well as semantic and visual understanding beyond the robotic data it was exposed to. This includes interpreting new commands and responding to them by performing rudimentary reasoning, such as reasoning about object categories or high-level descriptions.
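A central idea behind VLA models such as RT-2 is to represent robot actions as strings of discrete tokens, so the same transformer backbone that outputs language can directly output motor commands. The sketch below illustrates how such an action-token string might be decoded back into continuous control values; the bin count, per-dimension ranges, and the `decode_action` helper are illustrative assumptions for this post, not the released implementation.

```python
import numpy as np

# Illustrative decoder for the VLA idea: the model emits a robot action as a
# string of discrete tokens, one per action dimension, which are mapped back
# to continuous values. NUM_BINS and ACTION_RANGES are assumptions made for
# this sketch, not RT-2's released configuration.

NUM_BINS = 256            # assumed discretisation of each action dimension
ACTION_RANGES = [         # assumed (low, high) per dimension:
    (0.0, 1.0),                               # episode-termination flag
    (-0.1, 0.1), (-0.1, 0.1), (-0.1, 0.1),    # end-effector xyz deltas (m)
    (-0.5, 0.5), (-0.5, 0.5), (-0.5, 0.5),    # roll/pitch/yaw deltas (rad)
    (0.0, 1.0),                               # gripper extension
]

def decode_action(token_string: str) -> np.ndarray:
    """Map a space-separated action-token string to continuous values."""
    tokens = [int(t) for t in token_string.split()]
    assert len(tokens) == len(ACTION_RANGES), "one token per action dimension"
    action = []
    for tok, (low, high) in zip(tokens, ACTION_RANGES):
        frac = tok / (NUM_BINS - 1)   # bin index -> fraction of the range
        action.append(low + frac * (high - low))
    return np.array(action)

# Example: a token string of the kind a VLA model might emit.
print(decode_action("1 128 91 241 5 101 127 217"))
```

Under this scheme, fine-tuning the vision-language model on robot trajectories amounts to teaching it to emit such token strings, which is how knowledge learned from web-scale data can transfer to robotic control.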