
RT-2: New model translates vision and language into action

Introducing Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities. This work builds upon Robotic Transformer 1 (RT-1), a model trained on multi-task demonstrations that can learn combinations of tasks and objects seen in the robotic data. RT-2 shows improved generalisation capabilities and semantic and visual understanding beyond the robotic data it was exposed to. This includes interpreting new commands and responding to them with rudimentary reasoning, such as reasoning about object categories or high-level descriptions.
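The key mechanism that makes a VLA model possible is representing robot actions in the same token vocabulary the language model already uses: RT-2 discretises each continuous action dimension into a fixed number of bins, so motor commands can be emitted as ordinary output tokens. The sketch below is a minimal, hedged illustration of that discretisation idea, not the released RT-2 implementation; the bin count of 256 matches the paper, but the action dimensions, value ranges, and helper names are assumptions chosen for the example.

```python
import numpy as np

# Illustrative sketch (not the released RT-2 code): a VLA model can
# represent each robot action dimension as one of NUM_BINS discrete
# token bins, letting one transformer vocabulary cover both words
# and motor commands.

NUM_BINS = 256  # per the RT-2 paper, each action dimension uses 256 bins


def action_to_tokens(action, low, high):
    """Map a continuous action vector to integer token bins."""
    action = np.clip(action, low, high)
    # Normalise each dimension to [0, 1], then quantise into NUM_BINS bins.
    normed = (action - low) / (high - low)
    return np.minimum((normed * NUM_BINS).astype(int), NUM_BINS - 1)


def tokens_to_action(tokens, low, high):
    """Decode token bins back to the centre of each bin."""
    normed = (tokens + 0.5) / NUM_BINS
    return low + normed * (high - low)


# Hypothetical action space: six end-effector pose deltas plus a gripper
# command; the ranges here are placeholders, not RT-2's actual limits.
low = np.array([-0.05] * 6 + [0.0])
high = np.array([0.05] * 6 + [1.0])
action = np.array([0.01, -0.02, 0.0, 0.03, 0.0, -0.01, 1.0])

tokens = action_to_tokens(action, low, high)
recovered = tokens_to_action(tokens, low, high)
print(tokens)     # e.g. [153  76 128 204 128 102 255]
print(recovered)  # approximately the original action vector
```

Because actions round-trip through the same discrete vocabulary as text, the model needs no separate action head: fine-tuning on robot demonstrations simply teaches it to emit these bin tokens after a vision-language prompt, which is what lets web-scale knowledge transfer into control.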
