
Needle-Moving AI Research Trains Surgical Robots in Simulation

A collaboration between NVIDIA and academic researchers is prepping robots for surgery.

ORBIT-Surgical — developed by researchers from the University of Toronto, UC Berkeley, ETH Zurich, Georgia Tech and NVIDIA — is a simulation framework to train robots that could augment the skills of surgical teams while reducing surgeons’ cognitive load.

It supports more than a dozen maneuvers inspired by the training curriculum for laparoscopic procedures, aka minimally invasive surgery, such as grasping small objects like needles, passing them from one arm to another and placing them with high precision.

The physics-based framework was built using NVIDIA Isaac Sim, a robotics simulation platform for designing, training and testing AI-based robots. The researchers trained reinforcement learning and imitation learning algorithms on NVIDIA GPUs and used NVIDIA Omniverse, a platform for developing and deploying advanced 3D applications and pipelines based on Universal Scene Description (OpenUSD), to enable photorealistic rendering.

Using the community-supported da Vinci Research Kit, provided by the Intuitive Foundation (a nonprofit supported by robotic surgery leader Intuitive Surgical), the ORBIT-Surgical research team demonstrated in the video below how training a digital twin in simulation transfers to a physical robot in a lab environment.

ORBIT-Surgical will be presented Thursday at ICRA, the IEEE International Conference on Robotics and Automation, taking place this week in Yokohama, Japan. The open-source code package is now available on GitHub.

A Stitch in AI Saves Nine

ORBIT-Surgical is based on Isaac Orbit, a modular framework for robot learning built on Isaac Sim. Orbit includes support for various libraries for reinforcement learning and imitation learning, where AI agents are trained to mimic ground-truth expert examples.
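To illustrate the idea behind imitation learning, here is a minimal behavior-cloning sketch: a policy is fit by supervised regression to recorded expert state-action pairs. This is purely illustrative and is not the ORBIT-Surgical code; the data, dimensions and linear policy are assumptions for the example.

```python
import numpy as np

# Behavior cloning in miniature: learn a policy that mimics expert
# demonstrations. All names and dynamics here are illustrative.

rng = np.random.default_rng(0)

# Synthetic "expert" demonstrations: a hidden linear map from a 4-D state
# to a 2-D action stands in for the expert's behavior.
true_W = rng.normal(size=(4, 2))
states = rng.normal(size=(100, 4))   # 100 recorded expert states
actions = states @ true_W            # corresponding expert actions

# Behavior cloning reduces to supervised regression: fit a policy that
# maps the same states to the expert's actions.
W_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy reproduces the expert on unseen states.
test_state = rng.normal(size=(1, 4))
assert np.allclose(test_state @ W_hat, test_state @ true_W, atol=1e-6)
```

In practice, frameworks built on Orbit replace the linear regression with a neural-network policy trained by gradient descent, but the supervised mimic-the-expert structure is the same.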

The surgical framework enables developers to train robots like the da Vinci Research Kit robot, or dVRK, to manipulate both rigid and soft objects using reinforcement learning and imitation learning frameworks running on NVIDIA RTX GPUs.

ORBIT-Surgical introduces more than a dozen benchmark tasks for surgical training, including one-handed tasks such as picking up a piece of gauze, inserting a shunt into a blood vessel or lifting a suture needle to a specific position. It also includes two-handed tasks, like handing a needle from one arm to another, passing a threaded needle through a ring pole and reaching two arms to specific positions while avoiding obstacles.


One of ORBIT-Surgical’s benchmark tests is inserting a shunt — shown on left with a real-world robot and on right in simulation.

By developing a surgical simulator that takes advantage of GPU acceleration and parallelization, the team boosted robot learning speed by an order of magnitude compared with existing surgical frameworks. They found that the robot digital twin could be trained to complete tasks like inserting a shunt and lifting a suture needle in under two hours on a single NVIDIA RTX GPU.
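The speedup comes from stepping thousands of simulated environments at once as a single batched array operation rather than one at a time. The following toy sketch (with made-up dynamics, not Isaac Sim's) shows the pattern:

```python
import numpy as np

# Vectorized environment stepping: instead of advancing one simulation
# per call, advance thousands in lockstep with one array operation.
# The dynamics below are a toy stand-in, not Isaac Sim's physics.

NUM_ENVS = 4096     # thousands of parallel environments
STATE_DIM = 8

def step_batched(states, actions):
    """Advance every environment one step in a single array operation."""
    # Toy dynamics: next_state = state + 0.01 * action
    return states + 0.01 * actions

states = np.zeros((NUM_ENVS, STATE_DIM))
actions = np.ones((NUM_ENVS, STATE_DIM))

for _ in range(100):        # 100 steps yields 409,600 environment transitions
    states = step_batched(states, actions)

assert states.shape == (NUM_ENVS, STATE_DIM)
```

On a GPU, the batched step runs across all environments in parallel, so collecting the large amounts of experience reinforcement learning needs becomes far cheaper per transition.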

With the visual realism enabled by rendering in Omniverse, ORBIT-Surgical also allows researchers to generate high-fidelity synthetic data, which could help train AI models for perception tasks such as segmenting surgical tools in real-world videos captured in the operating room.

A proof of concept by the team showed that combining simulation and real data significantly improved the accuracy of an AI model to segment surgical needles from images — helping reduce the need for large, expensive real-world datasets for training such models.
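The mixing step itself can be as simple as combining the two sources into one shuffled training set. The sketch below is a hypothetical helper, not the team's code; the function name, ratio and filename scheme are assumptions.

```python
import random

# Illustrative sketch: build a training set that mixes real and synthetic
# image/mask pairs. Names and the 50/50 ratio are assumptions for the example.

def mix_datasets(real, synthetic, synthetic_ratio=0.5, seed=0):
    """Return a shuffled list drawing the given fraction from synthetic data."""
    n_synth = int(len(real) * synthetic_ratio / (1.0 - synthetic_ratio))
    rng = random.Random(seed)
    mixed = real + rng.sample(synthetic, min(n_synth, len(synthetic)))
    rng.shuffle(mixed)
    return mixed

real = [(f"real_img_{i}.png", f"real_mask_{i}.png") for i in range(10)]
synth = [(f"synth_img_{i}.png", f"synth_mask_{i}.png") for i in range(100)]

train = mix_datasets(real, synth, synthetic_ratio=0.5)
assert len(train) == 20   # 10 real + 10 synthetic at a 50/50 mix
```

Because the simulator renders photorealistic frames with perfect ground-truth masks for free, the synthetic pool can be made arbitrarily large while the expensive real-world pool stays small.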

Read the paper behind ORBIT-Surgical, and learn more about NVIDIA-authored papers at ICRA.
