A method to enable robotic paper folding based on deep learning and physics simulations
To tackle real-world tasks, robots must be able to grasp and manipulate a wide variety of objects and materials, including paper. While roboticists have made steady progress in enabling humanoid robots and robotic grippers to handle many materials, paper folding remains a rarely explored problem within the robotics community.