Enabling artificial intelligence systems to robustly perceive humans remains one of the most intricate challenges in computer vision. Among the hardest problems is reconstructing 3D models of human hands, a task with wide-ranging applications in robotics, animation, human-computer interaction, and augmented and virtual reality. The difficulty lies in the nature of hands themselves: they are often obscured while holding objects, or contorted into challenging orientations during tasks like grasping.
Imagine you're in an airplane with two pilots, one human and one computer. Both have their "hands" on the controls, but each is watching for different things. If they're both paying attention to the same thing, the human gets to steer. But if the human gets distracted or misses…
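The arbitration rule in that analogy can be sketched as a tiny decision function. This is a hypothetical illustration, not the researchers' actual system: the function name, inputs, and the assumption that control falls back to the autopilot when attention diverges (the outcome the truncated passage only implies) are all ours.

```python
def choose_controller(human_focus: str, autopilot_focus: str,
                      human_attentive: bool = True) -> str:
    """Decide which agent's control input to apply this cycle.

    Hypothetical sketch: the human steers while attentive and focused
    on the same thing as the autopilot; otherwise (as the passage
    implies) the computer pilot steps in.
    """
    if human_attentive and human_focus == autopilot_focus:
        return "human"
    return "autopilot"


# Example: both watching the runway -> the human steers.
print(choose_controller("runway", "runway"))        # human
# The human is looking at the radio instead -> autopilot takes over.
print(choose_controller("radio", "runway"))         # autopilot
```

In a real shared-autonomy system this check would run continuously against gaze or instrument data, but the core idea is the same: agreement in attention grants the human authority.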
How can mobile robots perceive and understand their environment correctly, even when parts of it are occluded by other objects? This is a key question that must be solved before self-driving vehicles can navigate safely in large, crowded cities. While humans can imagine the complete physical structure of an object even…
An entirely new predictive typing model can simulate different kinds of users, helping reveal ways to optimize how we use our phones. Developed by researchers at Aalto University, the model captures the differences between typing with one hand and two, and between younger and older users.