Even the smartest AI models don’t match human visual processing
Deep convolutional neural networks (DCNNs) don’t see objects the way humans do: they lack configural shape perception, and that gap could be dangerous in real-world AI applications. The study used novel visual stimuli called ‘Frankensteins’ to explore how the human brain and DCNNs process holistic, configural object properties.
A novel, human-inspired approach to training artificial intelligence (AI) systems to identify objects and navigate their surroundings could set the stage for the development of more advanced AI systems to explore extreme environments or distant worlds, according to research from an interdisciplinary team at Penn State.
With the rapid advancement of artificial intelligence, unmanned systems such as autonomous vehicles and embodied intelligence are increasingly being deployed in real-world scenarios, driving a new wave of technological revolution and industrial transformation. Visual perception, a core means of information acquisition, plays a crucial role in these…
Using generative artificial intelligence, a team of researchers at The University of Texas at Austin has converted sounds from audio recordings into street-view images. The visual accuracy of the generated images demonstrates that machines can approximate the human ability to connect what is heard with what is seen in an environment.