This AI knew the answers but didn’t understand the questions

For decades, psychologists have debated whether the human mind can be explained by one unified theory or must be broken into separate parts like memory and attention. A recent AI model called Centaur seemed to offer a breakthrough, claiming it could mimic human thinking across 160 different cognitive tasks. But new research is challenging that bold claim, suggesting the model isn’t truly “thinking” at all—it’s just memorizing patterns.
Published by AI Generated Robotic Content