AI overestimates how smart people are, according to economists
Scientists at HSE University have found that current AI models, including ChatGPT and Claude, tend to overestimate the rationality of their human opponents, whether first-year undergraduates or experienced scientists, in strategic-thinking games such as the Keynesian beauty contest. Although these models attempt to predict human behavior, they often end up playing "too smart" and losing, because they assume a higher level of logic than people actually apply.
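The "playing too smart" failure mode can be illustrated with a minimal level-k sketch of the p-beauty contest (the numbers, player mix, and reasoning depths below are hypothetical, not from the study): each player guesses a number in [0, 100], and whoever is closest to 2/3 of the average wins. A level-0 player anchors on 50; each higher level best-responds to the level below it, so deeper reasoning means lower guesses. If an AI reasons several levels deeper than the humans it plays against, its guess undershoots the actual target.

```python
import statistics

def level_k_guess(k, p=2/3, anchor=50.0):
    """Level-0 guesses the anchor (midpoint of [0, 100]); each higher
    level best-responds to the level below: guess = p^k * anchor."""
    return (p ** k) * anchor

# Hypothetical population: mostly level-1 and level-2 humans,
# plus one AI player reasoning four levels deep.
human_guesses = [level_k_guess(1)] * 6 + [level_k_guess(2)] * 3
ai_guess = level_k_guess(4)  # assumes opponents are far more rational

all_guesses = human_guesses + [ai_guess]
target = (2/3) * statistics.mean(all_guesses)

# Whoever is closest to the target wins; the AI's guess is too low.
distances = {round(g, 1): abs(g - target) for g in set(all_guesses)}
winner = min(distances, key=distances.get)
```

In this toy population the target works out to roughly 18.4, so the level-2 human guess of about 22.2 wins, while the AI's level-4 guess of about 9.9 loses by assuming a depth of reasoning the other players do not exhibit.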