While scientists have long dreamed of creating machines that think like humans, a new AI model called Centaur is making unprecedented strides toward this goal. The system, trained on the massive Psych-101 dataset containing over 10 million decisions from 60,000 participants across 160 psychology experiments, reportedly predicts human choices with about 64% accuracy. That's better than previous models. Much better.
What's freaking out some researchers isn't just the accuracy—it's how Centaur's internal workings have spontaneously aligned with human brain activity. The AI wasn't explicitly trained on neural data, yet somehow developed patterns that correlate with actual human brain scans. Spooky, right? Experts caution, though, that AI systems still lack true consciousness and emotional awareness despite impressive pattern recognition abilities.
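The article doesn't say how that brain alignment is measured, but a standard approach in this literature is to fit a linear "encoding model" (ridge regression) from a network's internal activations to fMRI voxel responses, then score how well it predicts held-out brain data. Here is a minimal sketch on synthetic data; the array shapes, ridge penalty, and train/test split are illustrative assumptions, not Centaur's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: in a real analysis, `activations` would hold the
# model's hidden states per trial, and `brain` the fMRI voxel responses
# recorded on the same trials.
n_trials, n_features, n_voxels = 200, 64, 10
activations = rng.standard_normal((n_trials, n_features))
true_map = rng.standard_normal((n_features, n_voxels))
brain = activations @ true_map + 0.1 * rng.standard_normal((n_trials, n_voxels))

# Fit ridge regression in closed form on training trials.
train, test = slice(0, 150), slice(150, 200)
X, Y = activations[train], brain[train]
lam = 1.0  # ridge penalty (assumed)
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# "Alignment" = per-voxel Pearson correlation on held-out trials.
pred = activations[test] @ W
actual = brain[test]
r = [np.corrcoef(pred[:, v], actual[:, v])[0, 1] for v in range(n_voxels)]
mean_alignment = float(np.mean(r))
```

A high mean correlation means the network's representations linearly predict brain responses; the surprise with Centaur is that such alignment emerged without any neural data in training.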
The model effectively reverse-engineered aspects of human cognition by studying our choices. It's like it figured out how we think without being told. The implications are huge for understanding decision-making and accelerating psychological research. But let's not get carried away.
Plenty of skeptics remain unconvinced. The dataset has the same old problems—overrepresentation of Western, educated populations. Big surprise. And the model primarily focuses on learning and decision-making, with limited coverage of social psychology or cross-cultural differences. Not exactly the full human experience.
Centaur can also predict human reaction times across various experimental scenarios, not just choices. Energy efficiency is a separate thread in this research: unlike traditional AI systems that guzzle electricity, newer "Super-Turing AI" approaches aim to mimic the brain's remarkable efficiency by integrating processes instead of separating them, potentially revolutionizing the industry by slashing energy needs.
The scientific community remains divided. Some see Centaur as the dawn of truly human-like AI, while others dismiss it as overhyped pattern recognition. At least the model and dataset are publicly available, so researchers can poke holes in the claims themselves.
Will Centaur lead to a unified theory of human cognition, as its developers hope? Or will it join the long list of AI technologies that promised human-like thinking but delivered glorified statistics? Time will tell. It always does. Even more impressive: the model was created using a parameter-efficient technique that modified only 0.15% of the underlying Llama 3.1 language model.
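That 0.15% figure is typical of low-rank adaptation (LoRA-style fine-tuning), where each large weight matrix W stays frozen and only a small update A @ B is trained. A back-of-the-envelope sketch of why the trainable fraction is so tiny; all the dimensions below are assumed for illustration, not Centaur's actual configuration:

```python
def lora_trainable_fraction(d_in, d_out, rank, n_adapters, total_params):
    """Fraction of parameters that are trainable when each adapted
    weight matrix W (frozen) gets a low-rank update A @ B,
    with A of shape (d_in, rank) and B of shape (rank, d_out)."""
    per_adapter = rank * (d_in + d_out)
    return n_adapters * per_adapter / total_params

# Hypothetical numbers: rank-8 adapters on two projection matrices in
# each of 80 transformer layers of a 70-billion-parameter model.
frac = lora_trainable_fraction(8192, 8192, 8, 160, 70_000_000_000)
```

With numbers in this ballpark the trainable share lands well under 1%, which is how a handful of GPUs can specialize a frontier-scale model on a dataset like Psych-101.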

