While tech giants continue pouring billions into making language models bigger, a brain-inspired upstart has quietly left ChatGPT in the dust on key reasoning benchmarks. The Hierarchical Reasoning Model (HRM) is changing the game with an architecture that mimics how humans actually think. And it does more with less. Way less.
The numbers don't lie. HRM scored a solid 40.3% on the ARC-AGI-1 benchmark, trouncing OpenAI's o3-mini-high at 34.5% and making Claude 3.7 look positively remedial at 21.2%. On the tougher ARC-AGI-2 test? HRM hit 5% while o3-mini-high and Claude limped in at 3% and 0.9%, respectively. Not too shabby for the new kid.
Traditional LLMs like ChatGPT rely on chain-of-thought reasoning that breaks a problem down step by step. Nice idea, but prone to cascading errors: one mistake early on, and the whole reasoning chain falls apart. These systems also lean on supervised learning, which means they need extensive labeled training data to reach their results.
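The failure mode is easy to demonstrate with a toy pipeline (purely illustrative; the function and the three-step "chain" below are invented for this example, not model code):

```python
# Toy illustration (not actual ChatGPT internals): in step-by-step
# reasoning, each step consumes the previous step's output, so a single
# early slip corrupts everything downstream.

def chain_of_thought(x, steps, error_at=None):
    """Run the steps in order; optionally inject one faulty step."""
    for i, step in enumerate(steps):
        x = step(x)
        if i == error_at:  # simulate one small mistake mid-chain
            x += 1
    return x

# Hypothetical three-step "reasoning" chain.
steps = [lambda v: v * 2, lambda v: v + 10, lambda v: v * 3]

print(chain_of_thought(5, steps))              # correct chain -> 60
print(chain_of_thought(5, steps, error_at=0))  # early error -> 63, not 60
```

Because every step feeds the next, the off-by-one in step one never gets corrected; the later multiplication amplifies it.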
HRM takes a different approach with its two-module system: one module for abstract planning, one for detailed computations. Just like your brain. The model was developed by researchers at Sapient in Singapore, who pitch it as a new paradigm for efficient AI design.
Think hierarchically, like nature intended. HRM's brain-mimicking dual modules solve what gigantic models can't.
Here's the kicker: HRM has only 27 million parameters. ChatGPT and its GPT cousins? Hundreds of billions to trillions. Talk about efficiency! HRM trained on just 1,000 samples, versus the millions of examples those bloated LLMs need. Less computing power, less energy, less everything. The human brain gets by on roughly 20 watts; traditional AI systems need racks of power-hungry hardware.
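For scale, a back-of-envelope comparison (the trillion-parameter and million-example figures are assumed orders of magnitude, since frontier-model details are not public; only HRM's 27M parameters and 1,000 samples come from the reporting above):

```python
# Rough scale comparison; the big-LLM figures are illustrative estimates.
hrm_params = 27_000_000            # HRM: 27 million parameters
llm_params = 1_000_000_000_000     # assumed trillion-parameter LLM
hrm_samples = 1_000                # HRM's training set
llm_samples = 1_000_000            # assumed LLM training-example count

print(llm_params // hrm_params)    # parameter gap: ~37,000x
print(llm_samples // hrm_samples)  # training-data gap: 1,000x
```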
The secret sauce is in the brain-inspired design. HRM processes information across multiple timescales and executes reasoning in a single forward pass. No explicit supervision needed for those intermediate steps. It's like the difference between a natural athlete and someone following a step-by-step instruction manual.
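The dual-module, multi-timescale idea can be sketched as two nested recurrent loops. This is a minimal sketch under our own assumptions (dimensions, weights, and update rules are invented for illustration, not Sapient's published implementation):

```python
import numpy as np

# Illustrative sketch only: a slow "planner" state updated once per
# outer cycle, and a fast "worker" state iterated several times per
# cycle -- two timescales inside a single forward pass.
rng = np.random.default_rng(0)
dim = 8
W_slow = 0.1 * rng.standard_normal((dim, dim))
W_fast = 0.1 * rng.standard_normal((dim, dim))

def forward(x, outer_cycles=3, inner_steps=4):
    z_slow = np.zeros(dim)  # high-level state: the abstract plan
    z_fast = np.zeros(dim)  # low-level state: detailed computation
    for _ in range(outer_cycles):
        # Fast module runs several steps, conditioned on the current plan.
        for _ in range(inner_steps):
            z_fast = np.tanh(W_fast @ z_fast + z_slow + x)
        # Slow module updates once, absorbing the fast module's result.
        z_slow = np.tanh(W_slow @ z_slow + z_fast)
    return z_slow

out = forward(rng.standard_normal(dim))
print(out.shape)  # one forward pass, two timescales
```

The point of the structure: the fast loop does the detailed grinding, while the slow loop only hears its summary, which is one way to get intermediate computation without supervising every intermediate step.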
This isn't just another AI breakthrough—it's a fundamental shift. By integrating learning and memory like real brains do, these models could ultimately deliver on the promise of truly intelligent systems.
The big tech companies might want to take notes. Sometimes bigger isn't better. Sometimes, you just need to be more brain-like.

