Gabriel Hardy-Françon
What is intelligence, really?
We live in an era where “artificial intelligence” is everywhere—AI helps us write emails, translate languages, recommend songs, and even drive cars. It’s easy to assume that, because computers can generate text or recognize faces, we must have cracked the code of how intelligence works. But ask anyone who’s worked on AI long enough and you’ll hear the same thing: even our most advanced models are still leagues away from human-like understanding.
So, why is real intelligence so hard to define—and even harder to replicate?
To answer that, we need to turn not to the world of computer science, but to the brain itself. And there’s no better guide for this journey than Jeff Hawkins, the Silicon Valley engineer-turned-neuroscientist whose “A Thousand Brains” theory is quietly shaking up how we think about both minds and machines.
If you had a Palm Pilot in the early 2000s, you’ve already used something Jeff Hawkins helped create. But while many of his peers doubled down on the tech industry, Hawkins took a different route—he became obsessed with a much bigger question:
What is intelligence, and how does the brain actually work?
After years of reading, building, and thinking, Hawkins co-founded Numenta, a research company devoted to understanding the neocortex—the folded, walnut-like surface of the brain that’s responsible for perception, language, and, most importantly, intelligence. His work is chronicled in his book, A Thousand Brains: A New Theory of Intelligence, which takes us on a journey into the weird, beautiful architecture of the mind.
We often picture the brain as a “central processor,” a single control room making all the big decisions. Hawkins blows up this idea completely. According to him, the neocortex is built from thousands of repeating units, called “cortical columns.” Each of these columns is like a tiny brain, capable of learning models of the world on its own. Columns learn by attaching features to “reference frames,” maps anchored to the object itself rather than to you, and they constantly vote with one another to reach a consensus about what you’re perceiving. Hawkins’ favorite example is a coffee cup: as your fingers and eyes move over it, many columns each build their own model of the cup, and their agreement is what you experience as recognizing it.
Let’s break it down:
| Aspect | Human Brain (A Thousand Brains) | Traditional AI (OpenAI, DeepMind, etc.) |
|---|---|---|
| Architecture | Distributed (thousands of columns) | Centralized (single, massive model) |
| Learning | Parallel, many models at once | Sequential, one model at a time |
| Context | Built-in, multi-modal, always updating | Limited, needs explicit input |
| Robustness | Very high (can lose parts and recover) | Prone to error from missing data |
| Generalization | Easy (one model per object or idea) | Harder (relies on huge data sets) |
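To make the contrast in the table concrete, here is a minimal, purely illustrative Python sketch of the “many small models that vote” idea. It is not Numenta’s code and nothing in it comes from the book; the `CorticalColumn` and `ColumnEnsemble` names, and the feature-set representation, are invented for this example.

```python
import random
from collections import Counter


class CorticalColumn:
    """A toy 'column': it keeps its own model of every object it has learned."""

    def __init__(self):
        self.models = {}  # object name -> set of features seen for that object

    def learn(self, obj, features):
        self.models.setdefault(obj, set()).update(features)

    def guess(self, observed):
        # Best guess: the object whose model overlaps most with what this column saw.
        if not self.models or not observed:
            return None
        return max(self.models, key=lambda obj: len(self.models[obj] & observed))


class ColumnEnsemble:
    """A handful of columns standing in for the neocortex's thousands.

    For simplicity, every column learns the same full description of each object,
    but at recognition time each column only sees its own partial 'sensor patch'
    of the input; the ensemble's answer is a majority vote."""

    def __init__(self, n_columns=200):
        self.columns = [CorticalColumn() for _ in range(n_columns)]

    def learn(self, obj, features):
        for col in self.columns:
            col.learn(obj, features)

    def consensus(self, features):
        features = list(features)
        votes = Counter()
        for col in self.columns:
            # Each column gets a different partial 'sensor patch' of the input.
            patch = set(random.sample(features, k=max(1, len(features) // 2)))
            guess = col.guess(patch)
            if guess is not None:
                votes[guess] += 1
        return votes.most_common(1)[0][0] if votes else None
```

The point of the toy is structural: knowledge lives in many small, redundant models rather than one big one, and the answer emerges from their agreement.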
The implications for neuroscience are huge, but Hawkins’ ideas go even further. They challenge the basic assumptions behind modern artificial intelligence.
Most leading AI systems—whether built by OpenAI, DeepMind, Anthropic, or others—work by training a single, gigantic model on oceans of data. These models are amazing at pattern recognition, but they lack the parallel, context-rich modeling that our brains use every day.
In other words: today’s AI is powerful, but it’s not really intelligent in the human sense. It’s missing the messy, consensus-driven, context-rich intelligence that’s the hallmark of the neocortex.
Let’s come back to the coffee cup. If you hand it to a child—handle up, handle down, with or without coffee—the child will instantly know what it is, what it’s for, and how to use it. Their brain has built thousands of overlapping models that, together, make “coffee cup” an obvious reality.
Try the same with an AI model, and you’re more likely to get a best guess based on pixel patterns, text correlations, or statistical likelihoods. No model consensus, no reference frames, no built-in adaptability. The machine can’t know a cup the way you do.
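The “reference frames” mentioned above can also be sketched in a few lines, with the caveat that this is a drastic simplification: in Hawkins’ theory, locations come from grid-cell-like signals inside each column, not from hand-written labels. The toy below just stores features at locations relative to the object itself, so the model doesn’t care how the cup happens to be oriented.

```python
# A toy object model: features stored at object-relative locations.
# ("side", "handle") means "there is a handle on the side of the cup",
# no matter which way the cup is facing or whether it is upside down.
cup_model = {
    ("side", "handle"),
    ("top", "open rim"),
    ("body", "ceramic cylinder"),
}


def match_score(model, observations):
    """Fraction of the model confirmed by what the fingers/eyes have found so far.

    `observations` is a set of (object-relative location, feature) pairs."""
    return len(model & observations) / len(model)


# Handle up or handle down, the object-relative observations are identical,
# so the score does not change with the cup's pose.
seen_so_far = {("side", "handle"), ("body", "ceramic cylinder")}
print(match_score(cup_model, seen_so_far))  # -> 0.666...
```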
| Test | Human Brain | Traditional AI |
|---|---|---|
| Recognizing objects in new positions | Instant, flexible | Often struggles |
| Understanding ambiguous input | Uses context, fills in gaps | May return “I don’t know” |
| Learning from few examples | Easy (few shots needed) | Needs thousands or millions of samples |
| Multi-sensory integration | Natural (touch, sight, sound, etc.) | Needs special design |
| Adapting to new situations | Built-in flexibility | Requires retraining |
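Continuing the toy ensemble sketched earlier (and with the same caveat that it is an illustration, not an implementation of the theory), two rows of this table are easy to demonstrate: recognition from partial, re-oriented evidence, and robustness to losing a large share of the columns.

```python
ensemble = ColumnEnsemble(n_columns=200)

# A single exposure per object is enough for the toy to work.
ensemble.learn("coffee cup", ["handle", "rim", "cylinder", "warm", "ceramic"])
ensemble.learn("bowl", ["rim", "round", "wide", "ceramic"])

# Partial evidence: the handle is hidden and the cup is upside down.
print(ensemble.consensus(["rim", "cylinder", "ceramic"]))  # -> 'coffee cup'

# Knock out 80% of the columns; the survivors still agree (the "robustness" row).
ensemble.columns = ensemble.columns[: len(ensemble.columns) // 5]
print(ensemble.consensus(["rim", "cylinder", "ceramic"]))  # -> 'coffee cup'
```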
If you care about the future of artificial intelligence, Hawkins’ “thousand brains” theory is more than a neuroscience curiosity. It’s a blueprint for what comes next.
Imagine an AI that doesn’t just parrot information, but actually models reality the way your brain does: many small models running in parallel, each with its own reference frame, constantly voting their way to a consensus.
This kind of system wouldn’t just answer questions—it would understand them. It could adapt, learn new skills quickly, and make sense of the world in a deeply human way.
So why don’t we have this yet? There are a few reasons: the ecosystem of hardware, tooling, and training pipelines is built around single monolithic models; scaling those models has kept delivering results; and distributed, brain-like architectures are still an open research problem.
But research is moving fast. More and more teams are realizing that context and distributed intelligence are keys to closing the gap.
At Calk AI, we’re fascinated by brain-inspired ideas that go beyond the limits of traditional artificial intelligence companies. We don’t claim to have recreated the neocortex, but we do believe that the best artificial intelligence, whether you’re building with custom GPTs, exploring open source artificial intelligence, or integrating the best AI apps, must be context-aware, flexible, and collaborative. That’s why our approach to using artificial intelligence in your business isn’t about siloed, rigid workflows or fixed “custom GPT” interfaces. Instead, we help users create AI agents that combine knowledge, share context, and continuously update their understanding as your business evolves.
Like the brain’s columns, our vision for how to use AI in business is to move away from fragmented solutions and toward systems that truly work together, each agent bringing its own “reference frame” to the table. Whether you’re an enterprise implementing a new AI strategy, an AI startup looking for scalable agent orchestration, or simply asking “how do I use AI in my business?”, our tools are designed to make it easy to create an AI system that grows and adapts with you. We’re committed to showing you how to use AI in your business and what becomes possible when artificial intelligence stops being just a tool and starts being a partner.
The story of intelligence—both human and artificial—is still being written. Hawkins’ theory doesn’t give us all the answers, but it’s a much-needed reminder: true intelligence is less about raw power, and more about how we represent, combine, and update knowledge in context.
It’s about building a thousand models of the world, and finding meaning in their overlap.
That’s how our brains work—and maybe, just maybe, it’s how our AIs will work in the future.
As we move forward, let’s keep one eye on the incredible architecture inside our own heads. The brain has been running its intelligence “operating system” for millions of years; we’ve only just begun to learn from it.