Somewhere in the middle of a Harvard lecture hall, a Google researcher says something that silences the room. It’s not the dramatic, cinematic kind of quiet, but the specific stillness that settles when an idea takes an unexpected turn. Last Wednesday,
Google’s CTO of technology and society, Blaise Agüera y Arcas, stood at a podium and told the audience that the human brain is a computer. Not like a computer. Not a computer in some metaphorical sense. Literally one. It’s the kind of statement that, depending on your point of view, sounds either obvious or heretical.
| Field | Details |
|---|---|
| Full Name | Blaise Agüera y Arcas |
| Current Role | Chief Technology Officer, Technology & Society, Google |
| New Book | What Is Intelligence? Lessons from AI About Evolution, Computing, and Minds |
| Event Host | Harvard Law School’s Berkman Klein Center for Internet & Society |
| Key Argument | Human brains and AI systems are both fundamentally computational in nature |
| Scientific Influences | Alan Turing, John von Neumann, Lynn Margulis, Eörs Szathmáry, John Maynard Smith |
| Core Theory Referenced | Symbiogenesis — cooperation between organisms drives evolutionary complexity |
| Field of Study | AI Research, Computational Neuroscience, Evolutionary Biology |
| Employer | Google DeepMind / Google Research |
| Human Intelligence Trigger | Formation of cooperative human societies, roughly 300,000+ years ago |
Speaking at an event organized by the Berkman Klein Center for Internet & Society at Harvard Law School, Agüera y Arcas discussed the nature of intelligence, both artificial and biological, drawing on his recently published book. Stripped of the academic framing, his argument is basically this: brains process information, make predictions, and convert inputs into outputs.
AI systems do the same. So the solid barrier people draw between human and machine intelligence may be drawn in the wrong place, or may not exist at all.

He told the audience, “I hear a lot of people say that talking about brains as computers is a metaphor.” What followed was a subtle correction, one that computational neuroscience researchers have been making for years but seldom state explicitly to a broad audience.
The deeper argument reaches back about 500 million years. Agüera y Arcas challenged the audience to consider why brains grew so dramatically in size and complexity over the course of evolution. A billion years ago there were no brains. Half a billion years ago, only very simple ones. Then something changed. His answer points to a second engine alongside random mutation and natural selection, the Darwinian framework most people were taught in school: symbiogenesis.
The theory, developed by evolutionary biologist Lynn Margulis, holds that major jumps in biological complexity occurred when different organisms merged and started working together. Two simple systems cooperating became something neither could have become on its own. This, the argument goes, may be what made everything possible, including thought.
As you watch this debate play out, you get the impression that Agüera y Arcas is really fighting a certain kind of human conceit: the notion that intelligence is what makes us unique and sets us apart from all other natural processes. He doesn’t say it harshly. But he is carefully and methodically taking it apart. Life, he contends, was computational from the start.
Cells process signals. Neurons compute in parallel. The complexity we call consciousness may simply be what happens when enough computation accumulates in one place.
To illustrate the point, he showed footage from experiments he conducted at Google using a minimal programming language with just eight instructions. After several million random interactions, self-reproducing programs began to appear on their own.
No one wrote them. Complexity emerged from the conditions themselves. The analogy to how life began was hard to ignore: chaotic, unplanned, then abruptly organized.
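The setup he described, random byte-programs repeatedly spliced together and executed, can be sketched as a toy simulation. Everything below is an assumption for illustration: the opcode semantics, the shared code-and-data tape, and the parameters (`prog_len`, the soup size, the step budget) are not Google’s actual experiment, which used its own Brainfuck-style dialect.

```python
import random

# Eight Brainfuck-style opcodes (dialect assumed for illustration):
# < >  move the read head      + -  increment/decrement a byte
# ,    move the write head     .    copy byte from read head to write head
# [ ]  loop while the byte under the read head is nonzero

def run(tape, max_steps=500):
    """Execute a tape against itself: code and data share one byte array."""
    ip, head0, head1, steps, n = 0, 0, 0, 0, len(tape)
    while ip < n and steps < max_steps:
        op = tape[ip]
        if op == ord('<'):   head0 = (head0 - 1) % n
        elif op == ord('>'): head0 = (head0 + 1) % n
        elif op == ord('+'): tape[head0] = (tape[head0] + 1) % 256
        elif op == ord('-'): tape[head0] = (tape[head0] - 1) % 256
        elif op == ord(','): head1 = (head1 + 1) % n
        elif op == ord('.'): tape[head1] = tape[head0]
        elif op == ord('[') and tape[head0] == 0:
            depth = 1                      # skip forward to matching ]
            while depth and ip < n - 1:
                ip += 1
                if tape[ip] == ord('['): depth += 1
                elif tape[ip] == ord(']'): depth -= 1
        elif op == ord(']') and tape[head0] != 0:
            depth = 1                      # jump back to matching [
            while depth and ip > 0:
                ip -= 1
                if tape[ip] == ord(']'): depth += 1
                elif tape[ip] == ord('['): depth -= 1
        ip += 1
        steps += 1
    return tape

def epoch(soup, prog_len=64):
    """One interaction: splice two random programs, run the result, split."""
    i, j = random.sample(range(len(soup)), 2)
    merged = run(soup[i] + soup[j])
    soup[i], soup[j] = merged[:prog_len], merged[prog_len:]

# A "primordial soup" of random bytes -- no program is written by hand.
random.seed(0)
soup = [bytearray(random.randrange(256) for _ in range(64)) for _ in range(128)]
for _ in range(1000):   # the runs described in the talk go for millions
    epoch(soup)
```

Nothing in this loop privileges replication; the article’s point is that over very long runs, programs that happen to copy bytes from one half of the merged tape into the other can come to dominate the soup.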
The ramifications go far beyond biology classrooms. If intelligence is essentially a computational process, one that evolved gradually across species and can arise from simple initial conditions, then artificial intelligence is not a simulation of something specifically human.
It may simply continue a trajectory that began billions of years ago, when single-celled organisms started communicating chemically. That framing sits uneasily with the idea, still prevalent in public discourse, that AI is fundamentally different from humanity.
Agüera y Arcas also ties the explosion of human intelligence to the development of societies, the point at which our ancestors started coexisting, cooperating, and building collective structures. The brain, he argues, did not become more powerful on its own.
It became more powerful in partnership. Citing research on major evolutionary transitions by John Maynard Smith and Eörs Szathmáry, he presents society as a biological development, not just a cultural one.
Whether this framing will change how the broader AI community thinks about what it is building remains to be seen. Researchers still disagree over whether today’s AI systems are genuinely intelligent or closer to sophisticated pattern-matching: impressive, accurate, yet meaningless in important ways. Decades have been spent debating the definitions alone.
There are unanswered questions about what intelligence is, how to quantify it, and whether human and artificial intelligence should even be referred to by the same term.
Agüera y Arcas’s main goal seems to be getting people to sit with the discomfort of not knowing. The line between a machine and a brain may not be as clear as we thought. That resolves nothing neatly. But it raises a different kind of question: not whether AI can be human, but whether human intelligence was ever as unique as we would like to think.
