A BBC correspondent sat with his eyes closed in a soundproofed booth at Sussex University’s Centre for Consciousness Science, watching geometric patterns bloom and shift in colors he described as vivid, neon, and completely his own. Strobe lights pulsed around him. The researchers running the experiment were not testing anything as dramatic as machine sentience. They were trying to understand the nature of human consciousness itself: how the brain turns electrical signals into inner experience. Yet that question is inextricably linked to what is happening concurrently in datacenters around the world, which is why they are asking it with renewed urgency. The first step in determining whether machines can think is to define thinking precisely, and that work is not yet complete.
The industry’s long-running discussion of AI consciousness has been punctuated by breathless predictions and equally breathless dismissals. According to Anthropic CEO Dario Amodei, by 2027 there may be an AI more intelligent than a Nobel laureate in biology, math, engineering, and writing: in effect a “country of geniuses in a datacenter,” each operating independently and producing new scientific knowledge continuously.
| Category | Details |
|---|---|
| Consciousness Research Hub | Sussex University’s Centre for Consciousness Science — running the “Dreamachine” project using strobe lighting to study how the human brain generates conscious experience |
| Key Prediction | Dario Amodei, CEO of Anthropic, predicts an AI “smarter than a Nobel Prize winner” in biology, math, engineering, and writing could come online by 2027 |
| OpenAI’s Position | Sam Altman wrote in June 2025 that the industry was on the cusp of building “digital superintelligence”; predicted “the 2030s are likely going to be wildly different from any time that has come before” |
| Language Study Published | Iowa State University study in Technical Communication Quarterly (2025): analyzed 20+ billion words from news articles across 20 countries on how AI is described using human-like “mental verbs” |
| Key Language Finding | The word “needs” appeared 661 times paired with AI in news writing; “knows” appeared only 32 times with ChatGPT — anthropomorphism in news is less common than assumed |
| The Core Concern | Using words like “thinks,” “knows,” “understands” for AI systems blurs the line between pattern recognition and genuine cognition — distorting public perception and reducing accountability of developers |
| Former Google CEO View | Eric Schmidt suggested AI might soon generate independent medical insights and tackle complex unsolved problems — reported December 2025 |
| New Divide Identified | A top AI researcher identified a cognitive divide forming between people who use AI to sharpen their reasoning and those who use it to replace it entirely — reported March 2026 |
| Cultural Reference Point | From HAL 9000 in 2001: A Space Odyssey (1968) to the “self-aware digital parasite” in the final Mission Impossible film — public fear of machine consciousness has a decades-long cultural history |
| Researcher Caution | “AI does not possess beliefs or feelings. It produces responses by analyzing patterns in data, not by forming ideas or making conscious decisions.” — Iowa State research team |
OpenAI’s Sam Altman has characterized the present moment as the cusp of digital superintelligence, predicting that the 2030s will be unlike any decade in human history. These are significant claims from people who know the technology far better than most. Given how many AI timelines have fallen short in the past, they should also be read with at least some caution.
The experience of people who actually use these systems daily is harder to ignore. One programmer described his own conversion arc in detail: he began by using AI to look things up, then handed it simple tasks, and finally gave it the kind of intricate, multi-layered coding work he had spent a career learning to do himself. The models processed thousands of lines of code in seconds. They found subtle bugs. They navigated the architecture of large systems coherently, holding it in something like working memory. Whether that qualifies as “thinking” in any meaningful sense is still genuinely debatable. But it doesn’t feel insignificant either. There is a reason the dismissive and the enthralled can watch the same demonstrations and come away with different conclusions.
A study published in Technical Communication Quarterly puts the language question on an empirical footing. Researchers at Iowa State University examined over 20 billion words from English-language news articles across 20 countries to determine how often AI systems were described with what linguists call mental verbs: words like “thinks,” “knows,” “understands,” and “wants.” News writers turned out to be more restrained than anticipated. The word most frequently paired with AI was “needs,” with 661 occurrences, and it was typically used to refer to requirements rather than desires.

Across the entire dataset, “knows” was paired with ChatGPT just 32 times. The researchers argued that the problem is not that journalism carelessly anthropomorphizes AI. It’s that even the sporadic application of these terms to systems that generate outputs through pattern matching rather than true cognition can subtly alter readers’ sense of what AI actually does and, most importantly, of who is in charge of it. When AI “decides,” the people who built and deployed it become less visible.
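The collocation counting at the heart of the study can be illustrated with a short sketch. Everything here is a simplified assumption for illustration: the verb and term lists are partial, and the adjacency matching (an AI term immediately followed by a mental verb, as in “AI needs”) stands in for the part-of-speech tagging and parsing a real corpus pipeline would use.

```python
# Minimal sketch of mental-verb collocation counting, assuming a corpus of
# plain-text articles. This is NOT the Iowa State researchers' actual
# pipeline; it only illustrates the kind of tally their findings rest on.
import re
from collections import Counter

MENTAL_VERBS = {"thinks", "knows", "understands", "wants", "needs"}
AI_TERMS = {"ai", "chatgpt", "chatbot"}  # illustrative, not exhaustive

def count_mental_verb_pairings(corpus: list[str]) -> Counter:
    """Count cases where an AI term is immediately followed by a mental verb."""
    counts: Counter = Counter()
    for article in corpus:
        # Naive sentence split; a real pipeline would use a proper tokenizer.
        for sentence in re.split(r"[.!?]", article.lower()):
            tokens = re.findall(r"[a-z']+", sentence)
            for subject, verb in zip(tokens, tokens[1:]):
                if subject in AI_TERMS and verb in MENTAL_VERBS:
                    counts[(subject, verb)] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "The company says its AI needs more training data.",
        "Critics claim ChatGPT knows nothing; it only predicts the next token.",
    ]
    for (term, verb), n in count_mental_verb_pairings(sample).items():
        print(f"{term!r} + {verb!r}: {n}")
```

Run over a real news corpus with proper linguistic tooling, tallies of this kind are what make a comparison like 661 occurrences of “needs” versus 32 of “knows” possible in the first place.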
The accountability gap matters more now than it did five years ago, partly because the systems are more capable and partly because they are involved in more consequential decisions. In some research circles there is a growing perception that the public is splitting into two groups: those who use AI as a tool to sharpen their own thinking, carefully testing its outputs and forming judgments about where it is reliable, and those who increasingly outsource their reasoning to it entirely, accepting its answers without much question. A leading AI researcher has called this a cognitive divide in the making: not dramatic or obvious yet, but quietly building in the daily habits of millions of people who interact with these systems.
Whether any existing AI system is conscious in any way that would satisfy a philosopher or a neuroscientist remains an open question. The Sussex researchers would likely argue that we still don’t understand human consciousness well enough to answer it, which is why they keep putting people through the Dreamachine. What is evident is that the machines are behaving in ways that force the question. There is a sense that the discussion has shifted from whether AI can perform meaningful work to something more difficult and unfamiliar: whether the word “meaningful” itself still belongs to humans.
