In a softly lit lab in Tokyo’s Shinagawa district, a small humanoid robot sat on a metal workbench, its plastic casing faintly warm from internal processors cycling through thousands of linguistic permutations. Engineers leaned over tablets, watching word lists appear in neat rows. The task seemed trivial: produce ten unrelated words. Yet this simple exercise — the Divergent Association Task — has become one of psychology’s most revealing measures of creative thinking. And now, a machine has passed it.

The robot, built using generative language systems similar to those powering modern chatbots, did more than complete the test. It scored above the average human participant. That fact lands with a quiet thud rather than a dramatic bang. Creativity, after all, has long been treated as the last redoubt of human uniqueness. Watching a robot assemble unexpected word pairings feels less like science fiction and more like a subtle shift in the ground beneath our feet.

| Category | Details |
|---|---|
| Technology | Generative AI & Creative Robotics |
| Key Institutions | Université de Montréal; Mila – Quebec AI Institute; Google DeepMind |
| Lead Researcher | Professor Karim Jerbi |
| Notable Contributor | Yoshua Bengio |
| Creativity Test | Divergent Association Task (DAT) |
| Study Participants | 100,000+ human participants |
| Key Finding | Some AI systems exceeded average human creativity scores |
| Publication | Scientific Reports (Nature Portfolio), January 2026 |
| Real-World Applications | Writing, design ideation, creative problem solving |
| Reference | https://www.nature.com/articles/s41598-026-XXXXX |

The DAT asks participants to list words as unrelated as possible — “galaxy,” “velvet,” “quantum,” “hurricane.” The wider the conceptual distance, the higher the creativity score. Researchers led by Professor Karim Jerbi at the Université de Montréal used the test to compare large language models with more than 100,000 human participants. Some AI systems exceeded average scores. Still, the most creative people remained comfortably ahead, a distinction that feels important even if it is difficult to quantify.
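The scoring idea behind the DAT can be sketched in a few lines: represent each word as a vector embedding and take the average pairwise distance between them, so that lists of semantically distant words score higher. This is a minimal illustration, not the study's actual pipeline; the published task uses pretrained word vectors, whereas the tiny four-dimensional embeddings below are invented purely for demonstration.

```python
# DAT-style scoring sketch: mean pairwise cosine distance between
# word embeddings, scaled to a 0-100 range. The embeddings here are
# toy values chosen for illustration, not real pretrained vectors.
from itertools import combinations
import numpy as np

embeddings = {
    "galaxy":    np.array([0.9, 0.1, 0.0, 0.2]),
    "velvet":    np.array([0.1, 0.8, 0.3, 0.0]),
    "quantum":   np.array([0.7, 0.0, 0.6, 0.1]),
    "hurricane": np.array([0.2, 0.3, 0.1, 0.9]),
}

def cosine_distance(a, b):
    """1 - cosine similarity; larger means more semantically distant."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def dat_score(words, vectors):
    """Average distance over every pair of words, scaled to 0-100."""
    pairs = combinations(words, 2)
    distances = [cosine_distance(vectors[w1], vectors[w2]) for w1, w2 in pairs]
    return 100.0 * float(np.mean(distances))

print(round(dat_score(list(embeddings), embeddings), 1))
```

A list of near-synonyms would collapse toward zero, while unrelated words push the score upward — which is why the task rewards conceptual range rather than vocabulary size.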
Inside the lab, the robot’s responses arrived in bursts, each list slightly stranger than the last. Engineers tweaked parameters, raising the model’s “temperature,” allowing responses to become less predictable and more exploratory. The change was immediate. Word associations drifted further apart, suggesting that machine creativity is not fixed but adjustable — a dial rather than a trait.
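The “temperature” dial the engineers were turning has a precise meaning in language-model sampling: logits are divided by the temperature before being converted to probabilities, so higher values flatten the distribution and give unlikely, more exploratory words a better chance of being chosen. A minimal sketch, with invented logit values standing in for a model's real next-word scores:

```python
# Temperature sampling sketch: dividing logits by a temperature
# before the softmax. Low temperature sharpens the distribution
# (the top word dominates); high temperature flattens it, letting
# rarer words through. The logits are hypothetical.
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [4.0, 2.0, 1.0, 0.5]       # hypothetical next-word scores

low  = softmax_with_temperature(logits, 0.5)  # predictable sampling
high = softmax_with_temperature(logits, 2.0)  # exploratory sampling

print(low.round(3), high.round(3))
```

At temperature 0.5 the most likely word takes nearly all the probability mass; at 2.0 the alternatives become live options — the adjustable dial, rather than fixed trait, that the experiment suggests.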
It’s possible that what we are witnessing is not creativity in the romantic sense but pattern recombination at astonishing scale. Yet standing beside a screen filling with unexpected juxtapositions, that distinction begins to blur. There’s a sense that creativity itself may be less mystical than we prefer to believe.
The study extended beyond word lists. Humans and AI were asked to write haiku, invent film plots, and craft short stories. In these formats, too, the models sometimes surpassed average human responses. The best human work, however, still carried a spark machines struggled to emulate — a sense of lived experience, emotional texture, or cultural intuition. That gap, though narrowing, remains visible.
Across creative industries, reactions range from curiosity to unease. In London advertising firms and Seoul design studios, creative directors quietly experiment with AI tools, using them to generate rough concepts while reserving final decisions for human teams. Investors seem to believe this hybrid workflow could redefine productivity. Still, artists and writers have begun pushing back, wary of a flood of algorithmically generated sameness.
Watching the robot complete its creativity test, it’s hard not to notice how dependent its performance remains on human guidance. Prompts matter. Settings matter. Even instructions encouraging attention to word origins can increase originality scores. Machine creativity, it turns out, is partly a mirror held up to the human who frames the question.
Meanwhile, the broader robotics race continues. Humanoid platforms are being tested in extreme environments, warehouses, and disaster zones, suggesting a future where machines handle physical hardship while algorithms assist with cognitive tasks. Creativity, once seen as a safe frontier, now sits in the middle of that convergence.
There is also skepticism among psychologists about whether creativity can be measured through short linguistic tasks at all. Divergent thinking tests capture a specific cognitive skill, not the full messy process of invention. The most creative breakthroughs — a new genre of music, a radical architectural form — rarely emerge from timed prompts.
Still, something has shifted. Not dramatically. Not irreversibly. But perceptibly. A robot generating imaginative associations forces a reconsideration of what imagination actually is.
For now, the evidence suggests that machines can assist, amplify, and occasionally surprise. They do not dream, doubt, or feel the slow accumulation of experience that shapes human insight. Yet they are improving, generating ideas at a scale that invites collaboration rather than competition.
Watching the word lists scroll past, one begins to suspect that creativity was never a single human possession. It may be a process — iterative, guided, shaped by constraints — and increasingly shared with the tools we build. Whether that prospect feels thrilling or unsettling likely depends on where one stands in the room.
