The Hidden Pattern in Human Brain Activity That AI Just Discovered

By Melissa · April 7, 2026 · 6 min read

Think about what your brain is doing right now, just reading this sentence. It’s pulling meaning from symbols, sequencing sounds you’re not even hearing aloud, predicting what comes next. And for most of human history, we had no idea how any of that actually worked.

[Figure: Hidden Pattern in Human Brain Activity]

We had theories, frameworks, competing schools of thought, but the actual mechanical signature of the brain doing its job remained stubbornly invisible. Only recently has a series of discoveries begun pulling back the curtain, and some of the most surprising clues are coming not from biology, but from artificial intelligence.

Discovery Name: Multiscale Neural Signature for Reach & Grasp Movements
Lead Researcher: Dr. Maryam Shanechi, USC Viterbi School of Engineering
Collaborating Researcher: Dr. Ariel Goldstein, Hebrew University of Jerusalem
PhD Contributor: Hamidreza Abbaspourazad, USC Electrical Engineering
External Collaborator: Prof. Bijan Pesaran, NYU Neural Science
Published In: Nature Communications
Key Technology: Machine-learning algorithm; iGluSnFR4 glutamate sensor
Primary Application: Brain-machine interfaces, paralysis treatment, movement disorders
Funding / Awards: NIH Director's New Innovator Award; ASEE Curtis W. McGraw Research Award
Reference Website: USC Viterbi School of Engineering

At the University of Southern California, electrical engineering professor Maryam Shanechi and her PhD student Hamidreza Abbaspourazad set out to answer a deceptively simple question: when you reach out and grab a cup of coffee, what exactly is your brain doing? The motion feels effortless.

It isn’t. Your brain is coordinating 27 joint angles in real time, managing signals across billions of neurons, translating intent into precise physical action faster than conscious thought. Researchers have long debated how this happens. Shanechi’s team may have found the answer, or at least a significant piece of it.

Their approach was unusual. Instead of studying just one type of brain signal, they looked at two simultaneously: the spiking of individual neurons and the broader, wave-like activity called local field potentials (LFPs), which represent the collective hum of thousands of neurons working together. Most researchers study these separately.

Shanechi’s team built a new machine-learning algorithm specifically designed to look at both at once, searching for patterns that existed across both scales at the same time. What they found was not what anyone predicted.
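
The team's actual algorithm is not public in this article, but the core idea, finding one latent pattern that shows up in both spiking activity and LFPs, can be sketched with a simple cross-covariance analysis. Everything below is synthetic and illustrative; the real model is far more sophisticated than this linear stand-in.

```python
# Illustrative sketch only: plain cross-covariance SVD stands in for the
# USC team's custom multiscale algorithm. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
T = 500                                # time bins
latent = rng.standard_normal((T, 2))   # hypothetical shared "movement" pattern

# Two observation scales: each mixes the shared latent with private noise.
spikes = latent @ rng.standard_normal((2, 30)) + 0.5 * rng.standard_normal((T, 30))
lfp    = latent @ rng.standard_normal((2, 10)) + 0.5 * rng.standard_normal((T, 10))

# Center each modality, then take the top singular vectors of the
# cross-covariance: the directions along which the two scales co-vary most.
Xs = spikes - spikes.mean(axis=0)
Xl = lfp - lfp.mean(axis=0)
U, s, Vt = np.linalg.svd(Xs.T @ Xl / T, full_matrices=False)

spike_proj = Xs @ U[:, 0]   # the pattern as seen in spiking activity
lfp_proj   = Xl @ Vt[0]     # the same pattern as seen in the LFP

r = abs(np.corrcoef(spike_proj, lfp_proj)[0, 1])
print(r > 0.8)   # one latent signature, visible at both scales
```

Because both modalities are driven by the same latent, the recovered projections correlate strongly, which is the qualitative point of a "common multiscale pattern."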

There was a common pattern — a kind of neural fingerprint — buried inside both types of activity. And it wasn’t just predictive of movement in a general sense. It was dominant. “When looking closer,” Shanechi explained, “we discovered that this common multiscale pattern actually happened to dominantly predict movement compared to all other existing patterns.” The team published the findings in Nature Communications, and the neuroscience community took notice.

What made it stranger, and more compelling, was the next part: the same pattern appeared across different test subjects. Different people, same neural signature. It’s hard not to notice the implications of that — the possibility that human movement is, at some level, written in a shared biological language.

That idea gains even more weight when you put it alongside a separate but strangely parallel discovery out of Hebrew University. Dr. Ariel Goldstein and a team that included researchers from Google and Princeton recorded brain activity from people listening to a spoken story — a 30-minute podcast — using a method called electrocorticography, which captures signals with unusually high precision.

What they found was that the brain doesn’t comprehend language all at once. It processes meaning in ordered stages, each building on the last, moving from basic word recognition toward something richer and more contextual. Layer by layer, like sediment forming stone.

Here’s where it gets genuinely strange. That same layered structure — that step-by-step accumulation of meaning — mirrors almost exactly how large AI language models like GPT-2 and Llama 2 are built. Early brain responses matched early AI processing stages. Later brain responses, particularly in Broca’s area, aligned with deeper, more contextually sophisticated AI layers. “What surprised us most,” Goldstein said, “was how closely the brain’s temporal unfolding of meaning matches the sequence of transformations inside large language models.”
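
The brain-to-model comparison described here is, at heart, an encoding-model analysis: for each brain response, ask which model layer's activations best linearly predict it. A toy version, using synthetic stand-ins for both the LLM layer activations and the neural recordings (the actual study used electrocorticography and real GPT-2 / Llama 2 embeddings):

```python
# Hedged sketch of a layer-matching analysis; all arrays are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_words, d = 300, 16
layers = [rng.standard_normal((n_words, d)) for _ in range(6)]  # fake LLM layers

# Pretend the "early" brain response tracks layer 1 and the "late"
# response (e.g. Broca's area) tracks layer 5, each plus noise.
early = layers[1] @ rng.standard_normal(d) + 0.3 * rng.standard_normal(n_words)
late  = layers[5] @ rng.standard_normal(d) + 0.3 * rng.standard_normal(n_words)

def best_layer(response):
    """Return the index of the layer whose activations best fit the response."""
    scores = []
    for X in layers:
        beta, *_ = np.linalg.lstsq(X, response, rcond=None)  # linear encoding model
        pred = X @ beta
        scores.append(np.corrcoef(pred, response)[0, 1])
    return int(np.argmax(scores))

print(best_layer(early), best_layer(late))  # early response -> shallow layer, late -> deep
```

The study's finding is the real-data version of this toy result: early brain responses align with shallow layers, later responses with deeper ones.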

These AI systems weren’t designed by studying brains. They were designed by engineers trying to solve a prediction problem. And yet they seem to have landed on a surprisingly similar architecture. That parallel either says something profound about the nature of language itself, or it’s a coincidence so large it doesn’t feel like one.

There’s a practical dimension here that deserves attention, too. Shanechi is explicit about where she hopes the movement research goes: brain-machine interfaces for paralyzed patients. If researchers can identify a consistent neural signature for reach and grasp — and translate that pattern into commands a machine can execute — then a person who has lost the ability to move their arm might one day recover meaningful function.

Her algorithm doesn’t just identify the pattern; it predicts arm and finger movements from it with measurable accuracy. The distance between laboratory result and clinical reality is still significant, and it would be a mistake to overstate how close that bridge is. But the foundation is now there in a way it wasn’t before.
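
The decode-and-evaluate loop behind that claim can be shown in miniature. This is an assumption-laden sketch, not Shanechi's algorithm: a plain least-squares readout on synthetic data, mapping neural features to hand velocity and scoring on held-out trials.

```python
# Minimal decoding sketch (assumption: a simple linear readout; the real
# system is a multiscale machine-learning model). Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
T, n_units = 400, 50
neural = rng.standard_normal((T, n_units))
W_true = rng.standard_normal((n_units, 2))           # unknown map to (x, y) hand velocity
velocity = neural @ W_true + 0.2 * rng.standard_normal((T, 2))

# Fit the decoder on the first half, evaluate on the held-out second half.
W_hat, *_ = np.linalg.lstsq(neural[:200], velocity[:200], rcond=None)
pred = neural[200:] @ W_hat

r = np.corrcoef(pred[:, 0], velocity[200:, 0])[0, 1]
print(r > 0.9)   # held-out decoding accuracy is high when the pattern is real
```

Measuring accuracy on held-out data, as above, is what separates "the pattern predicts movement" from mere curve-fitting, and it is the standard of evidence a clinical brain-machine interface would ultimately need to meet.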

Underlying all of this is a quieter, more foundational breakthrough in how scientists can observe the brain at all. A newly developed protein sensor called iGluSnFR4, built by researchers at the Allen Institute and Janelia Research Campus, can now detect glutamate — the brain’s primary chemical messenger — at the level of individual synapses in real time. Until recently, catching those signals was essentially impossible. They’re too fast, too faint.

Now scientists can watch neurons talking to each other as it happens. One researcher compared the old approach to reading a book with all the words scrambled; this new tool, he said, finally shows how the words connect.

What’s emerging from all of this work, taken together, is something that feels less like incremental scientific progress and more like the beginning of a genuine shift in how we understand the mind. The brain is no longer purely a biological mystery.

It’s becoming, slowly and imperfectly, a readable system — one that artificial intelligence is uniquely positioned to help decode. Whether that’s reassuring or unsettling probably depends on the day. But it is, without question, one of the more consequential developments in modern science.
