Think about what your brain is doing right now, just reading this sentence. It’s pulling meaning from symbols, sequencing sounds you’re not even hearing aloud, predicting what comes next. And for most of human history, we had no idea how any of that actually worked.

We had theories, frameworks, competing schools of thought, but the actual mechanical signature of the brain doing its job? That remained stubbornly invisible. Now, though, a series of discoveries has begun pulling back the curtain, and some of the most surprising clues are coming not from biology, but from artificial intelligence.

| Item | Detail |
| --- | --- |
| Discovery Name | Multiscale Neural Signature for Reach & Grasp Movements |
| Lead Researcher | Dr. Maryam Shanechi, USC Viterbi School of Engineering |
| Collaborating Researcher | Dr. Ariel Goldstein, Hebrew University of Jerusalem |
| PhD Contributor | Hamidreza Abbaspourazad, USC Electrical Engineering |
| External Collaborator | Prof. Bijan Pesaran, NYU Neural Science |
| Published In | Nature Communications |
| Key Technology | Machine-learning algorithm; iGluSnFR4 glutamate sensor |
| Primary Application | Brain-machine interfaces, paralysis treatment, movement disorders |
| Funding / Awards | NIH Director’s New Innovator Award; ASEE Curtis W. McGraw Research Award |
| Reference Website | USC Viterbi School of Engineering |

At the University of Southern California, electrical engineering professor Maryam Shanechi and her PhD student Hamidreza Abbaspourazad set out to answer a deceptively simple question: when you reach out and grab a cup of coffee, what exactly is your brain doing? The motion feels effortless.
It isn’t. Your brain is coordinating 27 joint angles in real time, managing signals across billions of neurons, translating intent into precise physical action faster than conscious thought. Researchers have long debated how this happens. Shanechi’s team may have found the answer, or at least a significant piece of it.
Their approach was unusual. Instead of studying just one type of brain signal, they looked at two simultaneously: the spiking of individual neurons and the broader, wave-like activity called local field potentials (LFPs), which represent the collective hum of thousands of neurons working together. Most researchers study these separately.
Shanechi’s team built a new machine-learning algorithm designed to analyze both signal types at once, searching for patterns shared across the two scales. What they found was not what anyone predicted.
There was a common pattern — a kind of neural fingerprint — buried inside both types of activity. And it wasn’t just predictive of movement in a general sense. It was dominant. “When looking closer,” Shanechi explained, “we discovered that this common multiscale pattern actually happened to dominantly predict movement compared to all other existing patterns.” The team published the findings in Nature Communications, and the neuroscience community took notice.
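
To make the idea of a pattern shared across scales concrete, here is a minimal sketch of one generic way to look for such a thing: project both signal types into a common, maximally correlated subspace. It uses off-the-shelf canonical correlation analysis on synthetic data, so the variables, dimensions, and noise levels are all invented for illustration; it is emphatically not Shanechi’s published algorithm.

```python
# Illustrative only: a generic "shared pattern across two signal scales"
# search using canonical correlation analysis (CCA) on synthetic data.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
T = 2000                                            # time bins
shared = np.sin(np.linspace(0, 40, T))[:, None]     # planted shared latent

# Synthetic "spiking" features: 30 channels = shared latent + private noise
spikes = shared @ rng.normal(size=(1, 30)) + 0.8 * rng.normal(size=(T, 30))
# Synthetic "LFP" features: 10 channels driven by the same latent
lfp = shared @ rng.normal(size=(1, 10)) + 0.8 * rng.normal(size=(T, 10))

# CCA finds the direction in each space that is maximally correlated with
# the other -- a crude stand-in for a multiscale shared subspace.
cca = CCA(n_components=1)
u, v = cca.fit_transform(spikes, lfp)

print("spike/LFP canonical corr:", np.corrcoef(u[:, 0], v[:, 0])[0, 1])
print("recovery of planted latent:", abs(np.corrcoef(u[:, 0], shared[:, 0])[0, 1]))
```

On this toy data, the top canonical pair recovers the planted latent, which is the flavor of result the real algorithm extracts from spikes and field potentials at far greater sophistication.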
What made it stranger, and more compelling, was the next part: the same pattern appeared across different test subjects. Different people, same neural signature. It’s hard not to notice the implications of that — the possibility that human movement is, at some level, written in a shared biological language.
That idea gains even more weight when you put it alongside a separate but strangely parallel discovery out of Hebrew University. Dr. Ariel Goldstein and a team that included researchers from Google and Princeton recorded brain activity from people listening to a spoken story — a 30-minute podcast — using a method called electrocorticography, which captures signals with unusually high precision.
They found that the brain doesn’t comprehend language all at once. It processes meaning in ordered stages, each building on the last, moving from basic word recognition toward something richer and more contextual. Layer by layer, like sediment forming stone.
Here’s where it gets genuinely strange. That same layered structure — that step-by-step accumulation of meaning — mirrors almost exactly how large AI language models like GPT-2 and Llama 2 are built. Early brain responses matched early AI processing stages. Later brain responses, particularly in Broca’s area, aligned with deeper, more contextually sophisticated AI layers. “What surprised us most,” Goldstein said, “was how closely the brain’s temporal unfolding of meaning matches the sequence of transformations inside large language models.”
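
The style of comparison behind that quote can be sketched in a few lines: pull hidden states from every layer of a language model, then ask which layer best predicts activity recorded while someone hears the same words. The snippet below uses Hugging Face’s GPT-2 and a generic ridge-regression encoding model; the “neural” signal here is fabricated (secretly tied to layer 6, purely so the sweep has a peak to find), and none of it reproduces Goldstein’s actual pipeline.

```python
# Illustrative layer-by-layer "brain vs. LLM" encoding-model sweep.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from transformers import GPT2Model, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)

text = "The quick brown fox jumps over the lazy dog. " * 20
ids = tok(text, return_tensors="pt")
with torch.no_grad():
    hidden = model(**ids).hidden_states      # embeddings + one entry per layer

rng = np.random.default_rng(0)
n_tokens = ids["input_ids"].shape[1]

# Fabricated "electrode" signal, secretly driven by layer 6; a real study
# would use ECoG recordings aligned to the spoken words instead.
w = rng.normal(size=hidden[6].shape[-1])
neural = hidden[6][0].numpy() @ w + 5.0 * rng.normal(size=n_tokens)

# Which layer's features best predict the signal?
for layer, h in enumerate(hidden):
    X = h[0].numpy()                         # (tokens, hidden_dim) features
    r2 = cross_val_score(Ridge(alpha=100.0), X, neural, cv=5).mean()
    print(f"layer {layer:2d}: cross-validated R^2 = {r2:.3f}")
```

In the published work, early brain responses peak against early layers and later responses against deeper ones; the sweep above is just the skeleton of that analysis.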
These AI systems weren’t designed by studying brains. They were designed by engineers trying to solve a prediction problem. And yet they seem to have landed on a surprisingly similar architecture. That parallel either says something profound about the nature of language itself, or it’s a coincidence so large it doesn’t feel like one.
There’s a practical dimension here that deserves attention, too. Shanechi is explicit about where she hopes the movement research goes: brain-machine interfaces for paralyzed patients. If researchers can identify a consistent neural signature for reach and grasp — and translate that pattern into commands a machine can execute — then a person who has lost the ability to move their arm might one day recover meaningful function.
Her algorithm doesn’t just identify the pattern; it predicts arm and finger movements from it with measurable accuracy. The distance between laboratory result and clinical reality is still significant, and it would be a mistake to overstate how soon that gap will close. But the foundation is now there in a way it wasn’t before.
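
To give a feel for what “predicts arm and finger movements with measurable accuracy” involves, here is a toy version of the decoding step: a linear readout from neural features to continuous hand velocity, scored on held-out data. Real BMI decoders are far more elaborate (often dynamical models or Kalman-style filters), and everything below, features and kinematics alike, is synthetic.

```python
# Illustrative only: a linear "neural features -> movement" decoder.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
T, n_features = 5000, 40
# Hypothetical neural features (e.g., a shared multiscale latent per time bin)
neural = rng.normal(size=(T, n_features))
# Hypothetical 2-D hand velocity linearly driven by those features, plus noise
true_map = 0.3 * rng.normal(size=(n_features, 2))
velocity = neural @ true_map + 0.5 * rng.normal(size=(T, 2))

Xtr, Xte, ytr, yte = train_test_split(neural, velocity, test_size=0.2,
                                      random_state=0)
decoder = Ridge(alpha=1.0).fit(Xtr, ytr)
print("held-out R^2:", decoder.score(Xte, yte))   # decoding-accuracy proxy
```

The point of the toy is the shape of the problem: a mapping from neural activity to continuous movement commands, validated on data the decoder never saw.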
Underlying all of this is a quieter, more foundational breakthrough in how scientists can observe the brain at all. A newly developed protein sensor called iGluSnFR4, built by researchers at the Allen Institute and Janelia Research Campus, can now detect glutamate, the brain’s main excitatory neurotransmitter, at the level of individual synapses in real time. Until recently, catching those signals was essentially impossible: they’re too fast, too faint.
Now scientists can watch neurons talking to each other as it happens. One researcher compared the old approach to reading a book with all the words scrambled. This new tool, he said, finally shows you how the words connect.
What’s emerging from all of this work, taken together, is something that feels less like incremental scientific progress and more like the beginning of a genuine shift in how we understand the mind. The brain is no longer purely a biological mystery.
It’s becoming, slowly and imperfectly, a readable system — one that artificial intelligence is uniquely positioned to help decode. Whether that’s reassuring or unsettling probably depends on the day. But it is, without question, one of the more consequential developments in modern science.
