Summary: The human brain processes spoken language in a step-by-step sequence that closely resembles how large language models transform text. Using electrocorticography recordings of people listening to a podcast, the researchers found that early brain responses aligned with early model layers, while deeper layers corresponded to later neural activity in regions such as Broca’s area.
The findings challenge traditional language theories that rely on fixed rules and instead highlight dynamic, context-based computation. The team also published a rich data set linking neural signals with linguistic features, offering a powerful resource for future research in neuroscience.
Key facts
Layered alignment: Early brain responses tracked the first layers of the AI model, while deeper layers were aligned with later neural activity.
Context over rules: AI-derived contextual embeddings predicted brain activity better than classical linguistic units.
New resource: Researchers published a large neurolinguistic data set to accelerate the neuroscience of language.
Source: Hebrew University of Jerusalem
In a study published in Nature Communications, researchers led by Dr. Ariel Goldstein of the Hebrew University of Jerusalem, in collaboration with Dr. Mariano Schain of Google Research and with Professor Uri Hasson and Eric Ham of Princeton University, discovered a surprising connection between the way our brains make sense of spoken language and the way advanced AI models analyze text.
Using electrocorticography recordings of participants listening to a thirty-minute podcast, the team showed that the brain processes language in a structured sequence that reflects the layered architecture of large language models such as GPT-2 and Llama 2.
What the study found
When we listen to someone speak, our brain transforms each incoming word through a cascade of neural calculations. Goldstein’s team found that these transformations develop over time in a pattern parallel to the tiered layers of AI language models.
Early layers of AI track simple features of words, while deeper layers integrate context, tone, and meaning. The study found that human brain activity follows a similar progression: early neural responses align with the early layers of the model and later neural responses align with deeper layers.
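To make “layers” concrete, the sketch below pulls one contextual embedding per layer out of a GPT-2 model with the Hugging Face transformers library. This is an illustration of the general technique, not the authors’ pipeline: the study used GPT-2 XL and Llama 2, while the small “gpt2” checkpoint and the sample sentence here are placeholders chosen to keep the example light.

```python
# Minimal sketch: extract one contextual embedding per layer of GPT-2.
# The study used GPT-2 XL and Llama 2; "gpt2" keeps the example small.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

text = "the brain processes language in a structured sequence"
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    out = model(**enc)

# out.hidden_states is a tuple (embedding layer + 12 transformer layers),
# each entry of shape (batch, n_tokens, hidden_size).
hidden_states = torch.stack(out.hidden_states)   # (13, 1, n_tokens, 768)

# One vector per layer for the final token: the model's contextual
# representation of that word given everything that came before it.
last_word_by_layer = hidden_states[:, 0, -1, :]  # (13, 768)
print(last_word_by_layer.shape)
```

Early rows of this layer-by-layer stack reflect mostly lexical information, while later rows mix in context; it is these per-layer vectors that encoding analyses relate to neural activity.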
This alignment was especially clear in high-level language regions, such as Broca’s area, where the peak brain response occurred later in time for deeper model layers.
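The lag analysis behind this kind of observation can be sketched as follows: for each layer, fit a linear model from that layer’s embeddings to the neural signal at a range of lags around word onset, and record the lag at which held-out prediction peaks. Everything below runs on synthetic stand-in data (random arrays in place of real word-aligned ECoG responses and embeddings, with the embedding dimension reduced for speed); it mirrors the shape of the analysis, not the authors’ actual code.

```python
# Sketch: per-layer lag analysis on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_words, n_layers, dim = 400, 13, 64      # dim reduced from the real hidden size
lags_ms = np.arange(-200, 601, 50)        # lags relative to word onset, in ms

# Stand-ins: per-layer word embeddings, and a word-by-lag matrix of neural
# responses (e.g. high-gamma power at each lag around word onset).
embeddings = rng.standard_normal((n_layers, n_words, dim))
neural = rng.standard_normal((n_words, len(lags_ms)))

peak_lag = []
for layer in range(n_layers):
    X = embeddings[layer]
    r_by_lag = []
    for j in range(len(lags_ms)):
        # Held-out prediction of the neural signal at this lag.
        y_hat = cross_val_predict(
            RidgeCV(alphas=np.logspace(-2, 4, 7)), X, neural[:, j], cv=5)
        r_by_lag.append(np.corrcoef(neural[:, j], y_hat)[0, 1])
    peak_lag.append(int(lags_ms[int(np.argmax(r_by_lag))]))

print(peak_lag)
```

On real data, the result reported in the study corresponds to peak_lag increasing with layer depth; on the random arrays above, the peaks are of course arbitrary.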
According to Dr. Goldstein, “What surprised us most was how closely the brain’s temporal development of meaning coincides with the sequence of transformations within large language models. Although these systems are built very differently, they both appear to converge in a similar step-by-step accumulation toward understanding.”
Why it matters
The findings suggest that artificial intelligence is not just a tool for generating text. It may also offer a new window into understanding how the human brain processes meaning. For decades, scientists believed that understanding language depended on symbolic rules and rigid linguistic hierarchies.
This study challenges that view. Instead, it supports a more dynamic and statistical approach to language, in which meaning emerges gradually through layers of contextual processing.
The researchers also found that classical linguistic features, such as phonemes and morphemes, did not predict real-time brain activity as well as AI-derived contextual embeddings did. This strengthens the idea that the brain integrates meaning in a more fluid, context-based way than previously believed.
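A rough sketch of how such a comparison is run: fit the same cross-validated linear encoding model twice, once on categorical symbolic features (one-hot codes standing in for units like phonemes or parts of speech) and once on contextual embeddings, then compare held-out correlations. The data below are synthetic placeholders constructed so that the signal lives in the contextual features, purely to illustrate the measurement; this is not the study’s actual analysis.

```python
# Sketch: symbolic one-hot features vs. contextual embeddings as
# predictors of a (synthetic) neural signal.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_words = 400
embeddings = rng.standard_normal((n_words, 64))        # contextual features
symbols = np.eye(12)[rng.integers(0, 12, n_words)]     # one-hot symbolic units

# Simulated response that depends on context, not on the discrete symbol.
neural = embeddings @ rng.standard_normal(64) + 0.5 * rng.standard_normal(n_words)

def encoding_score(X, y):
    """Held-out correlation between predicted and actual neural signal."""
    y_hat = cross_val_predict(RidgeCV(alphas=np.logspace(-2, 4, 7)), X, y, cv=5)
    return np.corrcoef(y, y_hat)[0, 1]

print("contextual:", encoding_score(embeddings, neural))
print("symbolic:  ", encoding_score(symbols, neural))
```

The comparison is deliberately symmetric: both feature sets go through the same regression and the same cross-validation, so any difference in score reflects what the features capture rather than how they are fit.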
A new benchmark for neuroscience
To advance the field, the team publicly released the entire data set of neural recordings aligned with linguistic features. This new resource allows scientists around the world to test competing theories about how the brain understands natural language, paving the way for computational models that more closely resemble human cognition.
Key questions answered:
Q: How does the brain process spoken language?
A: The brain transforms spoken language through a sequence of calculations that align with progressively deeper layers of large language models.
Q: Why do the findings matter?
A: They challenge rule-based theories of language and suggest instead that meaning emerges through dynamic, context-driven processing similar to modern artificial intelligence systems.
Q: What resource did the team release?
A: A publicly available data set that combines electrocorticography recordings with linguistic features, allowing new tests of competing linguistic theories.
Editorial notes:
This article was edited by a Neuroscience News editor, who reviewed the original article in its entirety; additional context was added by our staff.
About this language and AI research news
Author: Yarden Mills
Source: Hebrew University of Jerusalem
Contact: Yarden Mills – Hebrew University of Jerusalem
Image: The image is credited to Neuroscience News.
Original research: Open access.
“The temporal structure of natural language processing in the human brain corresponds to a layered hierarchy of large language models” by Uri Hasson et al. Nature Communications
Abstract
The temporal structure of natural language processing in the human brain corresponds to a layered hierarchy of large language models.
Large language models (LLMs) offer a framework for understanding language processing in the human brain. Unlike traditional models, LLMs represent words and context through layered numerical embeddings.
Here, we demonstrate that the layer hierarchy of LLMs aligns with the temporal dynamics of language understanding in the brain.
Using electrocorticography (ECoG) data from participants listening to a 30-minute narrative, we show that deeper LLM layers correspond to later brain activity, particularly in Broca’s area and other language-related regions.
We extract contextual embeddings from GPT-2 XL and Llama-2 and use linear models to predict neural responses over time. Our results reveal a strong correlation between model depth and the brain’s temporal receptive window during comprehension.
We also compare LLM-based predictions with symbolic approaches, highlighting the advantages of deep learning models in capturing brain dynamics.
We publish our aligned neural and linguistic data set as a public benchmark for testing competing theories of language processing.