Summary: A new study reveals how uniquely wired human brains can perceive the world in strikingly similar ways. The researchers recorded neural activity in vivo in epilepsy patients and found that while each person’s neurons respond differently to the same image, the relationships between those neural responses remain consistent across individuals.
This shared relational pattern allows different brains to interpret the same scene (such as a dog running on the beach) in similar ways. The findings shed light on the universal structure of perception and can help refine artificial intelligence models inspired by human cognition.
Key facts:
- Unique but aligned: Each person's neurons activate differently, but the relationships between their activity patterns are consistent across people.
- Shared perception: This common relational code explains how humans interpret the world similarly despite individual neural wiring.
- Implications for AI: Understanding how the brain organizes perception could improve artificial neural networks and machine learning.
Source: Reichman Institute
How do we all see the world similarly?
Imagine sitting with a friend at a coffee shop, both of you looking at a phone screen that shows a dog running on the beach.
Although each of our brains is a world unto itself, made up of billions of neurons with completely different connections and unique activity patterns, both would describe it as: “A dog on the beach.” How is it possible that two such different brains lead to the same perception of the world?
A joint research team from Reichman University and the Weizmann Institute of Science investigated how people with differently wired brains can still perceive the world in strikingly similar ways.
Every image we see and every sound we hear is encoded in the brain by the activation of small processing units called neurons: nerve cells roughly ten times thinner than a human hair.
The human brain contains 85 billion interconnected neurons that allow us to experience, think, and respond to the world.
The question that has intrigued brain researchers for years is how this coding takes place and how it is possible for two people to have completely different neural codes and yet end up with similar perceptions.
The research team, led by Reichman University graduate student Ofer Lipman and supervised by Prof. Rafi Malach and Dr. Shany Grossman of the Weizmann Institute and Prof. Doron Friedman and Prof. Yacov Hel-Or of Reichman University, set out to observe how neurons in the brain encode information in real time.
This is an extremely challenging task as most brain imaging methods provide only a low-resolution image, similar to a satellite photograph of a city where you can see the roads but not the people on the streets.
To overcome this challenge, the researchers turned to a unique data source: patients with epilepsy who had electrodes implanted in their brains for medical purposes. While the implants were placed to help doctors locate the epicenter of patients’ seizures, they also offered researchers a rare window into the activity of brain neurons (recorded live, not simulated or inferred) as the patients viewed a series of images.
The team of researchers found that, just like in artificial neural networks (the technology behind AI), the raw patterns of activity in the human brain differ from person to person.
When looking at a cat, the neurons that “light up” (become active) in one person’s brain may be entirely different from those that respond in another person’s brain.
But here’s a surprising finding: When the researchers moved from examining the raw activity of neurons to looking at relationships between overall patterns of neuron activity (i.e., how strongly the brain overall responds to a cat versus a dog), they discovered a common relational structure among all participants.
For example, if a brain’s overall activity in response to a cat is more similar to its response to a dog than, say, an elephant, that same relationship is likely to hold in all other brains. In other words, the actual activity patterns in different brains may not be identical, but the relationship between them is preserved.
This relational representation may be the way the brain organizes information so that all humans can understand the world similarly, even when the underlying neural coding differs.
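The idea of a preserved relational structure can be sketched in a toy simulation. This is purely illustrative, not the study's analysis: the stimulus features, the random "wiring," and all numbers below are assumptions. Two simulated brains receive the same stimuli through completely different wirings, yet both agree that the response to a cat is closer to the response to a dog than to the response to an elephant.

```python
# Toy sketch (illustrative assumptions, not the study's data): two simulated
# "brains" with different random wirings preserve the same relational structure.
import numpy as np

rng = np.random.default_rng(0)

# Assumed shared stimulus features: cat and dog are similar, elephant is distinct.
stimuli = {
    "cat":      np.array([1.0, 0.9, 0.1]),
    "dog":      np.array([0.9, 1.0, 0.2]),
    "elephant": np.array([0.1, 0.2, 1.0]),
}

def simulate_brain(n_neurons=50):
    # Each brain gets its own private random wiring from features to neurons.
    wiring = rng.normal(size=(3, n_neurons))
    return {name: feats @ wiring for name, feats in stimuli.items()}

def dist(a, b):
    # Euclidean distance between two activity patterns.
    return np.linalg.norm(a - b)

for label in ("brain A", "brain B"):
    resp = simulate_brain()
    cat_dog = dist(resp["cat"], resp["dog"])
    cat_ele = dist(resp["cat"], resp["elephant"])
    # The raw patterns differ between the two brains, but in both,
    # "cat" lands closer to "dog" than to "elephant".
    print(f"{label}: cat-dog {cat_dog:.2f} vs cat-elephant {cat_ele:.2f}")
```

The reason the relationship survives is that a random wiring roughly preserves relative distances between inputs, even though the individual neuron responses it produces are entirely different from one brain to the next.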
“This study brings us one step closer to deciphering the brain’s ‘representational code’: the language in which our brain stores and organizes information,” explains Lipman.
“This understanding helps advance not only neuroscience, but also AI: insights into how the brain represents information can inspire the design of more efficient and intelligent artificial networks, and vice versa: artificial networks can generate insights that deepen our understanding of the brain.
“This study is part of a broad series of work in which researchers compare the representation of information in natural networks (the human brain) with the representation of information in artificial networks (AI). This integration opens the door to a richer understanding of ourselves and the systems we build.”
So, the next time you see a dog running on the beach and think “a dog,” remember that behind this simple thought lies a vast and complex code that science is only beginning to decipher.
Key questions answered:

Q: How can differently wired brains perceive the world in similar ways?
A: Although neurons differ between individuals, the relationships between neural responses to objects (like the way the brain reacts to a cat versus a dog) follow a universal pattern.

Q: How did the researchers measure neural activity?
A: They recorded real-time neural activity from epilepsy patients with brain implants, providing direct insights into how information is represented in the human brain.

Q: Why do the findings matter for artificial intelligence?
A: The discovery bridges human cognition and artificial intelligence, showing how understanding the brain’s representational code could guide the design of more efficient AI systems.
About this perception and neuroscience research news.
Author: Lital Ben Ari
Source: Reichman Institute
Contact: Lital Ben Ari – Reichman Institute
Image: Credited to Neuroscience News.
Original research: Open access.
“Cross-subject-invariant relational structures in high-order human visual cortex” by Ofer Lipman et al. Nature Communications
Abstract
Cross-subject-invariant relational structures in high-order human visual cortex.
It is a fundamental fact of behavior that different individuals see the world in very similar ways. This is an essential foundation for humans’ ability to cooperate and communicate.
However, what are the neural properties that underlie these commonalities between subjects in our visual world?
Discovering which aspects of neural coding remain invariant across the brains of individuals will shed light not only on this fundamental question but will also point to the neural coding scheme at the basis of visual perception.
Here, we addressed this question by obtaining intracranial recordings from three groups of patients participating in a visual recognition task (in total, 19 patients and 244 high-order visual contacts included in the analyses) and examining the neural coding scheme that was most consistent across the individuals’ visual cortex.
Our results highlight relational coding, expressed by the set of similarity distances between profiles of pattern activations, as the most consistent representation across individuals.
Alternative coding schemes, such as activation pattern coding or linear coding, did not achieve similar consistency across subjects.
Therefore, our results support relational coding as the core neural code underlying the shared perceptual content of individuals in the human brain.
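The contrast the abstract draws between coding schemes can be illustrated with a minimal simulation. This is a sketch under assumed conditions (shared stimulus features projected through private random wirings), not the paper's analysis pipeline: raw activation patterns fail to match across simulated subjects, while the matrix of pairwise distances between responses (a representational dissimilarity matrix, RDM) matches closely.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): cross-subject
# consistency of raw activation patterns vs. relational structure (RDMs).
import numpy as np

rng = np.random.default_rng(1)

n_stimuli, n_features, n_neurons = 8, 5, 60
features = rng.normal(size=(n_stimuli, n_features))  # shared stimulus space

def subject_responses():
    # Each subject's private random wiring from features to neurons.
    wiring = rng.normal(size=(n_features, n_neurons))
    return features @ wiring

def rdm(responses):
    # Representational dissimilarity matrix: pairwise distances
    # between the response patterns to all stimuli.
    diffs = responses[:, None, :] - responses[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

a, b = subject_responses(), subject_responses()

# Raw-pattern coding: correlate subject A's activity directly with subject B's.
raw_r = np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Relational coding: correlate the upper triangles of the two RDMs.
iu = np.triu_indices(n_stimuli, k=1)
rel_r = np.corrcoef(rdm(a)[iu], rdm(b)[iu])[0, 1]

print(f"raw-pattern correlation across subjects: {raw_r:.2f}")
print(f"relational (RDM) correlation across subjects: {rel_r:.2f}")
```

The raw correlation hovers near zero because the two wirings are independent, whereas the RDM correlation is high because random projections approximately preserve the distances between stimuli, which is the intuition behind relational coding being the cross-subject-invariant scheme.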