Summary: A new brain-computer interface (BCI) allowed a paralyzed man to control a robotic arm simply by imagining movements. Unlike previous BCIs, which worked for only a few days, this AI-enhanced device worked reliably for seven months. Its AI models adapt to natural day-to-day changes in brain activity, maintaining accuracy over time.
After practicing with a virtual arm, the participant grasped, moved, and manipulated real-world objects. The technique represents a major step toward restoring movement for people with paralysis. Researchers are now refining the system for smoother operation and testing it in home environments.
Key Facts:
Long-term stability: The AI-enhanced BCI worked for seven months, far longer than earlier versions, which lasted only a day or two.
Adaptive learning: The system adapts to daily shifts in brain activity and maintains its accuracy.
Source: UCSF
Researchers at UC San Francisco have enabled a paralyzed man to control a robotic arm through a device that relays signals from his brain to a computer.
He was able to grasp, move and drop objects simply by imagining his actions.
The device, known as a brain-computer interface (BCI), worked for a record seven months without needing adjustment. Until now, such devices have worked for only a day or two.
The BCI relies on an AI model that can adjust to the small changes that take place in the brain as a person repeats a movement, or in this case an imagined movement, and learns to perform it in a more refined way.
“This blending of learning between humans and AI is the next phase for these brain-computer interfaces,” said neurologist Karunesh Ganguly, MD, professor of neurology and a member of the UCSF Weill Institute for Neurosciences. “It’s what we need to achieve sophisticated, lifelike function.”
The study, funded by the National Institutes of Health, will appear in Cell on March 6.
The key was discovering how activity patterns in the brain shift from day to day as a research participant repeatedly imagines making specific movements. Once the AI was programmed to account for these shifts, it worked for months at a time.
Location, location, location
Ganguly had studied how patterns of brain activity in animals represent specific movements and saw that these representations changed from day to day as the animals learned. He suspected the same thing was happening in humans, and that this was why their BCIs so quickly lost the ability to recognize these patterns.
Ganguly and neurology researcher Nikhilesh Natraj, PhD, worked with a study participant who had been paralyzed by a stroke years earlier. He could not speak or move.
The participant had a small sensor implanted on the surface of his brain that could pick up brain activity when he imagined moving.
To see whether his brain patterns changed over time, Ganguly asked the participant to imagine moving different parts of his body, like his hands, legs, and head.
Although he could not actually move, the participant’s brain could still produce the signals for a movement when he imagined making it. The BCI recorded the brain’s representations of these movements through the sensor.
Ganguly’s team found that the shape of these representations in the brain stayed the same, but their locations shifted slightly from day to day.
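To make that finding concrete, here is a minimal numerical sketch with made-up data and names, not the study’s actual analysis: the pairwise distances between the brain’s movement representations (their “shape”) stay fixed under a shared day-to-day offset, while the overall centroid (their “location”) moves.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_channels = 64

# Hypothetical neural activity centers for three imagined movements on day 1.
day1_means = rng.normal(0.0, 1.0, size=(3, n_channels))

# Day 2: the same pattern, shifted by a shared day-specific offset (the drift).
offset = rng.normal(0.5, 0.05, size=n_channels)
day2_means = day1_means + offset

# The "shape" (relative distances between movement representations) is unchanged...
print(pdist(day1_means))  # pairwise distances on day 1
print(pdist(day2_means))  # identical pairwise distances on day 2

# ...but the "location" (the overall centroid) has drifted.
print(np.linalg.norm(day2_means.mean(axis=0) - day1_means.mean(axis=0)))
```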
From virtual to reality
Ganguly then asked the participant to imagine making simple movements with his fingers, hands, or thumbs over the course of two weeks while the sensor recorded his brain activity to train the AI.
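As a rough illustration of this training step, the sketch below fits a simple off-the-shelf linear classifier to simulated, labeled neural features. The data, the feature layout, and the classifier choice are all assumptions; the study’s actual AI model was more sophisticated.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_trials, n_channels = 600, 64
movements = ["finger", "hand", "thumb"]

# Simulated two-week training set: neural features recorded during each
# imagined movement, labeled by which movement was cued.
labels = rng.integers(0, len(movements), size=n_trials)
class_means = rng.normal(0.0, 1.0, size=(len(movements), n_channels))
features = class_means[labels] + rng.normal(0.0, 0.8, size=(n_trials, n_channels))

# Fit a simple linear decoder mapping neural features to intended movements.
decoder = LinearDiscriminantAnalysis()
decoder.fit(features, labels)
print("training accuracy:", decoder.score(features, labels))
```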
The participant then tried to control a robotic arm and hand, but the movements still weren’t very precise.
So Ganguly had the participant practice on a virtual robotic arm that gave him feedback on the accuracy of his imagined movements. Eventually, he got the virtual arm to do what he wanted.
Once the participant began practicing with the real robotic arm, it took only a few sessions for him to transfer his skills to the real world.
He could make the robotic arm pick up blocks, turn them, and move them to new locations. He could even open a cabinet, take out a cup, and hold it up to a water dispenser.
Months later, the participant could still control the robotic arm after a 15-minute “tune-up” that adjusted for how his movement representations had drifted since he began using the device.
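One way to read the “tune-up” is as a short recalibration block: record a few labeled trials, estimate how far the day’s activity has drifted, and subtract that offset so the original decoder keeps working unchanged. The end-to-end sketch below assumes simulated data and a plain mean-shift model of drift; it illustrates the idea and is not the team’s actual method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_channels = 64

# Train a decoder on simulated reference-day data (three imagined movements).
class_means = rng.normal(0.0, 1.0, size=(3, n_channels))
train_labels = rng.integers(0, 3, size=300)
train_x = class_means[train_labels] + rng.normal(0.0, 0.8, size=(300, n_channels))
decoder = LinearDiscriminantAnalysis().fit(train_x, train_labels)

# Months later: a brief calibration block. For illustration, assume the day's
# drift happens to push all activity toward the second movement's pattern.
offset = 0.8 * (class_means[1] - class_means[0])
calib_labels = rng.integers(0, 3, size=30)
calib_x = class_means[calib_labels] + offset + rng.normal(0.0, 0.8, size=(30, n_channels))

# The "tune-up": estimate the shared offset from the calibration block and
# remove it, leaving the trained decoder itself untouched.
est_offset = calib_x.mean(axis=0) - train_x.mean(axis=0)
print("accuracy before tune-up:", decoder.score(calib_x, calib_labels))
print("accuracy after tune-up: ", decoder.score(calib_x - est_offset, calib_labels))
```

Keeping the decoder fixed and correcting only the input statistics is what makes the daily adjustment so quick: only the offset has to be re-estimated, not the whole model.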
Ganguly is now refining the AI models to make the robotic arm move faster and more smoothly, and planning to test the BCI in a home environment.
For people with paralysis, the ability to feed themselves or get a drink of water would be life-changing.
Ganguly believes this is within reach.
“We’ve learned how to build the system now, and I’m confident we can make this work,” he said.
Authors: Other authors of the study include Sarah Seko and Adelyn Tu-Chan of UCSF and Reza Abiri of the University of Rhode Island.
Funding: This work was supported by the National Institutes of Health (1 DP2 HD087955) and the UCSF Weill Institute for Neurosciences.
About this AI and robotics research news
Author: Robin Marks
Source: UCSF
Contact: Robin Marks – UCSF
Image: Image credited to Neuroscience News
Original research: Open access.
“Sampling representational plasticity of simple imagined movements across days enables long-term neuroprosthetic control” by Karunesh Ganguly et al. Cell
Abstract
Sampling representational plasticity of simple imagined movements across days enables long-term neuroprosthetic control
The nervous system must balance the stability and plasticity of neural representations. The representational stability of simple, well-rehearsed behaviors, particularly in humans, and its adaptability to new contexts remain unclear.
Using an electrocorticography (ECoG) brain-computer interface (BCI) in a tetraplegic participant, we found that the low-dimensional manifold and the relative representational distances of a repertoire of simple imagined movements were remarkably stable.
However, the absolute location of the manifold showed constrained day-to-day drift. Strikingly, neural statistics, especially variance, could be flexibly regulated to increase representational distances during BCI control without somatotopic changes.
Discriminability strengthened with practice and was BCI-specific, demonstrating contextual specificity.
Sampling representational plasticity and drift across days then revealed a meta-representational structure with generalizable decision boundaries for the repertoire. This enabled long-term neuroprosthetic control of a robotic arm and hand for reaching and grasping.
Our study offers insight into the mesoscale representational statistics that enable long-term complex neuroprosthetic control.