2020 — a year that, when I was a child, seemed to belong to science fiction — has actually, really arrived! Cars flying through the sky, robots doing the housework, clothes that automatically adjust their temperature, humans migrating to Mars, consciousness transmitted by brain waves… these were all once "settings" of the world of 2020. Although science fiction has not entirely become reality, we are undoubtedly witnessing the birth of many new technologies, some of which have progressed even beyond what we once imagined.
Photo credit: Pixabay
The brain-computer interface (BCI) is a technology that sounds straight out of science fiction: by interpreting the signals of neural electrical activity, it exchanges information between the brain and external devices. Recently, a team of neurosurgeons and engineers from Stanford University made an unexpected discovery in the field of brain-computer interfaces, upsetting decades of understanding of how the brain is organized and pointing to a new way for researchers to help people who cannot speak regain speech in the future.
The results of the study were published in the prestigious open-access journal eLife.
In the 1930s, neurosurgeon Wilder Penfield and colleagues proposed a model to explain how the brain controls movement: in a long strip of the brain spanning the top of the head — the motor cortex — different areas control the movements of the hands, legs, face, and other parts of the body.
Later studies gradually complicated this simplified model — for example, by finding that the regions responsible for finer subdivisions such as the fingers and the palm overlap. But in general, as the "motor homunculus" model familiar from neuroscience textbooks shows, the major parts of the body are controlled by distinct areas of the motor cortex.
The larger a body part appears on the "motor homunculus," the larger the area of motor cortex responsible for that part's movement (Photo: 123RF)
In a clinical trial launched a decade ago, scientists studying brain-computer interfaces began implanting special sensors in the cerebral cortex of volunteers, reading the signals of neurons in specific brain regions and using algorithms to convert them into movement commands, with the aim of letting volunteers control devices such as computers and prosthetics by "thought" alone.
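The conversion step described above — turning neural signals into movement — can be sketched very roughly as a linear decoder that maps a vector of firing rates to a cursor velocity. This is a minimal illustration, not the team's actual algorithm; the array size, weights, and data here are all assumed for the example.

```python
import numpy as np

# Hypothetical sketch of a BCI decoding step: map one time bin of neural
# firing rates to a 2-D cursor velocity with a linear decoder.
# In a real system the weights W and offset b are fit from calibration data;
# here they are random placeholders.

rng = np.random.default_rng(0)
n_neurons = 96                                   # e.g. one electrode array

W = rng.normal(scale=0.1, size=(2, n_neurons))   # decoder weights (assumed)
b = np.zeros(2)                                  # baseline offset

def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Convert firing rates (spikes/s, one value per neuron) into (vx, vy)."""
    return W @ firing_rates + b

# One simulated time bin of activity:
rates = rng.poisson(lam=10, size=n_neurons).astype(float)
velocity = decode_velocity(rates)
print(velocity.shape)  # (2,)
```

In practice the decoded velocity would be fed to a cursor or prosthetic in a closed loop, with the volunteer watching the result and adjusting their "imagined" movement.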
In several of the participants, the sensors were placed in an area of the motor cortex called the "hand knob," which had long been thought to be associated with hand and arm movements.
In 2017, Professor Krishna Shenoy, an electrical engineer at Stanford University, and Professor Jaimie Henderson, a neurosurgeon, published a major advance from the project: by using the brain-computer interface to decode the hand- and arm-movement signals of neurons in this brain region, they enabled several paralyzed people to type quickly and accurately by thought alone.
Two sensors implanted in the "hand knob" area of the brain's motor cortex record neural activity in that region (Photo: Jaimie Henderson / Stanford University)
As the team led by Professor Shenoy and Professor Henderson continued to decode the neural signals in the volunteers' brains, they made an unexpected new discovery.
Dr. Stavisky, the study's first author, said the two patients in the trial were quadriplegic from spinal cord injuries but could still speak. This let the researchers observe the neural activity in the hand-related area of the motor cortex while the volunteers spoke aloud. "It's a typical 'I don't know what's going to happen' study," says Professor Shenoy, "but we said: give it a try."
The electrode array on the sensor records the volunteers' neural activity at the resolution of a single neuron (Photo: Howard Hughes Medical Institute, Credit: Peter Barreras)
As a result, Dr. Stavisky and his colleagues found that the volunteers' neuronal activity changed significantly after a "start speaking" cue. These neurons were supposed to be active in controlling hand and arm movements — yet, unexpectedly, they became active when the volunteers spoke.
Not only that: when the volunteers made different sounds, the activity patterns of these neurons differed as well. The researchers gave the volunteers a list of 10 words and recorded their neural signals as they said each one. By analyzing the activity patterns, the researchers could identify which word a volunteer was speaking, with 85% and 55% accuracy in the two volunteers.
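The word-identification idea can be illustrated with a toy nearest-centroid classifier over per-word firing-rate patterns. Everything here — the word list, the number of neurons, and the synthetic data — is assumed for the example; the paper's actual decoding method may well differ.

```python
import numpy as np

# Toy sketch: identify which of 10 words was spoken from a pattern of
# neural firing rates, using a nearest-centroid classifier.
# All data are synthetic; the word list below is purely illustrative.

rng = np.random.default_rng(42)
n_words, n_neurons, n_trials = 10, 96, 20

# Pretend each word evokes its own characteristic firing-rate pattern,
# observed with trial-to-trial noise.
templates = rng.normal(size=(n_words, n_neurons))
X = np.concatenate([t + 0.5 * rng.normal(size=(n_trials, n_neurons))
                    for t in templates])
y = np.repeat(np.arange(n_words), n_trials)

# "Train": average each word's trials into a centroid pattern.
# (For simplicity we score on the same trials, i.e. resubstitution accuracy.)
centroids = np.stack([X[y == w].mean(axis=0) for w in range(n_words)])

# "Test": label every trial by its nearest centroid (Euclidean distance).
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y).mean()
print(f"accuracy: {accuracy:.0%}")  # far above the 10% chance level
```

Chance performance with 10 words is 10%, which is why accuracies of 85% and 55% are striking evidence that these "hand" neurons carry speech information.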
Dr. Stavisky said they next want to record neural activity in this brain region while volunteers speak longer sentences and paragraphs — and then, by decoding the neural signals, reconstruct what the volunteers say.
The study's three lead authors: Dr. Stavisky, Professor Shenoy, and Professor Henderson (Photo: Howard Hughes Medical Institute, Credit: Jaimie Henderson)
"I'm very excited about this paper," commented Professor Edward Chang of the University of California, San Francisco, another prominent scientist in the field of brain-computer interfaces. "It raises the question of how exclusive the functional assignments of specific brain areas really are… I think (the lack of exclusivity) is something we didn't fully appreciate in the past."
This also means that the hand-movement area of the motor cortex may offer a previously unknown path toward letting people who have lost the ability to speak talk again.
The researchers hope to build on these findings to develop a medical device, implanted in the brain, that could help people who have lost the ability to speak regain it. Asked when such a device might become reality, Professor Shenoy said: "I think we could see something in the next 10 years."
We look forward to the next 10 years of progress and breakthroughs that will bring science fiction to life.