1st patient with new 'mind-reading' device uses brain signals to write
An implanted device allows a man to translate his brain signals into written words.
A man who developed paralysis and lost his ability to speak following a stroke can now communicate using a system that translates his brain's electrical signals into individual letters, allowing him to craft whole words and sentences in real time.
To use the device, which receives signals from electrodes implanted in his brain, the man silently attempts to say code words that stand in for the 26 letters of the alphabet, according to a new report, published Tuesday (Nov. 8) in the journal Nature Communications. These code words come from the NATO phonetic alphabet, in which "alpha" stands for the letter A, "bravo" for B and so on.
"The NATO phonetic alphabet was developed for communication over noisy channels," Sean Metzger, the study's first author and a doctoral candidate in the University of California, Berkeley and University of California, San Francisco's Graduate Program in Bioengineering, told Live Science. "That's kind of the situation we're in, where we're in this noisy environment of neural recordings." The researchers initially tried using individual letters instead of code words, but their system struggled to distinguish phonetically similar letters, such as B, D, P and G.
By silently speaking the NATO code words, the user generates brain activity that can then be decoded by algorithms that piece together the intended letters and insert spaces between words as they form. To end a sentence, the user attempts to squeeze their right hand; this produces distinct brain activity that tells the device to stop decoding.
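A rough way to picture that pipeline is sketched below in Python, assuming a pretrained classifier for the 26 code words plus the hand-squeeze command and a separate word-segmentation step; all of the names are hypothetical placeholders, not the study's actual software.

```python
# A minimal, hypothetical sketch of the spelling pipeline described above.
# The feature windows, classifier and word-segmentation step are illustrative
# placeholders, not the study's actual models.
NATO = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf",
        "hotel", "india", "juliett", "kilo", "lima", "mike", "november",
        "oscar", "papa", "quebec", "romeo", "sierra", "tango", "uniform",
        "victor", "whiskey", "xray", "yankee", "zulu"]
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def decode_sentence(neural_windows, classify, segment_words):
    """Turn a stream of neural feature windows (one per attempted code word)
    into a sentence.

    classify(window)       -> one of the 26 NATO code words, or "hand_squeeze"
    segment_words(letters) -> string with spaces inserted (e.g. by a language model)
    """
    letters = []
    for window in neural_windows:
        label = classify(window)        # which code word was silently attempted?
        if label == "hand_squeeze":     # attempted hand squeeze ends the sentence
            break
        letters.append(LETTERS[NATO.index(label)])
    return segment_words(letters)
```

In practice, decoders of this kind typically weigh the classifier's output against a language model rather than committing to one letter at a time; the sketch separates the steps only for clarity.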
Related: What happens in our brains when we 'hear' our own thoughts?
In recent tests, the man could produce sentences from a vocabulary of more than 1,150 words at a speed of 29.4 characters per minute, or about seven words per minute. The decoder device did occasionally make mistakes when translating his brain activity into letters, showing a median character error rate of 6.13%.
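For context on that error figure, character error rate is commonly computed as the edit distance between the decoded text and the intended text, divided by the length of the intended text. The short Python sketch below shows that standard definition; the study's exact scoring pipeline may differ in detail.

```python
# Character error rate as edit (Levenshtein) distance over reference length.
def char_error_rate(reference: str, hypothesis: str) -> float:
    prev = list(range(len(hypothesis) + 1))          # cost row for an empty reference
    for i, r in enumerate(reference, 1):
        cur = [i]                                    # cost of deleting i reference chars
        for j, h in enumerate(hypothesis, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution (or match)
        prev = cur
    return prev[-1] / len(reference)

print(char_error_rate("hello world", "hxllo world"))  # ~0.09, i.e. about a 9% error rate
```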
This marks an improvement from a previous test of the system, which was described in a 2021 report in The New England Journal of Medicine. In that test, the man built sentences by attempting to say whole words aloud from a set vocabulary of 50 words. The device could decode about 18 words per minute with a median accuracy of 75% and a maximum accuracy of 93%.
"That was great, but limited," in terms of vocabulary and in that the user attempted to speak the words aloud, Metzger said. The latest trial of the system shows that the system still worked in silence and that, by using a spelling approach, a user can greatly expand the available vocabulary. In the future, the two approaches could be easily combined: Users could rely on the whole-word decoder to quickly generate common words, and they could use the single-letter decoder to spell out less-common words, Metzger explained.
The man featured in both studies is the first participant in the Brain-Computer Interface Restoration of Arm and Voice (BRAVO) trial, which is being conducted at UC San Francisco. The trial is open to adults who've lost significant speech and motor control due to conditions such as stroke, amyotrophic lateral sclerosis (ALS) and muscular dystrophy.
At age 20, the participant had a severe stroke that cut off blood flow to a part of the brain stem called the pons. This structure acts as a bridge between the brain and the spinal cord, and following his stroke, the participant lost much of his ability to move his head, neck and limbs and all of his ability to produce intelligible speech. The man now generally communicates by using his limited head mobility to select letters on a screen with a physical pointer or a head-controlled cursor.
The man entered the BRAVO trial at age 36, at which time he underwent surgery to have a web of 128 electrodes laid over the surface of his brain. Crucially, these electrodes sit on top of a region of the wrinkled cerebral cortex that controls the muscles of the vocal tract, instructing them to move and thus produce specific sounds. The electrode array also covers the area of the brain involved in moving the hands.
Related: Can we think without using language?
For now, to connect to the decoder, the trial participant must be physically plugged into the device through a port that sticks up through the skin of his scalp. Ideally, in the future, the system will be completely wireless, Metzger said.
To calibrate the decoder, the researchers cued the participant to silently attempt to say each of the NATO code words and also practice attempting to squeeze his right hand. In time, they also had him spell out arbitrary words and copy down whole sentences, letter for letter. Eventually, after spending about 11 hours training with the system, the man could spell out his own original sentences and produce answers to specific questions.
One limitation of the system is that there's a 2.5-second time window allotted for each letter; in that time, the user silently says a code word, and the system records and decodes the resulting brain signals. Narrowing that time window and making the pace of decoding more flexible will be key to increasing the system's speed, Metzger said.
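As a back-of-the-envelope illustration of that constraint (the shorter window below is a hypothetical value, not one tested in the study):

```python
# At 2.5 seconds per letter attempt, at most 60 / 2.5 = 24 letters can be attempted per minute.
print(60 / 2.5)   # 24.0 letter attempts per minute
print(60 / 1.5)   # 40.0 per minute with a hypothetical shorter window
```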
Although the new study includes only one participant, it's "still a breakthrough study," said Jun Wang, an associate professor in the departments of Speech, Language, and Hearing Sciences and Neurology at the University of Texas at Austin. More research is needed to know whether the same approach will work for other patients, or whether it will need to be somewhat adapted for each person, Wang told Live Science in an email.
To be fit for daily use, such devices will need to be easy for patients and their caregivers to operate without assistance, and they'll need to interface with other computer software, Wang said.
The technology would be especially useful to patients in a "locked-in state" who are completely paralyzed but retain their cognitive function, he said. For paralyzed patients who can still move their eyes and blink, noninvasive, eye-tracking-based communication systems would likely remain the best option, he added.
Editor's note: This article was updated on Nov. 15 to adjust the phrasing of a comment from Jun Wang. The original article was published on Nov. 9.
Nicoletta Lanese is the health channel editor at Live Science and was previously a news editor and staff writer at the site. She holds a graduate certificate in science communication from UC Santa Cruz and degrees in neuroscience and dance from the University of Florida. Her work has appeared in The Scientist, Science News, the Mercury News, Mongabay and Stanford Medicine Magazine, among other outlets. Based in NYC, she also remains heavily involved in dance and performs in local choreographers' work.