
In 2003, Pancho’s life changed forever. That’s when a car crash sent the 20-year-old farm worker into emergency surgery to repair damage to his stomach. The operation went well, but the next day, a blood clot caused by the procedure cut off oxygen to his brain stem, leaving him paralyzed and unable to speak.

In February 2019, another operation transformed his life again. This time, as part of an audacious clinical trial, surgeons at the University of California, San Francisco, opened his skull and slipped a thin sheet packed with 128 microelectrodes onto the surface of his brain. The system, developed in the lab of UCSF neurosurgeon Edward Chang, would listen in on the electrical impulses firing across Pancho’s motor cortex as he tried to speak, then transmit those signals to a computer, whose language-prediction algorithms would decode them into words and sentences. If it worked, after more than 15 years with only grunts and moans, Pancho would have a voice again.


And it did. In a landmark study published last year, Chang and his colleagues reported that the neuroprosthesis enabled Pancho (a nickname, to protect the patient’s privacy) to type words on a screen by attempting to speak them. The algorithm correctly constructed sentences from a 50-word vocabulary about 75% of the time.

Now, in a new report published Tuesday in Nature Communications, Chang’s team has pushed that scientific milestone even further. After the researchers tweaked their system to recognize the NATO phonetic alphabet’s code words for individual letters (Alpha, Bravo, Charlie, and so on), the device was able to decode more than 1,100 words from the electrical activity inside Pancho’s brain as he silently tried saying the letters.

That included sentences the researchers prompted him to spell out, like “thank you” or “I agree.” But it also freed him up to communicate other things outside of their training sessions. One day late last summer, he told the researchers, “You all stay safe from the virus.”


“It was cool to see him express himself much more flexibly than what we’d seen before,” said David Moses, a postdoctoral engineer who developed the decoding software with graduate students Sean Metzger and Jessie R. Liu. The three are lead authors on the study.
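The principle behind the spelling system can be sketched in a few lines of code. The toy example below, in Python, is not the team’s decoder (the published system pairs far more sophisticated classifiers with language-prediction algorithms); it simply shows how noisy, letter-by-letter evidence can be matched against a fixed vocabulary. Every function name, probability, and word list here is invented for illustration.

```python
import numpy as np

# Toy spelling decoder: given a probability distribution over the 26 letters
# for each attempted code word, pick the vocabulary word that best explains
# the sequence of distributions. All values here are fabricated.

def decode_word(letter_probs, vocabulary):
    """Return the same-length vocabulary word with the highest log-probability."""
    best_word, best_score = None, -np.inf
    for word in vocabulary:
        if len(word) != len(letter_probs):
            continue  # this toy version only scores words of matching length
        score = sum(np.log(probs[ord(ch) - ord("a")] + 1e-12)
                    for probs, ch in zip(letter_probs, word))
        if score > best_score:
            best_word, best_score = word, score
    return best_word

def noisy_letter_evidence(letter, p_correct=0.5):
    """Simulate a classifier that puts half its probability mass on the right letter."""
    probs = np.full(26, (1 - p_correct) / 25)
    probs[ord(letter) - ord("a")] = p_correct
    return probs

vocab = ["thank", "agree", "virus", "stay", "safe"]
evidence = [noisy_letter_evidence(ch) for ch in "agree"]
print(decode_word(evidence, vocab))  # prints "agree" despite the noisy evidence
```

Even when each individual letter is classified correctly only half the time, the vocabulary constraint recovers the intended word, which is one reason spelling over a known word list can tolerate imperfect neural signals.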

Pancho is one of only a few dozen people on the planet who’ve had brain-computer interfaces, or BCIs, embedded in their gray matter as part of a clinical experiment. Together, these volunteers are pushing the boundaries of a technology with the potential to help thousands of people who’ve lost the ability to speak due to stroke, spinal cord injury, or disease communicate at least some of what’s going on inside their heads. And thanks to parallel advances in neuroscience, engineering, and artificial intelligence over the past decade, the still-small but burgeoning BCI field is moving fast.

Last year, scientists at Stanford University published another groundbreaking study in which a volunteer visualized himself writing words with a pen and a BCI translated those mental hand movements into text at up to 18 words a minute. In March, a team of international researchers reported for the first time that someone with locked-in syndrome, on a ventilator with full-body paralysis and no voluntary muscle control, used a BCI to communicate in full sentences one letter at a time.

The UCSF team’s latest study shows that their spelling system can be scaled up to give people robust vocabularies. In a set of offline experiments, computer simulations using recordings of Pancho’s neural activity suggested the system should be able to translate up to 9,000 words. And notably, it worked faster than the device Pancho currently uses to communicate: a screen he taps using a stylus he controls with his head. “Our accuracy is not 100% yet, and there are other limitations, but now we’re in the ballpark of existing technologies,” said Moses.

Dr. Eddie Chang, a neurosurgeon and chairman of the Department of Neurological Surgery at UCSF Medical School, prepares to connect an experimental brain implant to a computer that will help a paralyzed patient, Pancho, speak by reading his brain signals. Courtesy Mike Kai Chen

These systems are still far from producing natural speech in real time from continuous thoughts. But that reality is inching closer. “It’s likely in our reach now,” said Anna-Lise Giraud, director of the Hearing Institute at the Pasteur Institute in Paris, who is part of a European consortium on decoding speech from brain activity. “With each new trial we learn a lot about the technology but also about the brain functioning and its plasticity.”

Decoding speech is a much harder problem than reading the brain signals for movement that underlie mind-controlled prosthetic limbs. One of the main challenges is that many different brain regions are involved in language: it’s encoded across neural networks that control the movement of our lips, mouth, and vocal tract, associate written letters with sounds, and recognize speech. Current recording techniques can’t keep tabs on all of them with sufficient spatial and temporal resolution to decode their signals.

The other problem is that the signals produced by thinking about saying words tend to be weaker and noisier than those produced by actually speaking. Accurately pulling out attempted speech patterns requires taking into account both distributed, low-frequency signals and more localized high-frequency signals. But that difficulty also presents an opportunity: because there are many ways to combine those signals, researchers have multiple options for attempting speech decoding at different linguistic levels, from individual letters and phonemes to syllables and whole words.
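To make that concrete, here is a minimal Python sketch of the kind of feature extraction the paragraph describes: pairing a slow, low-frequency envelope with a localized high-frequency (“high-gamma”) envelope for each electrode. The band edges, sampling rate, and function names are assumptions chosen for illustration, not the parameters of the UCSF system.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000  # assumed sampling rate in Hz (illustrative, not the study's value)

def band_envelope(signal, low_hz, high_hz, fs=FS):
    """Amplitude envelope of one electrode's signal within a frequency band."""
    b, a = butter(4, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)   # zero-phase band-pass filter
    return np.abs(hilbert(filtered))    # envelope via the analytic signal

def speech_features(ecog):
    """Stack low-frequency and high-gamma envelopes for every electrode.

    ecog: array of shape (n_electrodes, n_samples)
    returns: array of shape (2 * n_electrodes, n_samples)
    """
    low = np.array([band_envelope(ch, 1, 30) for ch in ecog])           # distributed, slow
    high_gamma = np.array([band_envelope(ch, 70, 150) for ch in ecog])  # localized, fast
    return np.vstack([low, high_gamma])

# Toy usage: simulated noise standing in for one second of 128-channel recordings
features = speech_features(np.random.randn(128, FS))
print(features.shape)  # (256, 1000)
```

Features like these would then feed a classifier; the point of the sketch is simply that the two frequency ranges carry complementary information about attempted speech.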

These approaches, combined with the better language models produced in the past few years, have helped to overcome the field’s historical decoding difficulties, said Giraud. The most pressing bottleneck now is engineering interfaces compatible with long-term, chronic use. “The challenge will be to find the best compromise between invasiveness and performance,” she said.

Deeper-penetrating, surgically embedded electrodes can home in on the crackle of individual neurons, making them more adept at decoding speech signals. But the brain, bathed continuously in a corrosive salty fluid, is not exactly an electronics-friendly environment. And the operation comes with the risk of inflammation, scarring, and infection. Noninvasive interfaces that eavesdrop on electrical activity from outside the skull can only capture the collective firing of large groups of neurons, making them safer but not as powerful.

Companies and research groups, including the one Giraud is a part of, are now working on building next-generation, high-density surface electrodes that would eliminate the need for surgery and cumbersome accessory hardware. But for now, scientists testing technologies in the clinic are mostly sacrificing practicality for precision.

In the BRAVO trial at UCSF, for instance, volunteers like Pancho receive an implant that has to be attached to computers by a cable in order to read their brain activity. Chang’s team would like to transition to a wireless version that would beam data to a tablet and wouldn’t pose as much of a risk, but that kind of hardware update doesn’t happen overnight. “It should be possible,” said Moses. “It will just take time and effort.”

Developing noninvasive BCIs tailored for long-term use outside a lab isn’t just a prerequisite for making them more widely available. It’s also an ethical issue. No one wants patients to go through operations and training to use neural implants, only to be forced to have them removed because of an infection or because the electrodes stop functioning.

In 2013, BCI manufacturer NeuroVista folded when it couldn’t secure new funding, and epilepsy patients in a clinical trial of its device had to have their implants removed, an experience one patient described to the New Yorker as “devastating.” More recently, neuroprosthetics maker Second Sight stopped servicing the bionic eyes it had sold to more than 350 visually impaired people because of insufficient revenue, according to a recent IEEE Spectrum investigation. BCIs are starting to give people back the ability to speak. But if they’re to deliver on their full promise, they have to be built to last.
