Brain Chip Technology Enables Woman With ALS To Speak 13 Years After Being Diagnosed With Neurodegenerative Disease

For people with amyotrophic lateral sclerosis (ALS), there is still hope of speaking again. That is the case for Pat Bennett, 68, who was diagnosed with the neurodegenerative disease 13 years ago.

Brain Chip Enables Woman With ALS To Speak Again

Before the disease took her ability to talk in 2012, Bennett rode horses competitively, worked in human resources, and maintained an active lifestyle by jogging every day.

Now, thanks to four tiny sensors, each the size of a baby aspirin, implanted in her brain as part of a clinical trial at Stanford University, Bennett can speak again after a fashion: she can transmit words directly from her brain to a computer monitor at a record-breaking 62 words per minute, more than three times faster than the previous record for the technology.

Outside of Stanford, cognitive scientists and medical researchers are also impressed. Professor Philip Sabes at the University of California, San Francisco, an expert on brain-machine interfaces and co-founder of Elon Musk's Neuralink, called the new work a "big breakthrough."

According to Sabes, the brain chip's performance is already at a level that many people who cannot speak would want if the technology were available to them.

Bennett has battled amyotrophic lateral sclerosis (ALS) since 2012. This is the same disease that claimed the lives of Stephen Hawking in 2018 and Sandra Bullock's partner, Bryan Randall, earlier this summer.

According to Dr. Jaimie Henderson, the surgeon who placed the device's electrodes on the surface of Bennett's brain, the work demonstrates that intended speech can be decoded by recording activity from a very small area of the brain's surface.

Bennett did not hide how impressed she is with the brain chip technology. In an email, she said the initial results have proven the concept, and that the technology will eventually catch up to make it easily accessible to people who cannot speak, allowing nonverbal people to stay connected to the outside world.

Bennett noted that when people hear of ALS, they usually think of its impact on the arms and legs; for some patients, however, including her, the disease begins with difficulty speaking, which is why she cannot talk today.

Intracortical Brain-Computer Interface (iBCI)

Over 26 sessions, each lasting roughly four hours, Bennett worked with an artificial intelligence system, teaching it to recognize which patterns of brain activity correspond to the 39 phonemes, or speech sounds, used in spoken English.

During each training session, Bennett attempted to convey between 260 and 480 sentences to the AI using the brain-sensor technology that Stanford researchers call an intracortical brain-computer interface (iBCI).

The sentences were chosen randomly from a sizable data set called SWITCHBOARD that was derived from a collection of phone conversations gathered by Texas Instruments, a calculator manufacturer, for language research back in the 1990s.

Examples of informal sentences included "I left right in the middle of it" and "It's only been that way in the last five years."

Bennett and the Stanford team with whom she collaborated reduced the error rate of the AI translator to 9.1 percent during sessions where the sentence alternatives were restricted to a vocabulary of 50 words.

The iBCI's speech-decoding AI made more translation errors when the vocabulary was expanded to 125,000 words, which is closer to the total number of commonly used English words; the error rate rose to 23.8 percent.
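Error rates like these are typically reported as word error rate: the word-level edit distance between the decoded sentence and the sentence the speaker intended, divided by the length of the intended sentence. A minimal illustrative sketch (not the Stanford team's code), using one of the SWITCHBOARD-style sentences quoted above:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") in an 8-word sentence: 1/8
print(word_error_rate("i left right in the middle of it",
                      "i left right in a middle of it"))  # prints 0.125
```

By this measure, a 9.1 percent error rate means roughly one word in eleven is decoded incorrectly.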

Although the error rate is still high, the researchers anticipate that with more training and a wider interface (more implants, in other words), the brain and the iBCI's AI could communicate even better.

The system already decodes intended speech more than three times faster than any previous model.

The iBCI developed by the Stanford team produced speech at a pace of 62 words per minute, 3.2 times faster than the previous record and closer than ever to the average human speech rate of about 160 words per minute.
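For context, the figures reported above imply the earlier record and show how close the new system comes to natural speech (a back-of-the-envelope calculation, not from the study itself):

```python
new_rate = 62.0          # words per minute reported for the Stanford iBCI
speedup = 3.2            # reported multiple of the previous record
typical_speech = 160.0   # average conversational rate cited in the article

implied_previous = new_rate / speedup      # rate the old record must have been
fraction_of_speech = new_rate / typical_speech

print(round(implied_previous, 1))    # prints 19.4 (words per minute)
print(round(fraction_of_speech, 2))  # prints 0.39 (about 39% of natural speech)
```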

Check out more news and information on Neuralink in Science Times.
