A Decoder to Turn Brain Activity Into Speech

For people who have lost the ability to speak because of paralysis or conditions such as throat cancer, amyotrophic lateral sclerosis (ALS) and Parkinson's disease, there may be hope of regaining what they have lost. Scientists have developed a brain-machine interface that can decode brain activity and translate it directly into speech.

According to Edward Chang, a professor of neurological surgery at the University of California, San Francisco (UCSF) and the senior author of the study, this is the first time entire spoken sentences have been generated from an individual's brain activity. "This is an exhilarating proof of principle that, with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss," he said. The development could transform the lives of people who currently rely on communication methods so slow that casual conversation is impossible.

Today, people in this situation communicate through speech synthesizers such as the one used by Stephen Hawking. These devices can be tedious: they typically require spelling out words letter by letter using eye or facial-muscle movements, and allow only about eight words a minute, compared with natural speech, which averages 100 to 150 words per minute.

Previous research aimed at the same goal focused mostly on unraveling how speech sounds are represented in the brain, and had limited success. Chang's team tried something different: they focused on the brain areas that send the instructions coordinating the sequence of movements of the tongue, lips, jaw and throat during speech. "We reasoned that if these speech centres in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals," said Gopala Anumanchipalli, a speech scientist at UCSF and the paper's first author. Given the speed, subtlety and complexity of the movements people make during speech, this presented an extremely difficult computational challenge.

Their decoding system involves a two-step process: translating electrical signals in the brain into vocal movements, and then translating those movements into speech sounds. For the first step, the team gathered data from volunteers who were about to undergo neurosurgery for epilepsy and had been implanted with electrodes in preparation for the procedure. The volunteers read several hundred sentences aloud while the scientists recorded activity from a brain area known to be involved in speech production.
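
In code, this first stage amounts to learning a mapping from recorded neural features to articulator trajectories. Below is a minimal sketch using ridge regression as a stand-in for the study's actual models; the array shapes, feature names and the choice of a simple linear decoder are all illustrative assumptions, not the researchers' implementation:

    import numpy as np

    # Illustrative stand-ins for real data: 'ecog' would hold neural
    # features recorded from speech-motor cortex (time steps x electrodes)
    # and 'kinematics' the articulator trajectories (time steps x features
    # such as lip aperture or tongue-tip height).
    rng = np.random.default_rng(0)
    n_steps, n_electrodes, n_articulators = 5000, 256, 32
    ecog = rng.standard_normal((n_steps, n_electrodes))
    kinematics = rng.standard_normal((n_steps, n_articulators))

    # Stage 1: learn a mapping from brain activity to vocal-tract
    # movements. Ridge regression (L2-regularized least squares) is a
    # simple stand-in for the neural networks used in this kind of work.
    lam = 1.0
    W = np.linalg.solve(
        ecog.T @ ecog + lam * np.eye(n_electrodes),  # regularized Gram matrix
        ecog.T @ kinematics,                         # neural/movement covariance
    )

    # Decoding: predict movement trajectories from neural activity alone.
    predicted_movements = ecog @ W   # shape: (n_steps, n_articulators)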

For the second step, the researchers drew on data from earlier research showing how vocal movements are linked to speech sounds. By reversing this relationship, they could estimate their patients' vocal movements from the recorded audio.
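
Conceptually, this inversion can be pictured as running a forward movements-to-sound model backwards. The sketch below does so with a linear pseudo-inverse; real acoustic-to-articulatory inversion is nonlinear and far harder, so the matrix F and all shapes here are simplifying assumptions:

    import numpy as np

    # Assume a forward model F mapping vocal-tract movements to acoustic
    # features, of the kind earlier research provides. Here F is random.
    rng = np.random.default_rng(1)
    n_articulators, n_acoustic = 32, 64
    F = rng.standard_normal((n_articulators, n_acoustic))  # movements -> sound

    # Recorded acoustics for some stretch of speech (time steps x features).
    acoustics = rng.standard_normal((1000, n_acoustic))

    # 'Reversing' the mapping: recover the movement trajectories most
    # consistent with the audio via the pseudo-inverse of F.
    estimated_movements = acoustics @ np.linalg.pinv(F)  # (1000, n_articulators)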

They then put the two data sets together, training a machine-learning algorithm to match patterns of electrical activity in the brain with the vocal movements they would produce (such as pressing the lips together, tightening the vocal cords, or moving the tip of the tongue to the roof of the mouth). The result is what they call a "virtual vocal tract" that can be controlled directly by the brain to produce a synthetic approximation of a person's voice.
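
Chaining the two stages gives the "virtual vocal tract": neural activity in, synthetic speech features out. Here is a minimal sketch of that pipeline, with random matrices standing in for the learned models and a hypothetical virtual_vocal_tract function that is not part of the published work:

    import numpy as np

    rng = np.random.default_rng(2)
    n_electrodes, n_articulators, n_acoustic = 256, 32, 64

    # Random placeholders for the two learned models: stage 1 (brain ->
    # movements) would come from the paired neural/kinematic data,
    # stage 2 (movements -> sound) from the inverted articulatory mapping.
    W_stage1 = rng.standard_normal((n_electrodes, n_articulators))
    W_stage2 = rng.standard_normal((n_articulators, n_acoustic))

    def virtual_vocal_tract(ecog):
        """Chain the two stages: neural activity -> articulator movements
        -> acoustic features (which a vocoder would turn into audio)."""
        movements = ecog @ W_stage1   # e.g. lip closure, tongue position
        return movements @ W_stage2

    # Example: decode one second of (simulated) activity at 100 frames/s.
    ecog = rng.standard_normal((100, n_electrodes))
    acoustic_features = virtual_vocal_tract(ecog)
    print(acoustic_features.shape)   # (100, 64)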

The audio samples of the synthesized speech were then played to several hundred listeners for feedback. They reported that it sounded like a regular human voice, albeit with a strong foreign accent. In terms of intelligibility, the general intonation and the gender of the speaker came through well, but the decoder struggled with "b" and "p" sounds. Another noteworthy result is that the system could decode new sentences the algorithm had not been trained on, and it appeared to transfer between people, which is seen as crucial for such technology to be useful to patients.

The next big test will be to determine whether someone who cannot speak could learn to use the system without being able to train it on their own voice. This latest advancement could, for the first time, allow people who have been deprived of speech through illness or injury to converse naturally, without extra effort. It has the potential to restore not only the ability to state one's thoughts and needs, but also the joy and sparkle of conversation.
