Meta Platforms Creating AI System to Decode Brainwaves [STUDY]

Meta Platforms, Facebook's parent company, is now exploring new ways to understand what goes on inside a person's brain.

Recently, the company announced that its scientists had developed an AI that can "hear" someone's thoughts through their brainwaves.

The study is still in its very early stages, but its main intention is to develop a technology that will help individuals with traumatic brain injuries who are paralyzed and unable to communicate.

The researchers also intend to record brain activity without probing the brain itself, which would normally require surgery.

The study looked at 169 healthy participants who listened to narratives and words read aloud. The scientists recorded their brain activity with several kinds of equipment and fed all of the gathered data into an AI model to search for matching patterns, the goal being for the algorithm to "hear" what the participants were listening to from the electrical and magnetic activity in their brains.
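To make the idea concrete, here is a minimal, purely illustrative Python sketch (not the study's actual pipeline) of how such data could be paired up for pattern-finding: the continuous brain recording is cut into fixed-length windows, and each window is matched with the stretch of audio the participant was hearing at that moment. The channel count, sampling rates, and window length are assumptions chosen only for the example.

    import numpy as np

    def make_pairs(brain, audio, brain_rate=120, audio_rate=16_000, window_s=3.0):
        """Yield (brain_window, audio_window) pairs covering the recording.

        brain: (channels, samples) array of the recorded brain signal.
        audio: (samples,) array of the audio the participant heard.
        """
        n_windows = int(brain.shape[1] // (brain_rate * window_s))
        for i in range(n_windows):
            b0 = int(i * window_s * brain_rate)
            b1 = int((i + 1) * window_s * brain_rate)
            a0 = int(i * window_s * audio_rate)
            a1 = int((i + 1) * window_s * audio_rate)
            yield brain[:, b0:b1], audio[a0:a1]

    # Toy example: one minute of a 208-channel recording and the matching audio.
    brain = np.random.randn(208, 120 * 60)
    audio = np.random.randn(16_000 * 60)
    pairs = list(make_pairs(brain, audio))
    print(len(pairs))  # 20 three-second windows, each paired with its audio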

Jean-Rémi King, a research scientist at the Facebook Artificial Intelligence Research (FAIR) lab, spoke with TIME about the study's challenges, vision, and ethical implications. He said that "there are a bunch of conditions, from traumatic brain injury to anoxia (brain oxygen deficiency), that make people unable to communicate."

He added that for these patients, the main communication pathway over the past decades has been brain-computer interfaces: specialists can decode a patient's brain activity using electrodes placed on the motor areas of the brain. But implanting electrodes inside a patient's brain is invasive, he added, which is why King and his team pursued the study with a different goal: to develop an AI system that decodes brain responses to spoken narratives.

Meta Platform (Photo: Bastian Riccardi)

The Initial Challenges of the Study

King mentioned two main challenges the team faced in the first stages of the research. "The signals we pick up from brain activity are extremely 'noisy,'" he said.

He explained that the layers shielding the brain from the outside (such as the skull, skin, and hair) act as a wall that corrupts and weakens the signal before it reaches the sensors. He added that they need "super advanced technology" to pick up the brainwaves.

The second challenge, according to King, is more conceptual. He said that, to a large extent, researchers do not know how the brain represents language.

He also explained that even when they get a crisp signal, without machine learning it would be difficult to say, "Okay, this brain activity means this word, or this phoneme, or an intent to act, or whatever."



The Research Approach and Initial Results

King offered a glimpse of the study by tweeting the latest updates on the AI system.

Their approach relies on a contrastive-learning architecture that aligns the brain signals with the corresponding speech sounds, which are fed into a speech module. That speech module is wav2vec 2.0, a model trained on 56,000 hours of speech.
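In broad strokes, contrastive learning of this kind trains a "brain encoder" so that the embedding of each brain segment lands close to the embedding of the speech heard at the same time and far from the other segments in a batch. The PyTorch sketch below is an illustration under that assumption, not Meta's actual code; the layer sizes, channel count, and the stand-in speech embeddings (which would come from wav2vec 2.0 in the real system) are all hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BrainEncoder(nn.Module):
        """Maps a window of multi-channel brain recordings to one embedding."""
        def __init__(self, n_channels=208, embed_dim=256):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_channels, 128, kernel_size=5, padding=2),
                nn.GELU(),
                nn.Conv1d(128, 128, kernel_size=5, padding=2),
                nn.GELU(),
            )
            self.proj = nn.Linear(128, embed_dim)

        def forward(self, x):
            # x: (batch, channels, time) -> (batch, embed_dim), L2-normalized
            h = self.conv(x).mean(dim=-1)  # average over the time axis
            return F.normalize(self.proj(h), dim=-1)

    def contrastive_loss(brain_emb, speech_emb, temperature=0.1):
        # Each brain window should match the speech embedding for the audio
        # heard at the same time (the diagonal) and mismatch all the others.
        logits = brain_emb @ speech_emb.t() / temperature
        targets = torch.arange(logits.shape[0])
        return F.cross_entropy(logits, targets)

    # Toy batch: 8 windows of 208-channel recordings (3 s at 120 Hz) paired
    # with 8 precomputed speech embeddings standing in for wav2vec 2.0 output.
    brain = torch.randn(8, 208, 360)
    speech = F.normalize(torch.randn(8, 256), dim=-1)
    loss = contrastive_loss(BrainEncoder()(brain), speech)
    print(loss.item())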

King clarified that the study is still developing and that the "brain module" should improve its performance as more data is collected. Progress so far is modest, but he and his team are optimistic that their collaborative work will pay off after further study.

Even though the research is performed within the FAIR lab, he clarified that the study's main purpose is not commercial; the work "is not directed top-down by Meta and is not designed for products."

Check out more news and information on Technology in Science Times.
