Artificial Intelligence Can Map How the Brain's Complex Network Works to Understand Sentences

It is remarkable how the brain understands spoken sentences, given that the same words can take on an entirely different meaning when rearranged. Using artificial intelligence (AI) and neuroimaging, researchers have mapped the complex network that lets the brain comprehend spoken sentences.

In their paper entitled "Deep artificial neural networks reveal a distributed cortical network encoding propositional sentence-level meaning," the team from the University of Rochester Medical Center described how people understand the sentences they hear.

The study presents a major scientific step toward understanding how, and in which parts of the brain, sentence-level meaning is constructed.

Using AI to Map How the Brain's Complex Network Works to Understand Sentences (Photo: Pixabay)


More Than Words

Andrew Anderson, Ph.D., a research assistant professor at the University of Rochester Del Monte Institute for Neuroscience and lead author of the study, said it has been unclear whether the integration of meaning in sentences happens in a particular part of the brain, such as the anterior temporal lobes, or whether a more complex neural network is involved.

The study, published in the Journal of Neuroscience, suggests that the meaning of a sentence is more than the individual words being spoken: a sentence is greater than the sum of its parts.

For example, Devdiscourse reported that the sentences "the car ran over the cat" and "the cat ran over the car" contain exactly the same set of words, yet convey entirely different meanings because of how those words are arranged.
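To see concretely why word order matters, here is a toy Python sketch (not from the study): a bag-of-words representation, which only counts words, cannot distinguish the two sentences, while any order-preserving representation can.

```python
# Toy illustration: bag-of-words counts are blind to word order, so the two
# sentences look identical; the word sequences themselves do not.
from collections import Counter

a = "the car ran over the cat"
b = "the cat ran over the car"

print(Counter(a.split()) == Counter(b.split()))  # True: same words, same counts
print(a.split() == b.split())                    # False: the order differs
```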


Unlocking Extremely Complex Neural Signals

The researchers used AI and neuroimaging to unlock extremely complex neural signals in the brain that are responsible for language processing, according to Science Daily.

They gathered data from study participants who underwent fMRI while reading sentences. Their findings revealed that brain activity spanned a network of regions, including the anterior and posterior temporal lobes, the inferior parietal cortex, and the inferior frontal cortex.

The researchers used InferSent, an AI model developed by Facebook that produces unified semantic representations of sentences, to predict the patterns of fMRI activity recorded across those brain regions while the participants read sentences.

"It's the first time that we've applied this model to predict brain activity within these regions, and that provides new evidence that contextualized semantic representations are encoded throughout a distributed language network, rather than at a single site in the brain," the researchers said.

The team believes that their findings could help in understanding medical conditions such as Alzheimer's disease. The researchers are developing similar methods to study how patients' ability to comprehend language breaks down in the early stages of Alzheimer's disease.

Additionally, the researchers said they are interested in extending their models to predict the brain activity elicited as language is produced.

"The current study had people read sentences, in the future we're interested in moving forward to predict brain activity as people might speak sentences," Anderson said.


Check out more news and information on Language and Brain on Science Times.
