New AI Model Can Tell Your Mental State From Your Posts on Reddit and Other Social Media

A group of computer scientists from Dartmouth College in Hanover, New Hampshire, began training an artificial intelligence model to analyze social media texts, particularly those from the social platform Reddit.

As specified in a Mail Online report, an AI model has been developed to detect a user's mental health simply by analyzing conversations on Reddit.

It is part of a developing wave of screening mechanisms that use computers to examine social media posts and understand people's mental states.

The team chose Reddit to train their model because the platform has about half a billion active users who regularly discuss a wide range of topics across a network of forums known as "subreddits."

Technology in Diagnosing Mental Health Conditions

The computer scientists focused on the emotional intent behind posts rather than their actual content, and found that this approach performs much better over time at detecting mental health issues.

This type of technology could one day be employed to help diagnose mental health conditions or to moderate content on social media. Past studies searching for evidence of mental health conditions in social media posts have examined the text itself rather than the intent behind it.

Describing their work, the team said there are many reasons people do not seek help for mental health disorders, including stigma, lack of access to services, and high costs.

(Photo: Justin Sullivan/Getty Images) Computer scientists developed a new AI model that can detect certain mental health conditions from Reddit posts.

Digital Screening Tools

According to Xiaobo Guo, a co-author of the new study, people tend to minimize signs of mental disorders or conflate them with stress. It is plausible that they will seek help with some prodding, Guo continued, adding that this is where digital screening tools can make a difference.

The co-author also said social media offers an easy way to tap into people's behaviors. Reddit was their platform of choice because it has a large, active user base discussing a wide array of topics. Posts and comments on Reddit are publicly available, which allowed the researchers to gather data dating back to 2011.

Emotional Disorders Tracked

In their research, with findings published as a preprint on arXiv, the team focused on what they call "emotional disorders": major depressive, anxiety, and bipolar disorders, which are characterized by distinctive emotional patterns that can be tracked.

They looked at data from social media users who had self-reported as experiencing one of those disorders, as well as from users without any mental disorders.

The team then trained their AI model to identify the emotions expressed in users' posts and to map the emotional transitions between different posts. Each post could be labeled joy, fear, no emotion, or a combination of emotions.

Essentially, the map is a matrix that shows how likely a user was to move from any one emotional state to another, such as from anger to a "neutral state of no emotion."
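To make the idea concrete, below is a minimal sketch in Python of how such a transition matrix could be built from a chronological sequence of per-post emotion labels. The label set, function name, and example history are hypothetical illustrations, not the authors' code or data.

```python
import numpy as np

# Hypothetical emotion label set; the study's exact labels are not listed here.
EMOTIONS = ["joy", "anger", "fear", "sadness", "no_emotion"]
INDEX = {label: i for i, label in enumerate(EMOTIONS)}


def transition_matrix(post_emotions):
    """Estimate how often a user moves from one emotional state to another.

    post_emotions: emotion labels for a user's posts, in chronological order.
    Returns a row-normalized matrix whose entry [i, j] approximates the
    probability of moving from emotion i in one post to emotion j in the next.
    """
    counts = np.zeros((len(EMOTIONS), len(EMOTIONS)))
    for prev, curr in zip(post_emotions, post_emotions[1:]):
        counts[INDEX[prev], INDEX[curr]] += 1

    # Normalize each row to probabilities; rows with no observations stay zero.
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)


# Example: a user whose posts swing between anger and a neutral state.
history = ["anger", "no_emotion", "anger", "anger", "no_emotion", "joy"]
print(transition_matrix(history))
```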

AI Model

In their study, the team explained that different emotional disorders have their own signature patterns of emotional transitions.

By creating an emotional "fingerprint" for a user and comparing it with the established signatures of emotional disorders, the AI model can detect signs of those disorders.
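To illustrate the comparison step, here is a minimal sketch that assumes the user's "fingerprint" and each disorder "signature" are transition matrices of the same shape. The Frobenius-distance comparison and the labels are illustrative assumptions only; the actual model learns this mapping rather than applying a fixed distance.

```python
import numpy as np


def closest_signature(user_fingerprint, disorder_signatures):
    """Compare a user's transition-matrix fingerprint with reference signatures
    and return the closest match by Frobenius distance.

    user_fingerprint: 2D array (the user's emotional transition matrix).
    disorder_signatures: dict mapping a hypothetical label, e.g. "depression",
    to a reference transition matrix of the same shape.
    """
    distances = {
        label: float(np.linalg.norm(user_fingerprint - signature))
        for label, signature in disorder_signatures.items()
    }
    best = min(distances, key=distances.get)
    return best, distances
```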

For instance, certain patterns of word use and tone within a message point to a particular emotional state, and when these are tracked across numerous posts, a broader pattern emerges.

To verify their findings, the team tested the model on posts that were not used during training and showed that it accurately predicted which users may or may not have one of the disorders mentioned, with accuracy that improved over time.

Information Leakage

According to another co-author of the study, Soroush Vosoughi, an assistant professor of computer science, such an approach sidesteps an important issue known as "information leakage" that typical screening tools run into.

Other models are built around scrutinizing and relying on the content of the text, Vosoughi explained, adding that while such models show high performance, they can also be misleading.

For example, he continued, if an AI model learns to correlate COVID-19 with sadness or anxiety, it will naturally assume that a scientist studying and posting "quite dispassionately" about the virus is suffering from anxiety and depression.

By contrast, the new model zeroes in only on the emotions and learns nothing about the specific topic or event described in the posts, a similar Today UK News report said.

Related information about AI and mental health is shown on the TDC Group's YouTube video below:

Check out more news and information on Social Media and Mental Health in Science Times.
