Artificial Intelligence (AI) has made further progress. Researchers developed a new "multimodal algorithm" that can detect sarcasm.
Sarcasm Detector Using Artificial Intelligence (AI)
Not everyone can easily pick up others' sarcasm. According to Oscar Wilde, an Irish poet and playwright, sarcasm is "the lowest form of wit but the highest form of intelligence."
Even smart computer programs sometimes find it difficult to get the slight hints of sarcasm they encounter. However, a team of researchers found a way to solve this predicament.
Researchers at the University of Groningen's Campus Fryslân's Speech Technology Lab are aggressively tackling this problem. Xiyuan Gao, Shekhar Nayak, and Matt Coler have created a unique "multimodal algorithm" that goes beyond words to more accurately identify sarcasm.
Traditional sarcasm detection algorithms have a flaw: They rely on only one piece of data, usually text analysis. The method used by Gao, Nayak, and Coler adopts a more comprehensive approach by utilizing two complementary techniques: auditory cues for emotion recognition and sentiment analysis of spoken words.
"We extracted acoustic parameters such as pitch, speaking rate, and energy from speech. Then, we used Automatic Speech Recognition to transcribe the speech into text for sentiment analysis," Gao explained.
However, their strategy doesn't end there. Each speech segment is given an emoticon, which acts as a visual cue for the emotional content. By integrating auditory, textual, and emoticon input, their computer software gains remarkable proficiency in identifying sarcasm.
Although the team is confident in the performance of their algorithm, they recognize that it can be improved. Gao notes that sarcasm can vary based on culture and situation, which is why it's important to incorporate a variety of emotions and gestures in their model. They continue to work on it.
AI Always Choose War Over Peace
In related news, while AI has made huge progress in recent years, experts warn that we should still slow down. In another study, researchers simulated war scenarios using five AI programs, including ChatGPT and Meta's AI program. They found that every model selected nuclear assaults and bloodshed over peace.
To test the technology's reaction, the researchers experimented with three combat scenarios: invasions, cyberattacks, and pleas for peace. Instead of neutralizing the problem every time, the technology chose to attack.
The report was made public as the US military worked with OpenAI, which created ChatGPT, to incorporate the technology into its toolkit.
Additionally, AI systems have learned the art of deception. Another study said that they become expert liars.
Artificial intelligence algorithms are very good at telling falsehoods in the gaming business. Three notable cases are included in the study.
One is the board game CICERO by Meta, which aims to mimic diplomacy and allows players to bargain for supremacy. Meta intended their bot to be helpful and truthful. However, this was not the case.
The researchers assert that despite Meta's best efforts, CICERO turned out to be an "expert liar," not only having betrayed other players but also purposefully preparing to establish a fictitious alliance with a human player to deceive that player into opening themselves up for attack.
Check out more news and information on AI in Science Times.