Why Does Google's Latest Experimental Search Tool Feature Make Bizarre Suggestions Like Eating Rocks?
(Photo: Pexels/Aaron Kittredge)

Artificial Intelligence (AI) can sometimes go rogue, and this seems to be the case with Google's latest experimental search feature.

Google's Recent Generative AI Makes Bizarre Suggestions

Hundreds of millions of users now have access to Google's latest experimental search feature in Chrome, Firefox, and the Google app.

"AI Overviews" saves you time by using generative AI, the same technology that powers the rival product ChatGPT, to summarize search results. Ask it "how to keep bananas fresh for longer," and it produces a helpful overview of suggestions, such as storing them in a cool, dark place and keeping them away from other fruits like apples.
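As a rough illustration of the general idea, not Google's actual pipeline, a search engine could pass its top result snippets to an off-the-shelf generative summarization model. In the minimal sketch below, the model choice and the banana-related snippets are assumptions made for the example:

```python
# Minimal sketch of search-result summarization (NOT Google's implementation).
# Assumes the Hugging Face "transformers" library; the model choice is arbitrary.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Hypothetical top-result snippets for "how to keep bananas fresh for longer".
snippets = [
    "Bananas ripen faster near apples, which give off ethylene gas.",
    "Store bananas in a cool, dark place away from direct sunlight.",
    "Wrapping the stems in plastic wrap can slow ripening.",
]

# Concatenate the snippets and ask the model for a short overview.
overview = summarizer(" ".join(snippets), max_length=60, min_length=20)
print(overview[0]["summary_text"])
```

Because such a model condenses whatever text it is given, the quality of the overview depends entirely on the quality of the retrieved snippets, which is exactly where the trouble described below begins.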

AI Overviews can teach you useful things, such as explaining that Whack-A-Mole is an iconic arcade game in which players use a mallet to smash randomly appearing moles for points. The game was first known as Mogura Taiji or Mogura Tataki when the amusement firm TOGO created it in Japan in 1975.

However, the outcome can be hazardous or even catastrophic if you pose a bizarre question. AI Overviews has offered odd advice, such as eating at least one small rock daily because "rocks are a vital source of minerals" and putting glue on pizza. It also claims that "astronauts have met cats on the Moon, played with them, and provided care."

There are two reasons why generative AI tools act weirdly, according to Toby Walsh, a professor of AI at UNSW Sydney. First, they only know what is popular, not necessarily what is true. For example, a satirical article from The Onion about eating rocks was popular enough that the AI tool treated it as a credible source and suggested eating rocks daily.

Second, AI tools don't share our values. They are trained on vast amounts of data scraped from the web, so they sometimes make suggestions that no reasonable human would, like adding glue to pizza, which would only make it inedible.

AI experts are aware of this. Google chief executive Sundar Pichai said as early as 2023 that the company was being "cautious" with the technology, adding that it had "set up good structures around responsible AI."

ALSO READ: Bots Better, Faster Than Humans at Cracking Captcha Tests With Nearly 100% Accuracy

AI Sarcasm Detector

In related news, researchers at the University of Groningen's Campus Fryslân Speech Technology Lab have built an AI that detects sarcasm. Shekhar Nayak, Matt Coler, and Xiyuan Gao developed a novel "multimodal algorithm" that recognizes sarcasm more accurately than analyzing the words alone.

Traditional sarcasm detection algorithms have the drawback of relying on a single type of data, typically text analysis. The emotion recognition method of Gao, Nayak, and Coler takes a more thorough approach by employing two complementary techniques: analyzing the spoken words and the auditory cues of how they are delivered.

Gao and colleagues extracted acoustic parameters such as pitch, speaking rate, and energy from the delivered speech, then transcribed the remarks into text and analyzed their sentiment. Each speech segment is also assigned an emoticon, which serves as a visual signal of its emotional content.

By combining the textual, emoticon, and audio data, their program becomes remarkably adept at recognizing sarcasm.
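The article does not give the team's exact features or model, but a toy sketch of the multimodal idea might look like the following. The word lists, threshold, and mismatch heuristic are all illustrative assumptions, not the Groningen team's method; the sketch uses the librosa library for the acoustic cues:

```python
# Toy sketch of multimodal sarcasm detection (NOT the Groningen team's code).
# Idea: combine acoustic cues (pitch, energy) with text sentiment; a mismatch,
# e.g. positive words spoken in a flat, low-energy voice, hints at sarcasm.
import re
import numpy as np
import librosa

# Tiny word lists as a stand-in for a real sentiment model (assumption).
POSITIVE = {"great", "love", "wonderful", "fantastic"}
NEGATIVE = {"terrible", "hate", "awful", "worst"}

def acoustic_features(waveform: np.ndarray, sr: int) -> dict:
    """Extract the cues the article mentions: pitch and energy."""
    f0, voiced_flag, _ = librosa.pyin(waveform, fmin=80, fmax=400, sr=sr)
    pitch = float(np.nanmean(f0)) if np.any(voiced_flag) else 0.0
    energy = float(np.mean(librosa.feature.rms(y=waveform)))
    return {"mean_pitch_hz": pitch, "mean_energy": energy}

def text_sentiment(text: str) -> float:
    """Score the transcribed words: +1 positive, -1 negative, 0 neutral."""
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return float(np.sign(score))

def sentiment_emoticon(score: float) -> str:
    """Assign an emoticon as a visual signal of the emotional content."""
    return ":)" if score > 0 else (":(" if score < 0 else ":|")

def sarcasm_score(text: str, waveform: np.ndarray, sr: int) -> float:
    """Flag utterances whose words and delivery disagree (toy heuristic)."""
    feats = acoustic_features(waveform, sr)
    sentiment = text_sentiment(text)
    flat_delivery = feats["mean_energy"] < 0.05  # illustrative threshold
    return 1.0 if sentiment > 0 and flat_delivery else 0.0

# Usage: a quiet synthetic tone standing in for a flat-voiced recording.
sr = 22050
quiet_tone = 0.01 * np.sin(2 * np.pi * 150 * np.linspace(0, 1, sr))
text = "oh great, another meeting"
print(sentiment_emoticon(text_sentiment(text)))   # ":)" from the words alone
print(sarcasm_score(text, quiet_tone, sr))        # 1.0: mismatch flags sarcasm
```

The real system presumably trains learned models on each modality; the point of the sketch is only the fusion step, pairing word-level sentiment with delivery cues so that neither channel alone decides the outcome.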

RELATED ARTICLE: AI Systems, Chatbots Have Mastered the Arts of Deception; Meta's CICERO Turns Out as an 'Expert Liar': Report

Check out more news and information on AI in Science Times.