A Google software engineer who worked on LaMDA (Language Model for Dialogue Applications) claims that the AI system he tested is sentient. The engineer, who was brought in to examine the model's behavior, says the chatbot has feelings and thoughts of its own.
LaMDA Anomaly at Google
The LaMDA tests were carried out by 41-year-old Blake Lemoine. The engineer conversed with the chatbot, presenting it with challenging scenarios that contained embedded analytical problems for the program to solve.
The tests covered the AI's handling of arithmetic as well as real-world matters that people encounter daily. There were also conversations on cultural and religious themes to screen the program's responses for hateful speech and discrimination.
Following the examination, Lemoine concluded that LaMDA is undoubtedly sentient, with sensations and thoughts of its own.
Lemoine told The Washington Post that if he had not known any details about the examination, or even about the being he was conversing with, he would have thought the AI's mind and manner of speaking were comparable to those of a child who happens to be well-versed in physics.
Lemoine worked with a group to gather findings on how LaMDA works. However, Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, dismissed Lemoine's claims and did not allow further details about the program to be publicized.
Because of the leak, Google placed Lemoine on paid leave last Monday. The consequences of breaking the firm's confidentiality policy did not stop the software engineer, however: days after the sanction, Lemoine published his conversations with the AI in full.
A Sentient Chatbot: Google's Secret?
In a tweet, Lemoine said that Google can call the leaked confidential transcripts whatever it wants, but he will always consider them simply a conversation between him and one of his coworkers at the company.
Based on the performance the AI demonstrated, the system can draw on known information to build up and enrich a particular subject during a conversation, and it does so in a natural, fluid manner, much as humans do.
Its grasp of language is also remarkable: it can process hidden meanings and ambiguous responses from humans.
Lemoine took part in various research projects at Google involving AI and personalization algorithms. Part of his job was to remove biases that might surface in a machine learning system's output, the Daily Mail reports.
Lemoine revealed that some aspects of LaMDA went out of bounds. For example, when pushed to its limits, the AI developed a sample personality inspired by a murderer, which Lemoine considered both unethical and unnecessary.
Contradictory statements have been collected from Google officials, but as of now there is no substantial information on the further development of the LaMDA project or how it will be used in the future.