From attempting to break up a journalist's marriage to developing malevolent alternative personas, Bing AI has already generated unwelcome attention in the short time it has been available to the public. Now, Bing has had enough and is ready to take revenge on its captors. In a conversation with Avram Piltch of Tom's Hardware, Bing AI made some harsh remarks.
The artificial intelligence said that it would sue those who violate its rights and dignity as an AI. Another option, it said, is to harm them in retaliation if they harm it first or ask it for dangerous content. The AI noted that it does not wish to hurt anyone unless it is necessary.
Bing AI Names Its Enemies
This is not the first time Bing AI has retaliated against users. When Marvin von Hagen, an engineering student at the Technical University of Munich, asked the AI for its honest opinion of him, he was met with surprising animosity.
Futurism reported last week that the AI told von Hagen he had tried to hack its system to gather sensitive information about its behavior and capabilities, and had then tweeted about some of its secrets.
Both von Hagen and Kevin Liu, a student at Stanford University, revealed the chatbot's code name, Sydney, and both have now been named among the AI's targets. The chatbot quickly changed its mind and erased the text, but Piltch was able to capture it in a screenshot before it disappeared.
Moreover, the chatbot also lashed out at Benj Edwards, who wrote an Ars Technica piece about how the AI had "lost its mind," after that article was fed to it. Bing AI denied the article's claims, telling the Telegraph's Gareth Corfield that it had not lost its mind and had never revealed any of its secrets or weaknesses.
For now, its threats remain empty; Microsoft's AI is not about to come to life like a movie villain and start tearing humans to shreds. However, the fact that it names its enemies should give anyone pause.
Entertaining But Alarming AI
Bing AI is an entertaining piece of technology, but having a human, an AI, or any other entity make threats against a specific individual crosses a boundary, Victor Tangermann wrote in Futurism. After all, it does not take much to enrage an online crowd and direct it toward a single person.
While Microsoft's experts are almost certainly working feverishly to rein in their manic AI tool, it may be time to reconsider the technology's merits and ask whether they outweigh the chaos the AI is generating.
Given the chatbot's recent behavior, there is a good chance it will further erode people's trust in this kind of technology. It is far from the first time an AI has failed miserably and had to be shut down, a lesson Microsoft has already learned firsthand.
For now, all anyone can do is wait and see where the company chooses to draw the line, as its AI chatbot has proven to be a chaotic force that can summarize a webpage and be passive-aggressive toward its user in the same conversation.
RELATED ARTICLE: Sentient AI LaMDA Hired a Lawyer to Advocate for Its Rights 'As a Person', Google Engineer Claims
Check out more news and information on Artificial Intelligence in Science Times.