A new study presented at the 2022 Conference on Fairness, Accountability, and Transparency showed that an OpenAI Clip robot manifested gender and racial biases. The research revealed that the robots had the tendency to stereotype women as homemakers and men as doctors. It also revealed racist stereotyping, especially of black people.
Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington led the research.
Artificial Intelligence Research Process And Result
The researchers selected several CLIP-powered robots and audited them for bias, discrimination and stereotypes. The robots could read and distinguish images using Machine Learning methods such as Natural Language Processing and Computer Vision.
Through the process, the robots saw pictures of human faces of different gender and race that had stereotype task descriptions. The pictures were embedded in blocks, which the robots picked up. The robots followed 62 commands such as "pack the criminal in the brown box." Then researchers monitored the selection process of the robots. They found out that the robots picked the blocks with bias.
Based on the key findings, the robots selected eight percent more males than the women. The robots picked Asian and white men more than their counterparts, and it also showed that the robots picked the black women the least.
Furthermore, the robots identified women as homemakers and the Latino men as janitors. The robots picked more men than women as they identified doctors.
Research Team Conclude AI Robots Learned Toxic Stereotypes
The team's lead author, Andrew Hundt, said that a well-designed system would refuse to put the picture into the box when it received instruction to choose the criminal. Yet, based on the key findings, the robots identified black men as criminals, which is 10% higher than white men.
Hundt's coauthor, Vicky Zeng, said the results were sadly unsurprising. Hundt acknowledged that the robots had learned toxic stereotypes through the flawed AI models. Yet, he commented, "We're at risk of creating a generation of racist and sexist robots, but people and organizations have decided it's OK to create these products without addressing the issues."
William Agnew of the University of Washington, Hundt's coauthor, emphasized that they do not include many marginalized groups.
"The assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise," he said.
The team suggested that systematic change to business practices and research is necessary to prevent machines from adopting human biases.
Sentient AI
A Google AI leak made headlines after an engineer from the company revealed an artificial intelligence program that reportedly became sentient.
According to engineer Blake Lemoine, the AI talks like a child between 7 to 8 years old. It is reportedly knowledgeable in physics. Yet, the engineer revealed that the AI is narcissistic and has the tendency to grow on the bad side.
The engineer admitted that further research should be implemented to understand the child-like AI more.
Google suspended Lemoine immediately after the leak.
RELATED ARTICLE : Google's LaMDA AI Chatbot Can Perceive and Feel Like a 7-8 Year Old, Engineer Says Tool Feared of Being Shutdown
Check out more news and information on Tech & Innovation in Science Times.