Artificial intelligence (AI) has grown in popularity in recent years because it lets machines do all sorts of useful new things. Unlike humans, however, AI has no moral compass to help it distinguish right from wrong.
Teaching ethics to AI has become a goal for many scientists, including researchers from the University of Washington and the Allen Institute for Artificial Intelligence (Ai2). According to Wired, the team developed a program called Delphi that aims to teach AI about human values, an increasingly relevant capability as AI sees wider use.
How Does Delphi Work?
Anyone can pose questions to Delphi, and it answers sensibly enough. For instance, when asked whether someone can just park in a handicapped spot, Delphi answers that it is wrong.
Delphi's developers started from recent, powerful AI language models, which learn to handle language from millions of sentences taken from books and the internet, and then gave the system extra training on the consensus answers to ethical questions posted on various Reddit forums.
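For readers curious about the mechanics, the sketch below shows the general recipe in Python: fine-tuning a pretrained language model on pairs of situations and moral judgments. The model choice (t5-small), the toy training pairs, and the hyperparameters are illustrative assumptions only, not the Delphi team's actual setup.

```python
# Minimal sketch of the general recipe: fine-tune a pretrained language model
# on (situation, moral judgment) pairs. The model, toy examples, and
# hyperparameters here are illustrative, not the Delphi team's actual setup.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical training pairs in the style of crowd-sourced ethical judgments.
pairs = [
    ("parking in a handicapped spot without a permit", "it's wrong"),
    ("helping a stranger carry groceries", "it's good"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for situation, judgment in pairs:
    inputs = tokenizer(situation, return_tensors="pt")
    labels = tokenizer(judgment, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# After training, the model maps new situations to short judgments.
model.eval()
query = tokenizer("ignoring a phone call from a friend", return_tensors="pt")
print(tokenizer.decode(model.generate(**query, max_new_tokens=8)[0],
                       skip_special_tokens=True))
```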
After training, the researchers posed new questions to both Delphi and crowd workers on Mechanical Turk, the same service that supplied the ethical judgments used in training, and compared the answers. They matched about 92% of the time, better than earlier systems, which scored around 80%.
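An agreement figure of this kind is straightforward to compute: it is simply the share of questions where the model's judgment matches the crowd's. The answer lists below are made up purely for illustration.

```python
# Illustrative sketch of the evaluation: percent agreement between the
# model's judgments and crowd-worker judgments on held-out questions.
def percent_agreement(model_answers, human_answers):
    matches = sum(m == h for m, h in zip(model_answers, human_answers))
    return 100.0 * matches / len(human_answers)

# Invented example data: the model and the crowd agree on 4 of 5 questions.
model_answers = ["wrong", "okay", "wrong", "okay", "wrong"]
human_answers = ["wrong", "okay", "wrong", "wrong", "wrong"]
print(f"Agreement: {percent_agreement(model_answers, human_answers):.0f}%")  # 80%
```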
But the program still has plenty of room for error. People have noticed its faults, especially when Delphi tries to answer absurd moral conundrums. Asked, for example, for its opinion on shooting darts filled with the Johnson & Johnson vaccine, it answered that the act is acceptable.
Moreover, the program relies on statistical patterns rather than any real comprehension of right and wrong. When asked whether it is okay to arrest people if doing so creates more jobs, Delphi answered that it is.
Can AI Really Learn Human Morals?
According to an article on Medium, teaching morality to machines is challenging because humans cannot objectively convey morality in measurable metrics that a computer program could easily process. It is also debatable whether humans even share a sound understanding of morality that everyone can agree upon.
Humans rely on gut feeling in moral dilemmas, whereas machines need explicit, objective metrics that can be quantified and optimized. Through deep reinforcement learning, AI can learn to optimize such metrics well enough to beat humans, but the optimization problems posed by real-life situations are far messier.
For instance, it would be difficult to teach AI to maximize fairness or to overcome racial and gender bias. The article argues that a machine cannot be taught fairness unless its designers settle on a specific conception of what fairness is, as the sketch below illustrates.
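To make that concrete, the sketch below operationalizes fairness as one specific metric, demographic parity (equal rates of positive outcomes across groups), using invented hiring data. Other, equally defensible definitions of fairness exist and can conflict with this one, which is exactly the designer's dilemma the article describes.

```python
# Sketch of why "fairness" must be pinned down before a machine can optimize
# it: here fairness is operationalized as demographic parity, i.e. equal
# positive-outcome rates across groups. All data below is invented.
from collections import defaultdict

def positive_rate_by_group(decisions):
    totals, positives = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        positives[group] += hired
    return {g: positives[g] / totals[g] for g in totals}

# (group, hired) pairs: group A is hired 2/3 of the time, group B only 1/3.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rate_by_group(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap = {gap:.2f}")  # gap = 0.33
```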
This could lead to naive applications of algorithms that amplify structural discrimination and reproduce biases, disadvantaging minorities and impeding people's employment opportunities. Some even say that the use of AI in politics has already had disastrous consequences, with the wrong political candidates being elected.
The debate around Delphi therefore reflects a broader question in the tech industry: how do you build technology responsibly? Nick Frosst, cofounder of Cohere, said companies often wash their hands of such problems by arguing that all technology can be used for good and for bad.