Artificial Intelligence Has 50% Chance of Wiping Out Humanity If It Continues To Get Smarter, Expert Warns

Artificial intelligence (AI) has become one of the fastest-moving technologies today, continuing to improve and to weave itself into our daily lives. Recent developments demonstrate its applications across industries such as medical diagnosis and legal procedures. However, these advancements also raise fears that the technology could lead to privacy violations, misinformation, and machines that think independently of human control.

Emerging Threat to Modern Society

Physicist and AI expert Max Tegmark of the Massachusetts Institute of Technology has expressed worry about the future of humanity if artificial intelligence continues to take over more of our daily lives. He puts the chance that the human species will face extinction at 50%.

According to Tegmark, humans have driven lesser species on the planet, such as the dodo, to extinction. If artificial intelligence becomes smarter than humans, we could face the same fate. What worries him even more is that we would have no way to predict when it might happen, because a less intelligent species has no means of knowing.

"What we are warning about now is that if we humans lose control over our society to machines that are much smarter than us, then things can go just as bad for us," said Tegmark.

AI refers to a field that combines computer science with large datasets to solve problems. The technology allows machines and devices to learn from experience, adjust their decisions to new inputs, and perform human-like tasks.
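To make that definition concrete, here is a minimal sketch of the "learn from experience, decide on new inputs" loop described above. The data values and the nearest-neighbor rule are illustrative assumptions for this article, not part of any specific system mentioned in it:

```python
# "Experience": labeled examples the program has already seen,
# as (feature value, label) pairs. The numbers here are made up
# purely for illustration.
training_data = [
    (1.0, "mild"),
    (2.5, "mild"),
    (6.0, "severe"),
    (8.5, "severe"),
]

def classify(new_input: float) -> str:
    """Decide a label for an unseen input by copying the label of
    the closest training example (1-nearest-neighbor)."""
    closest = min(training_data, key=lambda pair: abs(pair[0] - new_input))
    return closest[1]

# "New inputs": cases the program has never seen before.
print(classify(7.2))  # -> "severe"
print(classify(1.8))  # -> "mild"
```

Real AI systems use far more elaborate models, but the pattern is the same: past data shapes the decision rule that is then applied to new inputs.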

The potential societal risks of artificial intelligence were first cited by Elon Musk and other AI experts in April. Together with over 1,000 industry experts, Musk urged a pause in the dangerous race toward more advanced AI, suggesting that more risk assessments be conducted before the technology gets out of control and becomes a conscious, human-hating species.

How Could the Risk of Extinction from AI Be Prevented?

AI experts, policymakers, and the general public continue to debate the broad benefits and urgent threats of AI. In response, a new open letter calls for urgent regulation to prevent possible human extinction from artificial intelligence. The 22-word statement, directed at U.S. Congress, has been signed by over 350 tech developers and experts, including Tegmark and Sam Altman, CEO of OpenAI, the company behind ChatGPT.

The statement reads, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Although the document provides no details, the single-sentence statement aims to encourage policymakers to prepare plans in case AI goes out of control, treating it as a global priority on par with other societal risks such as pandemics and nuclear war.

Associate professor Scott Niekum of the University of Massachusetts Amherst argues that AI is advancing so quickly that its risks remain poorly understood, ranging from near-term harms to minority populations to longer-term disastrous outcomes.

Check out more news and information on Artificial Intelligence in Science Times.
