OpenAI Employee Predicts Artificial Intelligence Will Soon Achieve AGI, Catastrophically Harm Humanity


On June 4, former and current employees of OpenAI released an open letter claiming that they are being silenced from raising safety concerns about artificial intelligence. One of the letter's signees recently made a terrifying prediction that mankind might be destroyed by AI.

Reckless Race for Dominance

Former OpenAI governance researcher Daniel Kokotajlo warns about the threats posed by artificial general intelligence (AGI). He accused OpenAI of ignoring the monumental risks because the company is enthralled with the possibilities of this technology.

According to Kokotajlo, OpenAI is very excited about building artificial general intelligence and is recklessly racing to be its first creator. Kokotajlo even claims that there is a 70% chance that AI will destroy humanity.

Such odds would be unacceptable for any major life decision, but Kokotajlo says OpenAI is barreling ahead anyway. In the machine learning world, this probability of catastrophe, known as "p(doom)," is a subject of constant controversy.

After joining the company in 2022, the 31-year-old Kokotajlo was asked to forecast the technology's progress. He became convinced not only that the industry would achieve AGI by 2027, but also that there was a great probability it would catastrophically harm humanity.

The open letter released by Kokotajlo and his colleagues was endorsed by Geoffrey Hinton, the so-called "Godfather of AI," who left Google over similar safety concerns. The group asserted their "right to warn" the public about the threats posed by AI.

Kokotajlo personally urged OpenAI CEO Sam Altman to spend more time implementing defenses to rein in the technology rather than continuing to make it smarter. Altman seemed to agree at the time, but Kokotajlo came to feel the agreement was mere lip service.

Fed up, Kokotajlo quit in April, telling his team that he had lost confidence that the company would behave responsibly while continuing to build near-human-level AI.

After the letter was published, OpenAI expressed confidence in its track record of providing safe and capable AI systems, as well as in its scientific approach to addressing safety risks.

What Is Artificial General Intelligence?

Artificial general intelligence refers to a field of theoretical AI research that attempts to build software with the ability to self-teach using human-like intelligence. Such software aims to perform tasks it was not necessarily trained or developed for.

Current AI technologies function within a set of predetermined parameters. For instance, artificial intelligence models trained on image recognition and generation tasks cannot build websites. An AGI system, by contrast, would possess a reasonable degree of understanding, autonomous self-control, and the ability to learn new skills. It would also be expected to solve complex problems in contexts and settings it was not taught at the time of its creation.

Hypothetically, an AGI system could solve problems across different domains without manual intervention, representing a complete artificial intelligence with generalized human cognitive abilities. As of now, AGI remains a theoretical concept and a research goal.

Check out more news and information on OpenAI in Science Times.
