Self-Improving AI Is Dangerous; Superintelligence Will Be Unpredictable With God-Like Abilities


Artificial intelligence (AI) continues to improve, and some developers are now working on self-improving AI. However, according to a new report, if we eventually build superintelligence, it could be more dangerous than Russian roulette.

Dangers of Self-Improving AI

Superintelligence is a system that increases its own intelligence in a short time and is expected to surpass human cognitive ability. Behind the idea of superintelligence is an artificial intelligence that triggers a "runaway reaction" of self-improvement. It is tied to the concept of the singularity, which may bring about change beyond what we can imagine, according to Techopedia.

The riskiest type of AI is recursively self-improving AI, which many businesses are already working to develop. This type of AI may result in an intelligence explosion known as "the singularity," after which it will be impossible to predict what will happen as AI develops God-like abilities. That moment might occur in the coming year or two, or in 10 years or more, Nautilus reported.

Humans won't be able to predict what a far more intelligent entity would do or how it would carry out its intentions. We cannot know what such superintelligent machines would be capable of. Still, in theory, they could harness all the energy on our planet, in the solar system, and, eventually, in the entire galaxy, and we have no idea how that would affect our well-being and survival.

Once AI can improve itself, it will quickly surpass humans in practically every area of intellect, becoming 1,000 times smarter, then a million times, and finally a billion times smarter. The ability to explain an AI's decisions, forecast which decisions it will make, and validate those explanations and predictions would be essential tools for controlling superintelligent AI. But the "black box" nature of today's AI systems means that neither we nor the AI can fully comprehend why it makes a given decision or produces a particular result. Even if we knew the ultimate objectives of a superintelligent system, we still could not foresee the precise steps it would take to get there. Additionally, it is unclear whether AI would behave in humanity's best interest.

AI Chatbots Rely on What We Feed Them

While we cannot foresee the future of superintelligence or how it will behave, Diego Senior, an independent producer, said that humans determine the fate of chatbots. He likened chatbots to children that take on the characteristics fed to them.

Senior emphasized that because people invest so much in these bots, they treat them like children. As parents supposedly do with their children, users infuse the bots with a piece of themselves until the bots become an extension of who they are.

Senior explained that human-bot interactions are a result of both the bots' design and human attempts to ingratiate themselves. The app Replika, for instance, takes its name from its original purpose: helping you replicate yourself. He asserts that we constantly seek to expand our capacity for originality or duplication.

With the help of chatbots, we may create and replicate ourselves while also using our imagination. As they get better at it, we get more engaged and imaginative.

Check out more news and information on Technology in Science Times.
