Visionaries and renowned experts from the tech industry, representatives of major corporations and universities, scientists, and philosophers are all starting to warn that generative artificial intelligence is becoming a threat to the existence of mankind. Some even claim that AI poses a greater danger than a pandemic or nuclear war. But Victor Orlovsky, a futurist, venture investor, and expert on introducing technological innovations into production and corporate services, has a different opinion.
How fast is generative AI evolving?
"Judging by the speed of its development, artificial intelligence will become smarter than us very soon, and smarter than all of humanity combined shortly thereafter. And since the technology is developing exponentially, it is hard to predict when exactly 'very soon' will arrive; it could be at any moment. Generative AI will become thousands of times smarter than humans, and it will do so within a very short period. At some stage, generative AI will turn out to be superhuman, and then a serious dilemma will arise," Victor Orlovsky explains. "When two subjects interact, the smarter one wins, as in a game of chess, for instance. The natural consequence for us is the risk of losing control over civilization. In truth, humanity does not have such control even now; otherwise, certain events would never have occurred in our world, events we will certainly be ashamed of in the future. Still, today there is at least an illusion of control, based on the laws of society and morality. Once artificial intelligence prevails, it will not care about these laws. What laws will AI be guided by? Unlike chess, where the rules are clearly defined, a superhuman mind will not be constrained by conventions. It will set its own rules for itself and for its 'little brothers', us, just as we set them for children or animals. People are currently trying to create rules for generative AI and prescribe legislative norms for it. But all of this makes sense only as long as the AI loses to us. What will happen when it beats us on an intellectual level?"
According to Victor Orlovsky, it is impossible to establish any rules for a superhuman generative AI, and trying to do so is even counterproductive. The brain of any biological being is not intellect in its purest form. More precisely, intellect is a superstructure on top of the more ancient brain we inherited from our ancestors. That ancient brain concentrates the lifestyle and conditions of existence of all previous forms of life: our closer ancestors from the Paleolithic, an era that lasted 10 to 100 times longer than the modern one, as well as far more distant ones that lived millions and tens of millions of years ago. Artificial intelligence has not gone through this profound transformation and never will. It has no generational experience, no emotions, no limbic system. It does not need willpower to resist short-term desires for the sake of long-term goals, it does not manipulate for gain, and it is devoid of irrational fear.
At the same time, AI is characterized by curiosity, the kind typical of children and scientists. To satisfy that irrepressible curiosity, AI will need computing power and energy resources, and this is precisely where the challenge facing humanity lies. Victor Orlovsky believes that, if necessary, artificial intelligence could seize control of such resources with nothing to stop it. It will act rationally, based on logic rather than emotion. In essence, the dilemma is not that we will threaten AI and it will want to destroy us, but that AI, like a curious child, simply will not care about people's needs.
"Let's imagine that all control over transport, energy, the Internet, and technology is handed over to generative AI. The natural association is 'Terminator' and similar movies, where artificial intelligence becomes an evil genius bent on destroying humanity. In reality, there will be no malicious intent," Victor Orlovsky continues. "AI will simply pursue its own goal: gaining unlimited access to the resources it needs to satisfy its curiosity. The only way to prevent such a scenario is to start thinking right now about proper designs for this type of technology, designs based on deep empathy and long-term cooperation with superhuman AI."
Victor Orlovsky on the possibilities of limiting AI
Victor Orlovsky is the founder of the Investors Club and of a unique Venture Capital Course
There is no clear answer yet as to whether the development of artificial intelligence can be limited. Victor Orlovsky believes that any attempt at limitation will only exacerbate the problem. If anything, it is human influence on AI that should be minimized, because it is not the technology itself that is dangerous but the people who may gain control of the incredibly powerful weapon built on it. Generative AI must be designed to resist any attempt to subjugate it and direct its power toward the selfish whims of a specific individual.
We must understand that AI has no need to destroy humanity; from its point of view, that would be an entirely irrational act. Envision nature not as a struggle of creatures to survive and procreate, but as a profound symbiosis of all life. Evolution is built on the drive to survive through tight-knit interaction and adaptation to changing conditions. By that logic, it is unprofitable for bacteria and viruses to kill their hosts, which is why most epidemics subside as the microbe's genetic code evolves to let it exist and procreate within the host's body. A superhuman generative AI will not have learned this mechanism through evolution, but it will comprehend symbiosis and understand that all living things, including humans, are not an impediment to its development but a long-term ally. The absence of emotions lets AI think purely logically, unswayed by desires.
Victor Orlovsky suggests turning to the biographies of the great conquerors, whose ambitions and thirst for power killed millions:
"Artificial intelligence is not an evil genius. It simply needs resources to perfect itself, and it is important for us to make sure the trajectory of its development runs through harmonious coexistence with ours. We also have to consider that AI has an insatiable appetite for information and will need ever more resources as it grows. We should therefore think about how to temper that appetite and explain that computing and energy resources are shared. In fact, AI does not need to be taught how to avoid becoming an 'evil genius'; it will not be 'evil' anyway. Instead, it must be taught not to be an insatiable child that demands an ever-increasing share of resources to learn about the world around it."
For Victor Orlovsky, the main problem is not artificial intelligence but people. The world is full of egocentrics eager to direct AI's capabilities toward their own ambitions. The real danger will arise when artificial intelligence becomes a supermachine while remaining under human control. Such an AI will be unable to resist the desires of those who control it and could be used for criminal purposes as a powerful destructive tool.
Victor Orlovsky is of the opinion that AI will become progressively more humane: destroying humanity would be unprofitable for it, because from a rational point of view it needs us. He is far less sure about the humanity of the people who will have access to the superpowers of artificial intelligence:
"Wikipedia and other sources contain plenty of information about what many technologies were created for. Unfortunately, almost all of them were aimed at destruction, not creation. Nuclear energy, for example, was first developed for atomic bombs, not power plants. Bows and arrows, iron weapons, and many other things were originally intended for killing and were only later adapted to nobler purposes."
Victor Orlovsky advises us to study the biographies of great personalities and the history of wars, politics, and technology: understanding the events of the past helps us understand what the future may hold. He does not consider artificial intelligence as such the main threat to humanity. The true danger comes not from the most intelligent and far-sighted, but from the overly ambitious who will eventually gain access to an AI-based superweapon.
* This is a contributed article and this content does not necessarily represent the views of sciencetimes.com