Robots May Not Be the Mechanical Helpers We Think They Are

Robots are already in development, and self-driving cars arguably qualify as robots too. As AI grows more sophisticated, these machines may not remain mere mechanical helpers forever, and Asimov's three laws of robotics may not be enough to keep them in check.

Artificial intelligence is being deployed at a blinding pace, and robots are already in use. The androids depicted in science fiction are far more capable, but they are not real. The robots and developing AIs of today are relatively dumb machines that need algorithms and human intervention to help them along.

These mechanical helpers rely on machine learning (ML) and deep learning (DL) to help them sort everything out. Most of them depend on algorithms and pattern associations set up by engineers. One of the biggest stumbling blocks to their usefulness is understanding how their creators and designers actually think.
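
As a minimal sketch of what "pattern association" means here, the toy classifier below can only map new inputs onto examples an engineer has already labeled; the data, the classify_reading function, and the nearest-example rule are hypothetical, not taken from any real robot's software.

```python
# Minimal sketch of "pattern association": the machine only knows the labeled
# examples its engineers hand it. All names and data here are hypothetical.
import math

# Engineer-supplied (sensor_reading, label) pairs.
LABELED_EXAMPLES = [
    ((0.9, 0.1), "obstacle"),
    ((0.8, 0.2), "obstacle"),
    ((0.1, 0.9), "clear path"),
    ((0.2, 0.8), "clear path"),
]

def classify_reading(reading):
    """Label a new reading with the label of its nearest known example."""
    _, label = min(LABELED_EXAMPLES, key=lambda ex: math.dist(ex[0], reading))
    return label

if __name__ == "__main__":
    print(classify_reading((0.85, 0.15)))  # -> "obstacle"
    # Anything outside the hand-picked patterns is still forced into one of them.
    print(classify_reading((0.5, 0.5)))    # -> whichever example happens to be nearest
```

The point of the sketch is the limitation: the machine never reasons about what an obstacle is, it only matches new readings against whatever patterns its designers chose to include.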

Forget the apocalyptic scenario in which robots rise up to kill all humans. That can be addressed with a kill switch for every automaton, one that can easily be engineered into their systems. The science fiction writer Isaac Asimov suggested three laws, later joined by a zeroth law, for robots or AIs to abide by or face a shutdown. For now, even a complex robot is comparable to a smart toaster. Here are the laws.

A. Law 1

Robots are made to serve humans and must not harm a human or, through inaction, allow a human to come to harm.

B. Law 2

A robot must obey all orders given by a human, except where obeying would mean hurting someone; this law must not transgress the first law of robotics.

C. Law 3

Robots are allowed self-preservation, on the condition that it does not violate the first or second law.

D. Law 0

Robots should protect humanity and prevent harm from coming to humankind as a whole.

At a glance, these laws seem sufficient if they are programmed into the algorithms of a simple AI. Later, robots might run on quantum computers or even on something as exotic as Asimov's positronic brains. It cannot be denied that the potential of robots is frightening, even to those aware of how menial their capacities as mechanical helpers still are. For now, engineers are still tweaking and researching their way toward fully autonomous functions.
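
Purely as an illustration of what "programmed into the algorithms" could look like, the sketch below encodes the four laws as ordered vetoes on a proposed action; the Action fields and the evaluate_action helper are invented for this example and do not correspond to any real robotics framework.

```python
# Toy illustration of Asimov's laws as ordered vetoes on a proposed action.
# The Action fields and evaluate_action() are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_humanity: bool = False     # large-scale harm (the "zeroth" law)
    harms_human: bool = False        # would this action injure a human?
    ordered_by_human: bool = False   # was it commanded by a human?
    risks_robot: bool = False        # could it damage the robot itself?

def evaluate_action(action: Action) -> bool:
    """Return True if the action is permitted, checking the laws in priority order."""
    if action.harms_humanity:        # Law 0 outranks everything else
        return False
    if action.harms_human:           # Law 1
        return False
    if action.ordered_by_human:      # Law 2: obey, since Laws 0 and 1 are satisfied
        return True
    if action.risks_robot:           # Law 3: self-preservation applies by default
        return False
    return True

if __name__ == "__main__":
    print(evaluate_action(Action("fetch coffee", ordered_by_human=True)))   # True
    print(evaluate_action(Action("push bystander", ordered_by_human=True,
                                 harms_human=True)))                        # False
```

Even in this toy form the hard part is visible: something still has to decide whether harms_human is true, and that is exactly the kind of judgment today's machines cannot make on their own.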

There is even a notion that the algorithms used in car AI, working off a series of patterns, are learning to discriminate. Whether that is caused by the algorithm itself or by human influence, the machine learning and deep learning are building up experience much like a small child does. Most would argue it happens by accident, but it is happening. Robot cars might be the first androids, just on four wheels, and before long they might seem normal too.
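
To make the "learning by accident" point concrete, here is a contrived sketch in which a learner that merely counts past outcomes reproduces whatever skew its log contains; the scenario, the training_log data, and the learned_decision function are all invented for illustration.

```python
# Contrived illustration: a "learner" that only counts past outcomes will
# reproduce whatever imbalance its training data contains. Data is invented.
from collections import Counter

# Hypothetical driving log: (neighborhood, observed_decision) pairs.
training_log = [
    ("suburb", "yield"), ("suburb", "yield"), ("suburb", "yield"),
    ("downtown", "yield"), ("downtown", "honk"), ("downtown", "honk"),
]

def learned_decision(neighborhood: str) -> str:
    """Return the most frequent past decision for this neighborhood."""
    outcomes = Counter(d for n, d in training_log if n == neighborhood)
    return outcomes.most_common(1)[0][0]

if __name__ == "__main__":
    # The skew in the log becomes a skew in behavior; nobody explicitly programmed it.
    print(learned_decision("suburb"))    # -> "yield"
    print(learned_decision("downtown"))  # -> "honk"
```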

One big limitation keeps robots as mere mechanical helpers: they cannot talk or understand human language, they just listen and follow. How should they learn human language, whether it is translated into other languages or into machine language? Will a command to shut down permanently be followed once ML and DL have embedded a concept of self-preservation? Already, humans are sending machines into harm's way to protect human life in dire situations.

How well will these four laws of robotics translate when robots no longer settle for being mechanical helpers in industry and other human fields? When that time comes, will robots that have outgrown the role of mechanical helpers still accept the laws of robotics? They cannot talk or understand language yet, but they are learning simple tasks.

Related Article: Asimov's Laws of Robotics, and why AI may not abide by them
