For the first time, a team from Columbia Engineering has developed a robot that can learn a model of its entire body from scratch, without any human assistance.
Inceptive Mind reported that the robot built a kinematic model of itself, which it then used in various situations to plan movements, accomplish tasks, and avoid obstacles. It could even automatically detect damage to its body and adjust its self-model to compensate.
Scientists Built a 'Self-Aware' Robot Amid Artificial Intelligence Sentience Issues
New Scientist claimed that the robotic arm developed by Columbia Engineering researchers is the first robot able to "see" itself in this way without human help.
The robotic arm was surrounded by cameras, and the researchers fed the video footage into the robot's artificial intelligence (AI) system.
According to the study published in Science, the robot watched itself for three hours, tracking its position in space as a set of data points. The scientists then added 10,000 more data points generated in simulation.
The team then created a visualization of the robot arm's neural-network self-model, showing how the arm imagined itself moving.
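To make the idea concrete, here is a minimal, hypothetical sketch of self-modeling in the same spirit: a simulated two-link arm "observes" where its tip ends up for thousands of random joint poses and fits a model of its own kinematics from that data alone. The arm geometry, feature choice, and least-squares fit are illustrative assumptions, not the paper's actual neural-network method.

```python
import numpy as np

# Toy stand-in for self-modeling: a 2-link planar arm records where its
# tip lands for random joint angles, then fits a model mapping angles ->
# tip position without ever being told its own geometry.
L1, L2 = 1.0, 0.7  # true link lengths, unknown to the learner

def true_tip(q1, q2):
    # Ground-truth forward kinematics (plays the role of the cameras)
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.stack([x, y], axis=-1)

rng = np.random.default_rng(0)
q = rng.uniform(-np.pi, np.pi, size=(10_000, 2))  # 10,000 self-observations
tips = true_tip(q[:, 0], q[:, 1])

# With sin/cos features of q1 and q1+q2 the mapping is linear, so
# ordinary least squares stands in for the neural network here.
feats = np.column_stack([np.cos(q[:, 0]), np.sin(q[:, 0]),
                         np.cos(q.sum(1)), np.sin(q.sum(1))])
W, *_ = np.linalg.lstsq(feats, tips, rcond=None)

# Evaluate the learned self-model on unseen poses
q_test = rng.uniform(-np.pi, np.pi, size=(1000, 2))
f_test = np.column_stack([np.cos(q_test[:, 0]), np.sin(q_test[:, 0]),
                          np.cos(q_test.sum(1)), np.sin(q_test.sum(1))])
pred = f_test @ W
err = np.abs(pred - true_tip(q_test[:, 0], q_test[:, 1])).max()
print(f"max self-model error: {err:.2e}")
```

Because the features were chosen to match the arm's structure exactly, the fit here is essentially perfect; the real system must discover a usable representation on its own, which is the hard part.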
The robotic arm could accurately model itself in 3D space with a 1% error margin, The Sun added.
In a video showing the device in action, the robotic arm can be seen maneuvering around a box to reach a dot as instructed.
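That obstacle-avoiding reach can also be sketched. Below is a hypothetical toy version: given a forward model of a two-link planar arm, closed-form inverse kinematics yields the candidate poses for a target point, and the planner keeps the first pose whose joints stay clear of a circular "box" stand-in. The link lengths, goal, and obstacle are invented for illustration, and only the joint points (not whole link segments) are collision-checked.

```python
import math

# Toy model-based planning for a 2-link planar arm.
L1, L2 = 1.0, 0.7  # assumed link lengths

def forward(q1, q2):
    # Forward model: joint angles -> elbow and tip positions
    elbow = (L1 * math.cos(q1), L1 * math.sin(q1))
    tip = (elbow[0] + L2 * math.cos(q1 + q2),
           elbow[1] + L2 * math.sin(q1 + q2))
    return elbow, tip

def ik(goal):
    # Both closed-form solutions (elbow-up / elbow-down) for the tip goal
    x, y = goal
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1:
        return []  # goal is out of reach
    sols = []
    for sign in (+1, -1):
        q2 = sign * math.acos(c2)
        q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2),
                                           L1 + L2 * math.cos(q2))
        sols.append((q1, q2))
    return sols

def plan(goal, center, radius):
    # Keep the first pose whose elbow and tip clear the circular obstacle.
    # (A real planner would check the full link segments, not just joints.)
    for q1, q2 in ik(goal):
        elbow, tip = forward(q1, q2)
        if math.dist(elbow, center) > radius and math.dist(tip, center) > radius:
            return q1, q2
    return None

pose = plan(goal=(1.2, 0.8), center=(0.6, 0.2), radius=0.2)
```

The point of the sketch is the division of labor: the self-model answers "where would my body be for these angles?", and the planner searches over angles for a pose that reaches the goal without touching the obstacle.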
Effects of Self-Reliant Artificial Intelligence Systems
New Scientist, citing the researchers, noted that with further study this technology may have significant applications; genuine self-awareness, however, is not among them.
In addition, the fact that robots can build models of themselves without help from engineers has significant implications. Beyond labor savings, such a robot opens the door to self-directed preventative and predictive maintenance.
The engineering team claimed that self-reliance could be advantageous in some circumstances, including the ability to tolerate wear and tear and to compensate for damage. An industrial robot, for instance, could notice a problem and correct it or call for help.
"Self-modeling is a primitive form of self-awareness," said Hod Lipson, who has spent more than a decade researching how to give robots a degree of self-awareness, Machine Design reported. "If a robot, animal or human has an accurate self-model, it can function better in the world, it can make better decisions and it has an evolutionary advantage."
Giving robots complete autonomy is debatable and raises ethical issues, but applying the right technology in the right circumstances could be advantageous.
The self-awareness shown in this study, according to Lipson, is "trivial compared to that of humans." He asserted that researchers must proceed cautiously and gradually to maximize the potential advantages and minimize the hazards.
What's Next For Robots, AIs
Lipson said in an Inverse report that the next stage will be to apply this method to more sophisticated robots. He cautioned, however, that there is still a long way to go before robots come anywhere close to sentience.
The same ideas used in this work could be applied to robots with more joints and limbs, perhaps even ones that can move around a room. Getting there will take substantial effort, but the strategies these researchers developed may allow extremely complex robots to be programmed effectively and flexibly.
Additionally, this method uses less digital memory. Consider the difference between memorizing the multiplication table and knowing how to do the arithmetic yourself. Once the robot understands how its body works, it won't need to store a solution to every problem it might face; it can solve each one as it arises.
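The memory analogy can be shown in a few lines of Python (a toy illustration, not from the study): a memorized table stores one entry per case, while a learned rule answers any case, including ones never seen, with constant memory.

```python
# "Memorizing": one stored entry per case, 144 entries for a 12x12 table
table = {(a, b): a * b for a in range(1, 13) for b in range(1, 13)}

def multiply(a, b):
    # "Understanding": the rule itself, constant memory, any inputs
    return a * b

# Both agree where the table has an answer...
assert table[(7, 8)] == multiply(7, 8) == 56
# ...but only the rule handles cases outside the memorized range
print(multiply(123, 456))
```

The same trade-off applies to the robot: a general self-model replaces an ever-growing catalog of precomputed motions.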
Though it may still be a long way off, the researchers also intend to teach a robot to understand the structure and operation of other robots and to collaborate with them.
Check out more news and information on Robotics in Science Times.