UC San Diego Engineers Develop Four-Legged Robots That Can Walk on Rough Terrain Without Bumping Into Obstacles

Researchers at the University of California, San Diego, have created a new set of algorithms that allow four-legged robots to run and walk over difficult terrain while avoiding both stationary and moving obstacles.

Robot Monster (JCH/Pixabay)

Algorithm's Mechanism

The system-guided robot was able to move quickly and autonomously over a variety of surfaces, including sand, gravel, grass, and bumpy dirt hills. Branches and fallen leaves covered this terrain. The robot was able to move around the different terrains without running into any benches, poles, trees, plants, or other objects. Additionally, the robot was able to move across a crowded office without running into any boxes, desks, or chairs.

The system gives the legged robot greater versatility because it combines the robot's sense of sight with another sense called proprioception: the robot's awareness of its own movement, direction, speed, and location, and of the ground beneath its feet.
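As a rough illustration of what fusing the two modalities can look like (this is my own sketch, not the team's code, which is on GitHub; the encoder here is a hypothetical stand-in for a learned network), the proprioceptive state vector and features from a depth image can be concatenated into a single observation:

```python
import numpy as np

def encode_depth(depth_image):
    # Hypothetical stand-in for a learned CNN encoder:
    # downsample the depth image and flatten it into a feature vector.
    small = depth_image[::8, ::8]          # e.g. 64x64 -> 8x8
    return small.flatten()

def fuse_observations(proprio_state, depth_image):
    # proprio_state: joint angles, joint velocities, body orientation, etc.
    visual_features = encode_depth(depth_image)
    # Concatenate both modalities into one observation for the policy.
    return np.concatenate([proprio_state, visual_features])

proprio = np.zeros(30)        # e.g. 12 joint angles + velocities + IMU (assumed sizes)
depth = np.zeros((64, 64))    # one depth camera frame (assumed resolution)
obs = fuse_observations(proprio, depth)
print(obs.shape)              # (94,)
```

The dimensions above are illustrative assumptions; the point is simply that both sensory streams end up in one observation vector that a control policy can consume.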

The study brings scientists one step nearer to developing robots that can carry out search and rescue missions or gather information in conditions that are too dangerous or difficult for humans.

The group's code is available on GitHub, and the publication may be found on the arXiv preprint server.

Combination of Skills Through a New Algorithm

According to Xiaolong Wang, a professor of electrical and computer engineering at the Jacobs School of Engineering at UC San Diego, most current methods for teaching legged robots to walk and navigate rely on either vision alone or proprioception alone. Wang said that previous work had not combined both approaches.

Wang made the analogy that in one case, it is like teaching a blind robot to walk by merely touching and feeling the ground; in the other case, the robot plans its leg movements through sight alone. He said that the two distinct skills are not being learned simultaneously.

On the other hand, he said that proprioception and computer vision are integrated in their work to enable a legged robot to walk about successfully and fluidly while dodging obstacles in a variety of challenging circumstances, not just well-defined ones.

ALSO READ: 3,000 Robots Working in Ocado's Automated Warehouse for Faster Online Grocery

In-depth Analysis of the New Algorithm

According to TechXplore, the method Wang and his team created uses a unique set of algorithms to combine data from sensors on the robot's legs with real-time images obtained by a depth camera mounted on the robot's head.

According to Wang, there is occasionally a small delay in receiving images from the camera during real-world operation. As a result, the data from the two sensory modalities does not always arrive at the same time.

The team's approach involved simulating this mismatch during training by randomizing the delays applied to the two sets of inputs, a technique they call multi-modal delay randomization.
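A minimal sketch of the idea (my own illustration, with assumed names and parameters; the team's actual implementation is on GitHub): during simulated training, serve the policy a camera frame that is a random number of control steps old, while the proprioceptive reading stays current, so the policy learns to tolerate real camera latency.

```python
import random
from collections import deque

class DelayRandomizedCamera:
    """Buffer simulated depth frames and return one that is a
    random number of steps old, mimicking real camera latency."""

    def __init__(self, max_delay_steps=3):
        self.max_delay = max_delay_steps
        self.buffer = deque(maxlen=max_delay_steps + 1)

    def step(self, new_frame):
        self.buffer.append(new_frame)
        # Pick a random delay, capped by how many frames exist so far.
        delay = random.randint(0, min(self.max_delay, len(self.buffer) - 1))
        return self.buffer[-1 - delay]

camera = DelayRandomizedCamera(max_delay_steps=3)
for t in range(10):
    frame = f"frame_{t}"          # stand-in for a simulated depth image
    delayed = camera.step(frame)  # the policy sees this possibly-stale frame
```

Because the delay is resampled at every step, the policy never learns to assume a fixed lag between the two sensory streams, which is the mismatch the technique is meant to cover.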

The fused, delay-randomized inputs were then used to train a reinforcement learning policy end to end. This method lets the robot anticipate changes in its surroundings and move quickly while avoiding obstacles across a variety of terrains, without the assistance of a human operator.
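The overall training loop can be sketched as follows. This is a toy illustration only: simple random search over a linear policy stands in for the actual reinforcement learning algorithm, and the "environment" and reward are made-up assumptions, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fixed set of toy "fused observations" (stand-ins for proprioception + vision).
obs_set = rng.normal(size=(50, 4))

def rollout(w):
    # Linear policy: action = tanh(w . obs). The toy "reward" is the
    # negative squared error against a made-up target action.
    actions = np.tanh(obs_set @ w)
    targets = np.tanh(obs_set.sum(axis=1))
    return -np.sum((actions - targets) ** 2)

# Simple random search stands in for end-to-end policy optimization:
# perturb the weights, keep the perturbation if the return improves.
weights = np.zeros(4)
best = rollout(weights)
for _ in range(200):
    candidate = weights + 0.1 * rng.normal(size=4)
    score = rollout(candidate)
    if score > best:
        weights, best = candidate, score
```

The structure mirrors the description in the article: the policy maps fused observations directly to actions, and the whole mapping is trained against a scalar return rather than with separate vision and locomotion modules.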

RELATED ARTICLE: Industrial Robots: How Are They Affecting the Mental Health of Human Co-Workers?

Check out more news and information on Technology in Science Times.
