BADGR robot uses deep learning to plan and traverse collision-free paths

Past projects and studies have shown that deep learning is an effective technique for training robots to perform specific tasks. For example, OpenAI used neural networks to train Dactyl to solve a Rubik's Cube, and an algorithm called 6-DoF GraspNet helps robots grasp arbitrary objects. Now researchers at the University of California, Berkeley have created the Berkeley Autonomous Driving Ground Robot (BADGR).

BADGR is an end-to-end autonomous robot trained on self-supervised data. Unlike most traditional robots, which rely on geometric data to plan collision-free paths, BADGR relies on "experience" to traverse terrain: as it drives, it labels what it encounters using its own sensors, with no human annotation.
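To make the self-supervised idea concrete, here is a minimal sketch (not the authors' code) of how event labels could be derived automatically from raw onboard sensor readings; the function name and thresholds are illustrative:

```python
# A minimal sketch of self-supervised labeling from onboard sensors;
# the 0.3 m threshold and the bumpiness proxy are illustrative choices.
import numpy as np

def label_events(lidar_ranges: np.ndarray, imu_angular_vel: np.ndarray):
    """Derive event labels from one timestep of sensor data.

    lidar_ranges: 1-D array of 2D LiDAR distances in meters.
    imu_angular_vel: (3,) angular velocity from the IMU in rad/s.
    """
    collision = bool(lidar_ranges.min() < 0.3)          # something very close
    bumpiness = float(np.linalg.norm(imu_angular_vel))  # rough-terrain proxy
    return {"collision": collision, "bumpiness": bumpiness}

# Example: a clear path and a calm IMU yield no collision and low bumpiness.
print(label_events(np.array([1.5, 2.0, 3.2]), np.array([0.01, 0.02, 0.0])))
```

Because the labels come from the robot's own sensors rather than a human, every minute of driving produces more training data for free.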

At the heart of BADGR is an Nvidia Jetson TX2, which processes data from an onboard camera, a six-degree-of-freedom inertial measurement unit, a 2D LiDAR sensor, and GPS. Specifically, BADGR runs a neural network that takes the current camera observation and a sequence of planned future actions as input, and predicts the events (such as collisions or bumpy terrain) that would result from executing those actions.
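The sketch below shows one way such an action-conditioned predictive model could be wired up in PyTorch. It is a rough illustration of the idea described above, not BADGR's actual architecture; the layer sizes, the GRU rollout, and the two-event output head are all assumptions for the example:

```python
# Illustrative action-conditioned event predictor: encode the camera image,
# roll the state forward through the planned actions, predict events per step.
import torch
import torch.nn as nn

class EventPredictor(nn.Module):
    def __init__(self, action_dim: int = 2, hidden: int = 64):
        super().__init__()
        # Encode the current camera image into a compact feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, hidden),
        )
        # Roll the image feature forward through the planned action sequence.
        self.rnn = nn.GRU(action_dim, hidden, batch_first=True)
        # Predict per-step events: collision logit and bumpiness.
        self.head = nn.Linear(hidden, 2)

    def forward(self, image: torch.Tensor, actions: torch.Tensor):
        # image: (B, 3, H, W); actions: (B, horizon, action_dim)
        h0 = self.encoder(image).unsqueeze(0)   # (1, B, hidden) initial state
        out, _ = self.rnn(actions, h0)          # (B, horizon, hidden)
        collision_logit, bumpiness = self.head(out).unbind(-1)
        return torch.sigmoid(collision_logit), bumpiness

model = EventPredictor()
probs, bump = model(torch.randn(1, 3, 96, 128), torch.randn(1, 8, 2))
print(probs.shape, bump.shape)  # torch.Size([1, 8]) torch.Size([1, 8])
```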


A planner then uses these predictions to choose the action sequence most likely to reach the target. This approach has one major advantage over treating path traversal as a purely geometric problem: traditional techniques would steer around tall grass because it registers as an obstacle, whereas BADGR learns that it can simply drive through it. It also allows BADGR to improve as more data is collected. The researchers note that:

The key insight behind BADGR is that, by learning directly from real-world experience, BADGR can learn about navigational affordances, improve as it gathers more data, and generalize to unseen environments.
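A minimal sketch of the planning step, assuming the hypothetical `EventPredictor` above: sample random candidate action sequences, score each with the model's predictions, and execute only the best sequence's first action before replanning, MPC-style. The cost weighting and action parameterization here are illustrative assumptions:

```python
# Illustrative sampling-based planner over the predictor's event estimates.
import torch

def plan(model, image, goal_heading, num_samples=128, horizon=8):
    # Candidate action sequences: (steering, speed) pairs in [-1, 1].
    actions = torch.rand(num_samples, horizon, 2) * 2 - 1
    with torch.no_grad():
        collision_prob, bumpiness = model(
            image.expand(num_samples, -1, -1, -1), actions)
    # Penalize predicted collisions and bumpiness; reward steering that
    # points toward the goal (illustrative weights).
    heading_err = (actions[..., 0] - goal_heading).abs()
    cost = (10.0 * collision_prob + bumpiness + heading_err).sum(dim=1)
    best = cost.argmin()
    return actions[best, 0]  # execute the first action, then replan

first_action = plan(EventPredictor(), torch.randn(1, 3, 96, 128),
                    goal_heading=0.2)
print(first_action)
```

Replanning at every step keeps the robot responsive: if the predictions turn out wrong, the next camera observation corrects the plan.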

The team says the success of BADGR raises a number of open questions. Chief among them: how can robots safely collect data in unseen, potentially hostile environments? And how can BADGR adapt to dynamic environments with moving obstacles, such as people walking?

The paper has been published on arXiv. The researchers have also released their code in BADGR's GitHub repository.