Researchers at the Massachusetts Institute of Technology have created a simulation system that can train self-driving cars on a virtually unlimited range of steering scenarios. The goal is to help autonomous vehicles learn to navigate worst-case situations before they drive freely on real streets. At present, the control systems of autonomous vehicles rely heavily on real-world data sets recorded from the driving trajectories of human drivers.
From these data, self-driving cars can learn to imitate safe steering and control in a variety of situations. But little real data exists for dangerous edge cases, such as near-collisions or being forced off the road. A computer program known as a simulation engine aims to train self-driving systems to recover from such scenarios by rendering detailed virtual worlds that mimic real-world conditions.
The MIT researchers addressed this problem using an immersive simulator called VISTA (Virtual Image Synthesis and Transformation for Autonomy). The system uses a small data set captured by human drivers on the road to synthesize a virtually unlimited number of new viewpoints from trajectories the vehicle could take, preparing it for deployment in the real world.
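The core idea, expanding a single recorded trajectory into many nearby viewpoints, can be sketched roughly as follows. This is a simplified illustration, not MIT's actual rendering pipeline: it merely perturbs a recorded 2-D pose with lateral offsets and heading changes to enumerate hypothetical camera positions, and all function and variable names are invented for this example.

```python
import math

def synthesize_viewpoints(recorded_pose, lateral_offsets, heading_offsets):
    """Generate hypothetical camera poses around one recorded pose.

    recorded_pose: (x, y, heading) of the human-driven car, in metres/radians.
    Returns a list of perturbed (x, y, heading) tuples -- stand-ins for the
    viewpoints a simulator would render images from.
    """
    x, y, heading = recorded_pose
    views = []
    for d in lateral_offsets:          # shift sideways, perpendicular to travel
        for dh in heading_offsets:     # rotate the virtual camera
            vx = x + d * math.cos(heading + math.pi / 2)
            vy = y + d * math.sin(heading + math.pi / 2)
            views.append((vx, vy, heading + dh))
    return views

# One recorded pose expands into a grid of nearby viewpoints.
views = synthesize_viewpoints((0.0, 0.0, 0.0), [-1.0, 0.0, 1.0], [-0.1, 0.0, 0.1])
print(len(views))  # 9 viewpoints from a single real sample
```

Rendering a photorealistic image from each perturbed pose is the hard part the real system solves; the sketch only shows why a small recorded data set can yield a much larger training distribution.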
In this immersive simulator, the driving system is rewarded for reaching its destination safely and without crashing, so it learns to handle whatever situations it encounters, from changing lanes to regaining control after a near-crash. In tests, a controller trained in the MIT simulator was safely deployed in a full-size driverless car and navigated previously unseen streets.
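In reinforcement-learning terms, being "rewarded for driving safely without a crash" could be expressed as a per-step reward function like the toy one below. The function, thresholds, and values are illustrative assumptions, not the researchers' actual formulation: forward progress earns a small positive reward, while collisions or leaving the lane end the episode with a penalty.

```python
def driving_reward(progress_m, lane_offset_m, crashed, lane_half_width_m=1.5):
    """Toy reward for one simulator step.

    progress_m:    metres advanced toward the destination this step.
    lane_offset_m: distance from the lane centre, in metres.
    crashed:       whether the vehicle collided this step.
    Returns (reward, episode_done).
    """
    if crashed:
        return -100.0, True   # crashing ends the episode with a large penalty
    if abs(lane_offset_m) > lane_half_width_m:
        return -10.0, True    # leaving the lane also terminates the episode
    return progress_m, False  # otherwise, reward forward progress

print(driving_reward(0.8, 0.2, False))  # safe step -> (0.8, False)
print(driving_reward(0.0, 0.0, True))   # crash -> (-100.0, True)
```

Because unsafe behaviour terminates the episode and forfeits future reward, a policy trained against such a signal is pushed toward recovering to the lane centre, which is the behaviour the article describes.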
The system was able to recover the car to a safe driving trajectory within seconds when placed in a variety of simulated near-crash situations. The MIT work was done in collaboration with the Toyota Research Institute.