A team from the Massachusetts Institute of Technology (MIT) is studying whether the artificial intelligence in self-driving cars can classify the character of surrounding human drivers, New Atlas reports. The team at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) hopes that accurately assessing a human driver's character will enable self-driving cars to better predict driver behavior. In a world where AI-driven cars share the road with human-operated ones, this should mean greater safety.
The AI currently used for self-driving cars largely assumes that all humans behave the same way, and it spends considerable resources adapting when they do not. The result is extreme caution: when an AI attempts to navigate a four-way stop, for example, it may wait a long time before proceeding. This caution reduces the chance of an accident at the intersection itself, but it can create other hazards as drivers in and around the waiting car react to its highly conservative driving behavior.
A new paper by the CSAIL team outlines how methods from social psychology and game theory can be used to classify human drivers in ways useful to an AI. A measure called "social value orientation" (SVO) rates drivers on a spectrum from self-interested ("egoistic") to altruistic ("prosocial"). The goal is to train the AI to assign SVO scores to other drivers, build risk assessments from those scores, and use that information to adjust its own behavior.
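In the social-psychology literature, SVO is commonly expressed as an angle describing how much weight a person places on others' reward relative to their own. The sketch below illustrates that idea; the function names and category thresholds are illustrative assumptions, not the authors' implementation:

```python
import math

def svo_angle(reward_self: float, reward_other: float) -> float:
    """SVO expressed as an angle in degrees: 0 degrees means all weight
    on one's own reward (purely selfish); 90 degrees means all weight
    on the other driver's reward (purely altruistic)."""
    return math.degrees(math.atan2(reward_other, reward_self))

def classify(angle_deg: float) -> str:
    """Coarse SVO categories; the cutoff values here are illustrative."""
    if angle_deg < 22.5:
        return "egoistic"
    elif angle_deg < 57.5:
        return "prosocial"
    else:
        return "altruistic"

# A driver who weights their own reward and the other driver's reward
# equally sits at 45 degrees, the center of the prosocial band:
angle = svo_angle(reward_self=1.0, reward_other=1.0)
print(round(angle, 1), classify(angle))  # 45.0 prosocial
```

In the driving setting, the rewards would be inferred from observed trajectories (for example, whether a driver gives up speed to let another car merge) rather than supplied directly.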
In simulation tests, the computer observed a short snippet of other cars' motion, which improved the AI's prediction of those cars' movements by 25 percent. In a left-turn simulation, for example, the computer could more accurately judge whether it was safe to enter the intersection based on the oncoming driver's estimated SVO.
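The paper's actual decision model is not reproduced here, but the idea of conditioning an unprotected left turn on the oncoming driver's estimated SVO can be sketched roughly as follows (the gap thresholds and function name are illustrative assumptions, not the authors' model):

```python
def accept_gap(gap_seconds: float, oncoming_svo_deg: float) -> bool:
    """Decide whether to begin an unprotected left turn.

    A more prosocial oncoming driver (higher SVO angle, 0-90 degrees)
    is assumed more likely to slow down and let us in, so a slightly
    smaller time gap is accepted. All numbers are illustrative.
    """
    base_gap = 6.0  # seconds required against a fully selfish driver
    discount = 2.0 * (oncoming_svo_deg / 90.0)  # up to 2 s of credit
    return gap_seconds >= base_gap - discount

print(accept_gap(5.0, oncoming_svo_deg=0))   # False: selfish driver, wait
print(accept_gap(5.0, oncoming_svo_deg=60))  # True: prosocial driver, go
```

This captures the behavior described in the article: the same physical gap is treated differently depending on how cooperative the oncoming driver is judged to be.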
The team still needs to refine its algorithms for evaluating other drivers, so the system is not yet ready for real-world deployment. Even in vehicles that are not self-driving, SVO could improve safety. For example, a car entering a driver's blind spot could be assessed for SVO to calibrate the level of warning given to the driver, and a rear-view-mirror alert about an approaching aggressive driver could help the driver respond to seemingly erratic or hostile behavior.
"Creating more human-like behavior in self-driving cars is critical to the safety of passengers and surrounding vehicles, since behavior that is predictable lets humans understand and respond appropriately to the self-driving car's actions," said Wilko Schwarting, an MIT graduate student and lead author of the paper.
The MIT team plans to advance the study by applying SVO modeling to pedestrians, cyclists, and other agents in driving environments. They also plan to investigate how the approach could apply to robotic systems other than cars, such as household robots.
The paper is due to be published this week in the Proceedings of the National Academy of Sciences.