Researchers at the Massachusetts Institute of Technology (MIT) have created a new system that may one day help self-driving cars share the road safely with human drivers. The team has been exploring whether self-driving cars can be programmed to classify the social personality of other drivers, so that they can better predict what those cars will do.
This would allow self-driving cars to drive more safely around other vehicles. The team published a paper on integrating tools from social psychology to classify the driving behavior of a particular driver as selfish or selfless. The team used a measure called “social value orientation” (SVO), which indicates the degree to which someone is selfish (egoistic) versus altruistic or cooperative (prosocial). The system estimates a driver’s SVO and computes a real-time driving trajectory for the self-driving car. The algorithm was tested on lane merges and unprotected left turns, and it allowed the team to predict the likely behavior of other cars 25 percent better.
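In the social-psychology literature, SVO is often expressed as an angle that trades off the weight a person places on their own reward versus the reward of others. The article does not give the team's exact formulation, but a minimal sketch of that standard angular model, with illustrative reward values, looks like this:

```python
import math

def svo_utility(own_reward, other_reward, svo_angle_deg):
    """Weighted utility under a Social Value Orientation angle.

    An angle near 0 degrees is purely selfish (only one's own reward
    counts), near 45 degrees is prosocial (own and others' rewards are
    weighted about equally), and near 90 degrees is purely altruistic.
    """
    phi = math.radians(svo_angle_deg)
    return math.cos(phi) * own_reward + math.sin(phi) * other_reward

# Yielding to another car: low reward for oneself (0.2), high reward
# for the other driver (1.0). These numbers are illustrative only.
selfish_value = svo_utility(own_reward=0.2, other_reward=1.0, svo_angle_deg=0)
prosocial_value = svo_utility(own_reward=0.2, other_reward=1.0, svo_angle_deg=45)
```

Under this model, a prosocial driver assigns noticeably more value to yielding than an egoistic one does, which is the kind of difference a self-driving car could exploit when predicting whether an oncoming car will give way.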
In a left-turn simulation, cars running the new algorithm waited when oncoming drivers behaved more selfishly and turned when the other car was more cooperative. The new algorithm, while not yet robust enough for real roads, could help drivers in several ways.
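The wait-or-turn behavior described above can be sketched as a simple threshold on the estimated SVO angle. The threshold value and function names here are hypothetical, not from the paper:

```python
def should_turn(estimated_svo_deg, threshold_deg=30.0):
    """Hypothetical decision rule for an unprotected left turn:
    commit to the turn only when the oncoming driver's estimated SVO
    suggests cooperative behavior (they are likely to yield);
    otherwise wait for a safer gap.
    """
    return estimated_svo_deg >= threshold_deg

decisions = [should_turn(45.0), should_turn(5.0)]
```

A real planner would combine this estimate with distance, speed, and uncertainty rather than a single cutoff, but the sketch captures the qualitative behavior the simulation reports: yield to selfish drivers, proceed against cooperative ones.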
The algorithm could be used to warn a human driver that a car in their blind spot has an aggressive driver, allowing them to adjust accordingly. It could also one day make self-driving cars behave more like humans. One problem with today's self-driving cars is that they are programmed to assume all drivers behave the same way. This makes them overly conservative at four-way stops and other intersections, which can frustrate human drivers.