MIT’s latest research: Giving self-driving cars social awareness can improve road safety

According to MIT News, a team of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) used tools from social psychology to classify drivers’ behavior according to their social value orientation, i.e., how selfish or cooperative they are. This improves the accuracy with which autonomous vehicles predict the driving behavior of other cars on the road and, ultimately, makes travel safer. The paper has been published in the Proceedings of the National Academy of Sciences.

In the age of autonomous driving, even cars with powerful sensors and complex data processing capabilities lack what almost every 16-year-old has: social consciousness.

Although self-driving technology has made great progress, autonomous vehicles still think in binary terms: they treat the vehicles they encounter on the road merely as obstacles, ignoring the intentions, motivations, and personalities of the drivers behind them.


Image credit: MIT News

A new direction for autonomous driving

Recently, in an attempt to give self-driving cars more social awareness, a team at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) explored whether self-driving cars can be programmed to classify the social character of other drivers, so that they can better predict the likely driving behavior of surrounding cars and thereby drive more safely.

The research team combined social psychology and game theory into a framework for reasoning about social behavior on the road. In the newly published paper, the researchers used tools from social psychology to classify driving behavior according to how selfish or selfless a driver is. Specifically, they used Social Value Orientation (SVO), a measure of how much a person weighs their own benefit against the benefit of others, to inform the real-time trajectory planning of self-driving cars.
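To make the idea concrete, here is a minimal Python sketch, not the team’s implementation: it assumes the common angular parameterization of SVO, in which a driver’s utility blends their own reward with the reward of other road users, and the threshold angles used for labelling are illustrative only.

```python
import math

def svo_utility(reward_self: float, reward_other: float, svo_angle: float) -> float:
    """Blend a driver's own reward with the reward of other road users.

    svo_angle is in radians: 0 corresponds to a purely egoistic driver,
    while larger angles give more weight to others.
    """
    return math.cos(svo_angle) * reward_self + math.sin(svo_angle) * reward_other

def label_svo(svo_angle: float) -> str:
    """Map an SVO angle to a coarse category (boundaries are illustrative)."""
    degrees = math.degrees(svo_angle)
    if degrees < 22.5:
        return "egoistic"      # mostly maximizes own progress
    elif degrees < 57.5:
        return "prosocial"     # trades some progress to help others
    else:
        return "altruistic"    # prioritizes other drivers

# Example: a driver with a small SVO angle barely values letting others in.
print(svo_utility(reward_self=1.0, reward_other=0.5, svo_angle=math.radians(10)))
print(label_svo(math.radians(10)))   # -> "egoistic"
```

In this parameterization, a single angle summarizes where a driver sits on the spectrum from self-interested to cooperative, which is what makes it convenient to plug into a planner’s predictions.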

When testing the algorithm in simulated lane-merging and unprotected left-turn tasks, the team found that autonomous vehicles using this approach improved the accuracy of predicting other vehicles’ behavior by 25 percent.

For example, in the left-turn simulation, if the self-driving vehicle predicts that the oncoming driver is more selfish, it waits and lets that car pass first; if it predicts a more prosocial driver, it may make the turn right away. Similarly, in the lane-merging scenario there are usually two kinds of drivers: prosocial drivers are willing to let other cars merge into the lane, while self-centered drivers are not.
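The qualitative behavior described above can be sketched as a simple decision rule. This is only an illustration of the idea, not the actual planner; it assumes the coarse SVO label from the earlier sketch and a hypothetical time-gap check whose margins are made up for the example.

```python
def should_turn_left(estimated_svo_label: str, gap_seconds: float,
                     safe_gap_seconds: float = 6.0) -> bool:
    """Decide whether to take an unprotected left turn in front of an
    oncoming car, given a coarse estimate of that driver's SVO.

    An egoistic driver is unlikely to slow down for us, so we demand a
    comfortable time gap; a prosocial driver is expected to yield, so a
    tighter gap is acceptable. The margins are illustrative.
    """
    if estimated_svo_label == "egoistic":
        return gap_seconds > safe_gap_seconds          # wait for a large gap
    return gap_seconds > 0.5 * safe_gap_seconds        # prosocial: turn earlier

# Example: with a 4-second gap, give way to an egoistic driver but turn
# ahead of a prosocial one.
print(should_turn_left("egoistic", gap_seconds=4.0))    # False -> yield
print(should_turn_left("prosocial", gap_seconds=4.0))   # True  -> turn
```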

The paper was published in the Proceedings of the National Academy of Sciences. Wilko Schwarting, a graduate student and the paper’s first author, says that drivers’ tendencies to cooperate or compete often spill over into how they drive. In this paper, the team attempted to quantify those tendencies.

The paper’s co-authors also include MIT professors Sertac Karaman and Daniela Rus, research scientist Alyssa Pierson, and former CSAIL postdoc Javier Alonso-Mora.

Wilko Schwarting also said that building more human-like behavior into self-driving vehicles is critical to the safety of passengers and of surrounding vehicles, because driving in a predictable way lets the drivers of other cars anticipate what the autonomous vehicle will do and respond appropriately.

Challenges and opportunities coexist

At the same time, one of the central problems with today’s self-driving cars is that they are programmed to assume all humans behave in the same way, which makes them overly cautious at intersections. While this caution reduces the chance of an accident, it also inconveniences other drivers; after all, a large share of traffic accidents are rear-end collisions caused by driver impatience.

It’s worth noting that although the system is not yet used in real driving situations, it could also help human drivers. Suppose you are driving and a car suddenly enters your blind spot: the system could then warn you, for example via the rearview mirror, that the driver of that car appears aggressive, so you can adjust accordingly. At the same time, the system can help self-driving cars learn more about human behavior, making their own behavior easier for the drivers of surrounding cars to understand.
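As a toy illustration of that driver-assistance idea (not the paper’s system), the alert could be as simple as flagging vehicles in the blind spot whose estimated SVO suggests they are unlikely to yield. The function name and threshold below are hypothetical.

```python
from typing import Optional

def blind_spot_alert(estimated_svo_deg: float, in_blind_spot: bool,
                     egoistic_threshold_deg: float = 22.5) -> Optional[str]:
    """Warn the driver when an apparently aggressive car enters the blind spot.

    estimated_svo_deg would come from observing that driver's recent behavior;
    the threshold separating 'egoistic' from 'prosocial' is illustrative.
    """
    if in_blind_spot and estimated_svo_deg < egoistic_threshold_deg:
        return "Caution: aggressive driver in blind spot"
    return None

print(blind_spot_alert(estimated_svo_deg=10.0, in_blind_spot=True))
```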

As Daniela Rus says:

By modeling driver personalities and integrating social value orientation into the decision-making module of a self-driving car, this work opens the door to safer and more seamless sharing of the road between human-driven and self-driving cars.

In the next phase, the research team plans to apply the approach to other road users, including pedestrians and cyclists, in driving environments. They also plan to study other robotic systems that operate close to humans, such as household robots, and to integrate social value orientation into those systems’ prediction and decision-making algorithms. Alyssa Pierson notes that the ability to estimate social value orientation directly from observed behavior, outside laboratory conditions, is also important for fields beyond autonomous driving.
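To make “estimating SVO from observed behavior” concrete, here is a hedged sketch of one simple way it could be done: compare a driver’s observed choices against the choices each candidate SVO angle would predict, and keep the angle that explains them best. The grid search and the softmax choice model are assumptions for illustration, not the paper’s estimator.

```python
import math

def svo_utility(reward_self: float, reward_other: float, angle: float) -> float:
    # Blend own reward with the other agent's reward (angular SVO form).
    return math.cos(angle) * reward_self + math.sin(angle) * reward_other

def choice_likelihood(observed_choice: int, utilities, temperature: float = 1.0) -> float:
    # Softmax probability of the option the driver actually chose.
    exps = [math.exp(u / temperature) for u in utilities]
    return exps[observed_choice] / sum(exps)

def estimate_svo(observations, candidate_angles_deg=range(0, 91, 5)):
    """Grid-search the SVO angle (in degrees) that best explains observed choices.

    `observations` is a list of (observed_choice, options) pairs, where each
    option is a (reward_self, reward_other) tuple from that driver's viewpoint.
    """
    best_angle, best_loglik = None, float("-inf")
    for angle_deg in candidate_angles_deg:
        angle = math.radians(angle_deg)
        loglik = 0.0
        for observed_choice, options in observations:
            utilities = [svo_utility(rs, ro, angle) for rs, ro in options]
            loglik += math.log(choice_likelihood(observed_choice, utilities))
        if loglik > best_loglik:
            best_angle, best_loglik = angle_deg, loglik
    return best_angle

# Example: a driver who repeatedly chooses "keep going" over "yield"
# is best explained by a small SVO angle, i.e., an egoistic driver.
obs = [
    (0, [(1.0, 0.0), (0.3, 1.0)]),
    (0, [(0.9, 0.1), (0.4, 0.8)]),
]
print(estimate_svo(obs))  # prints a small angle
```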
