Existing self-driving technology mainly detects obstacles on the road ahead using LiDAR and radar, but in foggy conditions LiDAR becomes unreliable and radar alone produces images too sparse to identify vehicles. Now, however, engineers have found that a dual-radar setup can do the job well.
A LiDAR (Light Detection and Ranging) sensor measures the shape and distance of an object by sending out a laser pulse and timing how long the light takes to reflect back from the object. A radar unit works on the same principle, emitting radio waves that are reflected back by objects in its path.
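The time-of-flight principle behind both sensors is simple: the round-trip time of the pulse, multiplied by the speed of light and halved, gives the distance. A minimal sketch (the function name and example timing are illustrative, not from the article):

```python
# Illustrative time-of-flight ranging, as used by both LiDAR and radar.
# The pulse travels to the object and back, so the one-way distance is
# half the round-trip path. Assumes free-space propagation.

C = 299_792_458.0  # speed of light, metres per second

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to the reflecting object from the pulse's round-trip time."""
    return C * t_seconds / 2.0

# An echo arriving 200 nanoseconds after the pulse corresponds to ~30 m.
print(round(range_from_round_trip(200e-9), 2))  # → 29.98
```

The same arithmetic applies whether the pulse is laser light or a radio wave; only the wavelength, and therefore how the pulse interacts with fog, differs.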
Unfortunately, airborne particles such as fog, dust, rain or snow absorb and scatter the light used by LiDAR, making it unreliable. Radar is not adversely affected, but it can only generate a partial image of what it detects – even under ideal conditions, only a small fraction of the radio signals it emits are reflected back to its sensor.
A team at the University of California, San Diego, led by Professor Dinesh Bharadia, addressed this problem by installing two radar units on the bonnet of the car, spaced about a car's width apart (1.5 m / 4.9 ft).
The team then used a special algorithm to combine the reflections received by both units into a composite image, while filtering out unrelated background "noise." The setup has been successfully tested under simulated foggy conditions.
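One intuition behind fusing two radars is that a genuine object shows up in both units' overlapping views, while spurious noise returns rarely agree across sensors. The sketch below is not the UCSD team's actual algorithm – it is a simplified cross-validation idea, with made-up point coordinates, that keeps only detections confirmed by both radars:

```python
# Simplified sketch of fusing detections from two radar units.
# Not the published algorithm: we keep only (x, y) points that both
# radars report within a matching tolerance, which discards noise
# returns seen by just one sensor.
import math

def fuse_detections(radar_a, radar_b, tol=0.5):
    """Keep points from radar_a that a nearby radar_b point confirms."""
    fused = []
    for ax, ay in radar_a:
        if any(math.hypot(ax - bx, ay - by) <= tol for bx, by in radar_b):
            fused.append((ax, ay))
    return fused

# Both radars see a vehicle at roughly (0.0, 20.0); each also reports
# one spurious return that the other does not confirm.
a = [(0.0, 20.0), (5.2, 3.1)]    # second point: noise
b = [(0.1, 19.9), (-4.7, 12.8)]  # second point: noise
print(fuse_detections(a, b))  # → [(0.0, 20.0)]
```

In practice the team's method also exploits the two vantage points to sharpen resolution, not just to reject noise, but the agreement test above captures the basic benefit of overlapping fields of view.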
"By having two radars at different vantage points with an overlapping field of view, we can create a region of high-resolution, with a high probability of detecting the objects that are present," said Kshitiz Bansal, a doctoral student on the project.
The researchers are now in talks with Toyota, which may combine the technology with optical cameras on its vehicles – a pairing that could eventually eliminate the need for more expensive LiDAR sensors.