At a time when Tesla’s market capitalisation has hit another high and CEO Elon Musk is in the spotlight, industry experts are warning the company not to be too risky. Tesla’s share price has now exceeded $1,000, making it the world’s most valuable carmaker ahead of Toyota. Behind the capital boom, Tesla’s sales in China also surged again in May, with 11,095 China-made Model 3s sold, up 205 percent month-on-month.
But the boom cannot hide potentially deadly product flaws. On June 1, a Tesla Model 3 travelling on a Taiwanese highway crashed into a white box truck that had overturned on the road. The car’s Autopilot driver-assistance system (AP) was reportedly engaged at the time of the incident.
Notably, this is not the first time Tesla’s self-driving system has failed to recognize white objects in certain conditions. This “colour-blind” bug has contributed to multiple casualties, and the reliability of the system has been called into question.
AP cannot recognize white objects
According to the owner, his Model 3 had AP engaged at the time of the crash and was travelling at about 110 km/h. He braked hard as soon as he saw the truck, but with so little time and distance left to brake, the car still hit it.
Fortunately, the goods carried by the truck were a soft, cream-like material, which cushioned the collision considerably. Although the Model 3 was partially damaged, the driver was not injured.
Various indications and analyses suggest the accident was mainly caused by a bug in the self-driving system. Public information shows that Tesla’s current AP, and even its higher-tier FSD self-driving package, rely on a camera-centred, multi-sensor-fusion vision approach. The Model 3 is reportedly equipped with eight cameras, 12 ultrasonic sensors and an enhanced millimetre-wave radar.
“Cameras have inherent flaws. Under strong reflections, or with predominantly white objects, they may judge an obstacle to be a cloud or even conclude there is no obstacle at all. In strong light especially, the shortcomings of Tesla’s approach are obvious,” Yang Shengbing, an associate professor at Wuhan University of Technology’s School of Automotive Engineering, told the Daily Economic News. Even though Tesla also fits millimetre-wave radar, he said, its perception range is limited, and if the speed is too high the system cannot respond in time: algorithms take time to run, and enough time must be left for detection and reaction.
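Yang’s point about latency can be illustrated with a back-of-the-envelope stopping-distance estimate. The latency and deceleration figures below are illustrative assumptions for the sketch, not Tesla specifications:

```python
# Rough stopping-distance estimate: distance covered while the system is
# still sensing and computing, plus the physical braking distance.
# Assumed figures (not from Tesla): 0.5 s sense-plus-compute delay,
# 7 m/s^2 of braking deceleration.

def stopping_distance(speed_kmh: float, latency_s: float, decel_ms2: float) -> float:
    """Distance travelled during perception latency plus braking distance."""
    v = speed_kmh / 3.6                    # convert km/h to m/s
    reaction = v * latency_s               # covered before braking even starts
    braking = v * v / (2 * decel_ms2)      # kinematics: v^2 = 2 * a * d
    return reaction + braking

# At 110 km/h, the speed reported in the Taiwan crash:
d = stopping_distance(110, 0.5, 7.0)
print(round(d, 1))  # roughly 82 m before the car comes to rest
```

Under these assumptions the car needs on the order of 80 metres to stop, of which about 15 metres pass before braking begins at all, which is why a late or missed detection at highway speed leaves essentially no margin.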
In fact, this is not the first accident to expose flaws in the system. In May 2016, a Florida man driving a Model S with AP engaged crashed into a white semi-trailer crossing the road. The Model S’s roof was sheared off and the driver was killed on the spot, in the first fatal accident attributed to a Tesla AP failure.
Three years later, also in Florida, a Tesla Model 3 travelling at 110 km/h crashed into a white tow truck that was slowly crossing the road. The Model 3 again had AP engaged, but neither the driver nor AP took evasive action; the car’s roof was sheared off and the driver was killed on the spot.
The Daily Economic News sought confirmation from Tesla about AP’s inability to identify white objects, but Tesla had not responded to a request for comment as of the time of writing.
Is the vision-based approach unreliable?
A succession of accidents has called Tesla’s AP into question and placed its vision-based solution under scrutiny.
In autonomous driving there are currently two mainstream sensor routes: one, represented by Waymo, is lidar-centred with other sensors assisting; the other, represented by Tesla, is camera-centred multi-sensor fusion. In practice, both approaches have run into a series of problems.
“Lidar is expensive, hard to mass-produce in the short term, and has a short service life. Laser points on a distant obstacle are very sparse, which limits safe driving speed,” the head of self-driving start-up Nullmax told the Daily Economic News, adding that lidar carries no colour information and must be paired with cameras to identify traffic lights, road signs and other information.
“A vision-based solution can handle scenes such as light rain, snow, mist and night, but severe weather is a major driving challenge for vision, for lidar, and even for human drivers,” the Nullmax executive said. Beyond developing cameras and other perception technology, he added, companies must also ensure the stability of the overall system and build a complete testing regime.
It is worth noting that Nullmax, founded by a core member of Tesla’s self-driving team, takes a technology path similar to Tesla’s.
Yang Shengbing believes there is little point arguing over which technical route is better; the priority is improving the reliability of both. “For now, improving reliability can only come from strengthened testing and higher vehicle safety levels, including real-world testing. Driverless trials are under way for all kinds of scenarios, and some have achieved good results, but reliability remains a challenge worldwide,” Yang said.
The risk-taking Musk
Even though the vision approach has not solved the reliability problem of autonomous driving, Musk has been bold enough to put it in production vehicles. In his view, the car has become an electronic product whose temporary imperfections can be ironed out through subsequent over-the-air (OTA) upgrades.
In Yang Shengbing’s view, however, Tesla must first learn to be a carmaker: the automotive industry’s functional-safety requirements, development architectures and process systems take a long time to accumulate.
Beyond the AP loophole, Tesla has made plenty of missteps in traditional manufacturing, for example leaving brake feel to user-adjustable settings, and incidents of vehicles suddenly losing power. These events reflect Musk’s incomplete understanding of the car and his failure to follow the automotive industry’s norms and standards.
“Musk is still too risky. A self-driving algorithm should make the most conservative estimate, not the most aggressive one. At high speeds especially, taking risks can put lives in danger,” Yang Shengbing said. Current self-driving algorithms must keep the human as the final level of control, he added; even when the system cannot judge what the obstacle ahead is, it should still slow down and allow intervention.
Compared with Musk’s risk-taking, traditional carmakers and suppliers are relatively conservative, and safety is their bottom line. “Developing L1- and L2-level driver assistance is not hard, but going a level higher requires billions of dollars and long accumulation in the automotive industry; otherwise there will definitely be problems of understanding,” Yang Shengbing said, arguing that companies with many years in the automotive industry, such as BMW, Volvo, Bosch and Aptiv, are more reliable in safety control.
“We will not let immature autonomous-driving aids threaten occupant safety. Once such technology goes into our products, the absolute safety of the occupants must be ensured,” Polestar China President Gao Wei told the Daily Economic News (WeChat ID: nbdnews), saying that L2 driver-assistance features are more mature and safer, giving occupants basic safety assurance along with a moderate assisted-driving experience.
At present, besides Tesla, models such as the Xpeng P7, Changan UNI-T and GAC New Energy Aion V have also begun to carry L3-class self-driving systems. Yang Shengbing reminded consumers: “Before fully autonomous L5 driving arrives, users of cars with self-driving features must treat them as an experience, using them only within limited ranges, fixed scenarios and other safe, controllable conditions.”