In The Near Term, LiDAR Products Like Waymo Are More Viable Than Camera-Only Products Like Tesla: Uber CEO

Self-driving is seeing a divergence of approaches, with Waymo equipping its cars with radar and LiDAR while Tesla relies on a camera-only approach. But the CEO of the world's biggest ride-hailing company believes one has a bigger chance of success than the other, at least in the short term.

Uber CEO Dara Khosrowshahi believes that in the short term, LiDAR- and radar-based approaches will work better for self-driving than camera-only ones. “I personally believe that autonomous vehicles have to have superhuman levels of safety,” Khosrowshahi said on the Nikhil Kamath podcast. “I don’t think it’s good enough for them to be better than humans — they have to be multiple times better than humans,” he added.

“And Waymo has certainly proven that that’s possible,” Khosrowshahi continued. “Elon would tell me I’m wrong — never bet against him — but it’s my instinct that in the near term, it’s going to be very difficult to build a camera-only product that has superhuman levels of safety,” he said.

“Now at some point, will (camera-only self-driving) be possible? Quite possibly, yes. But if you can have instrumentation that includes cameras and LiDAR, and the cost of solid-state LiDAR now is 400, 500 bucks, why not include LiDAR as well in order to achieve superhuman safety?” he said. Khosrowshahi said even if camera-only self-driving were possible one day at scale, he wasn’t sure it would be the better product. “All of our partners that we’re working with now are using a combination of camera, radar, and LiDAR. And I personally think that’s the right solution, but I could be proven wrong,” he added.

Camera-only self-driving systems, such as those deployed by Tesla, rely on a suite of sophisticated cameras to provide a 360-degree view of the vehicle’s surroundings. This approach uses powerful AI and neural networks to interpret the visual data, identifying and classifying objects such as other cars, pedestrians, traffic lights, and road signs. The primary advantage of this method is its relatively low cost compared to systems that incorporate LiDAR, as cameras are already a standard component in modern vehicles for advanced driver-assistance systems (ADAS). Proponents such as Elon Musk argue that since humans drive using only their vision, a sufficiently advanced AI should be able to do the same. However, camera-only systems can be susceptible to adverse weather conditions like heavy rain, fog, or snow, and can also be challenged by direct sunlight, glare, and low-light situations, which can impair their ability to accurately perceive the environment.
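To make the vulnerability concrete, here is a toy Python sketch (not any real ADAS or Tesla code; the base confidence and per-condition penalty factors are invented for illustration) of how adverse conditions can compound to erode a camera-only system's detection confidence:

```python
# Toy model: hypothetical clear-weather detection confidence for a
# camera-based perception stack. All numbers here are illustrative.
BASE_CONFIDENCE = 0.98

# Hypothetical multiplicative penalties for conditions that degrade vision.
CONDITION_PENALTY = {
    "heavy_rain": 0.6,
    "fog": 0.5,
    "direct_glare": 0.7,
    "low_light": 0.8,
}

def camera_confidence(conditions: list[str]) -> float:
    """Multiply the base confidence by a penalty for each active condition."""
    conf = BASE_CONFIDENCE
    for c in conditions:
        conf *= CONDITION_PENALTY.get(c, 1.0)
    return conf

print(camera_confidence([]))                        # clear weather: 0.98
print(camera_confidence(["fog", "direct_glare"]))   # penalties compound
```

The point of the sketch is that penalties multiply: fog plus glare leaves the toy system with roughly a third of its clear-weather confidence, with no other sensor to fall back on.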

In contrast, self-driving systems that utilize a combination of LiDAR and radar in addition to cameras offer a more robust, multi-modal perception of the world. LiDAR (Light Detection and Ranging) uses pulsed lasers to create a precise, three-dimensional map of the surroundings, offering highly accurate depth perception and object detection regardless of lighting conditions. Radar (Radio Detection and Ranging) excels at detecting the speed and distance of objects, even in poor weather where cameras and LiDAR might struggle. By fusing the data from these different sensors, the vehicle can build a more comprehensive and reliable understanding of its environment. This redundancy is a key safety advantage, as the weaknesses of one sensor type are compensated for by the strengths of another. While the cost of LiDAR sensors has historically been a significant barrier to widespread adoption, prices have been falling, making the multi-sensor approach increasingly viable for mass-market autonomous vehicles. It remains to be seen how the self-driving race shapes up, but with Waymo and Tesla likely locked into their respective approaches, the space could end up with a single big winner depending on which style achieves widespread adoption.

Posted in AI