It’s not only LLMs whose capabilities sometimes surprise their creators; self-driving systems are showing similar emergent behaviour too.
Waymo CEO Dmitri Dolgov recently described a moment that stopped him in his tracks: reviewing a log of his car’s behaviour and realising it had done something he genuinely didn’t think it was capable of. It’s the kind of experience — exhilarating, slightly disorienting — that engineers working on sufficiently complex AI systems are increasingly reporting.

“I’ve had some moments where the car does something and I look at a log and I’ve been surprised,” Dolgov said on the Cheeky Pint podcast. “It does things that I didn’t think it was capable of doing. When you build a system and you think you understand how it works, you understand fully the limits of its capability and performance, and then it does something almost magical. It’s exhilarating.”
The specific incident he described took place in San Francisco. A fairly routine urban scenario: Waymo’s car is at an intersection, light is red, cross traffic clears, a bus goes by and stops — partially blocking the lane. The light turns green. The car starts to go, nudging around the bus.
“You see a pedestrian being detected on the other side of the bus,” Dolgov recounted. “The car responds appropriately — it slows down, goes a little wider. And then the pedestrian actually emerges from the bus, and we go on our way.”
That part, in isolation, sounds like good software doing its job. What made Dolgov do a double take was how the system detected the pedestrian in the first place.
“The first time I looked at that log, I thought — what’s going on here? I know we have pretty good sensors and the software is very capable. But we don’t see through stuff. That’s not how cameras or lidars and radars work. It saw the pedestrian on the other side of the bus. And it’s not like you can look at the windows — it’s a massive metal box. You look at the sensor data and it just shouldn’t be able to go through it. And in the camera, you can’t see through because there are reflections and people on the bus. So I thought — maybe it’s noise, or some coincidence. I couldn’t actually believe it.”
What had actually happened was something elegant and unexpected. Waymo’s peripheral lidars had bounced signals under the bus. The return was faint and noisy — just a trace of movement from the pedestrian’s feet. That was enough.
“What actually turned out to be happening is that our peripheral lidars bounced under the bus and there was just a very, very noisy reflection of the movement of the person’s feet. That was enough for the AI models to say: I likely detect a pedestrian there. And moreover, there was enough data there to predict what they were going to do. It just blew my mind.”
The incident is a clean illustration of what makes modern AI systems genuinely different from classical software — and why they can surprise even the people who built them. Waymo’s system wasn’t programmed with a rule that said “check under buses for feet.” It inferred from noisy, partial sensor data that something worth acting on was probably there.
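To make that concrete, here is a minimal sketch of the shape of that inference. It is not Waymo’s pipeline: the `LidarReturn` type, the thresholds, and the hand-tuned scoring are all illustrative stand-ins for the learned models Dolgov describes. The point is the decision rule: a few faint, near-ground returns moving at roughly walking speed are enough to tip the system into a cautious response.

```python
import math
from dataclasses import dataclass


@dataclass
class LidarReturn:
    """One hypothetical lidar point; a real system carries far richer data."""
    x: float           # metres forward of the sensor
    y: float           # metres left of the sensor
    z: float           # metres above the road surface
    radial_vel: float  # metres per second toward/away from the sensor


def pedestrian_likelihood(returns: list[LidarReturn]) -> float:
    """Score a sparse, noisy cluster for 'probably a walking person'.

    Hand-tuned stand-in for a learned detector: near-ground points
    (feet height) moving at roughly walking pace raise the score, and
    even a handful of points counts as meaningful support.
    """
    low = [r for r in returns if r.z < 0.3]  # keep only near-ground points
    if not low:
        return 0.0
    mean_speed = sum(abs(r.radial_vel) for r in low) / len(low)
    # Soft gate centred on walking pace (~0.5 m/s and up).
    moving = 1.0 / (1.0 + math.exp(-8.0 * (mean_speed - 0.5)))
    support = min(len(low) / 5.0, 1.0)  # a few points are already suggestive
    return moving * support


# Four faint returns at ankle height, drifting at ~1.1 m/s behind the bus.
feet = [LidarReturn(12.0, 3.0 + 0.05 * i, 0.1, 1.1) for i in range(4)]
if pedestrian_likelihood(feet) > 0.5:
    print("likely occluded pedestrian: slow down and widen the path")
```

Classical rule-based code would have discarded those four points as noise; a probabilistic detector is free to treat weak evidence as a reason to slow down and go wide.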
This is part of a broader pattern in autonomous driving. Waymo’s multi-sensor approach — cameras, lidar, and radar working in concert — has proven more capable than anticipated, including on freeways where critics predicted it would struggle. The company has also begun using Google DeepMind’s Genie 3 to simulate rare edge cases — tornadoes, wrong-way drivers, animals on the road — precisely because real-world data alone can’t cover the long tail of what a deployed fleet will encounter. The goal is a system robust enough that its responses, even in novel situations, are grounded in something the training process understood — even when the engineers themselves didn’t anticipate the specific application.
Dolgov’s story about the bus and the feet suggests that’s already happening.