Don’t Want AI To Make Humans Disappear, It Should Enable Them: Peter Norvig

AI seems to be a runaway freight train at the moment in terms of its growing capabilities, and even some technologists believe there should be limits to how far it's allowed to progress.

Peter Norvig previously served as a director of research and search quality at Google, and co-authored the seminal textbook Artificial Intelligence: A Modern Approach with Stuart J. Russell, a book used in over 1,500 universities across 135 countries. He has raised concerns about how we conceptualize AI's role in human life. His worry centers not on AI's capabilities themselves, but on a troubling assumption embedded in how we discuss automation: that human presence should diminish as AI advances.

“I see too much of people saying AI is going to be one dimensional and automation’s gonna be one dimensional. And the more the better. And I think that’s a mistake that I’m worried about,” Norvig explains. He points to a specific example that crystallizes his concern: “There’s a great diagram from the Society of Automotive Engineers of level of self-driving cars, and they define that as five levels of self-driving. And they did a great job of that. And that’s really useful. And now you can say, you know, where is Waymo or Tesla? Are they at level two or level three or what level are they at? And that was useful.”

However, Norvig’s issue isn’t with the classification system itself—it’s with the visual representation that accompanied it. “But the diagram they used to accompany those levels was worrying to me because they’ve got this diagram and at level one they have this icon of a person behind the car holding onto the steering wheel, and then when you get up to level five, that person has disappeared and they’ve just become a dotlike outline, right?”

This seemingly innocuous design choice reveals a deeper philosophical problem with how we frame technological progress. “And so it’s like, I don’t want technology that makes me disappear. I want technology that respects me. And I don’t want this trade off to be one dimensional of if I get more automation then I disappear more. I’d rather have it be two dimensional and let me choose.”

Norvig's observations touch on a critical tension in AI development: the assumption that progress means replacing humans rather than empowering them. His call for a "two dimensional" approach, in which automation and human agency aren't inversely related, challenges the prevailing narrative that peak AI achievement means minimal human involvement.

This concern resonates beyond autonomous vehicles. In creative industries, professionals worry about AI art generators replacing illustrators. In customer service, chatbots increasingly handle interactions with no human escalation path. In software development, AI coding assistants raise questions about the future role of programmers. Norvig's perspective suggests these aren't inevitable outcomes but design choices, and that we should be building AI systems that amplify human capabilities and preserve human choice rather than engineering humans out of the picture entirely. As AI capabilities accelerate, his reminder that technology should "respect" rather than erase us offers a crucial framework for evaluating not just what AI can do, but what role we want it to play in human life.