Former Google CEO Eric Schmidt Explains Why The Unpredictability Of AI-Based Weapons Systems Could Deter War

AI-powered warfare sounds like a terrifying consequence of the new technology, but former Google CEO Eric Schmidt has a counterintuitive view of how AI-based weapons systems could deter conflict.

Schmidt, who led Google from 2001 to 2011 and has since become a prominent voice on technology policy and national security, recently offered a striking perspective on the future of warfare. Rather than viewing AI-driven military systems as an escalation of conflict, he argues they could create a new form of deterrence—one based not on mutual assured destruction, but on mutual unpredictability. His analysis centers on the emerging battlefield reality of autonomous drone warfare and the strategic implications of reinforcement learning systems that neither side can fully predict or counter.

The Drone Arms Race

Schmidt begins by describing the current trajectory of military technology: “The next thing that happens is that both sides develop drone capabilities, which is what you’re seeing now, and each then becomes a war of drone against drone. So you have drone against anti-drone, and so then the shift moves to how do you detect the enemy drone, and how do you destroy it before it destroys you.”

This evolution is already visible in conflicts around the world, from Ukraine to the Middle East, where drone technology has fundamentally altered modern warfare. But Schmidt’s analysis goes further, projecting a future state that he believes few have fully considered.

The Unpredictability Paradox

“The ultimate state is very interesting, and I don’t think anyone has foreseen this,” Schmidt explains. “If you go back to our conversation about reinforcement learning and planning, which is what you’re seeing with AI, let’s say that we’re on one side and we have a million drones, and there’s another side over here that has another million drones.”

Here’s where his argument becomes counterintuitive: “Each side will use reinforcement learning AI strategies to do battle plans, but neither side can figure out what the other side’s battle plan is. And therefore the deterrence against attacking each other will be very high. But in an AI world where you’re doing reinforcement learning, you can’t count what the other side is planning. You can’t see it, you don’t know it, and I believe that that will deter what I view as one of the most horrendous things ever done by humans, which is war.”
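Schmidt’s opacity claim is easy to demonstrate in miniature. The sketch below is my own illustration, not a model Schmidt describes: it uses an epsilon-greedy bandit learner as a simplified stand-in for a reinforcement-learning planner. Two sides run the identical training procedure on the identical problem, yet typically converge to different “battle plans,” because the outcome depends on the randomness of training. An adversary cannot recover the plan even by replicating the algorithm.

```python
# Toy illustration (my construction, not Schmidt's model): two planners
# trained independently with epsilon-greedy value estimation settle on
# different "battle plans" purely because of training randomness, so
# neither side can predict the other by re-running the same algorithm.
import random

N_PLANS = 10      # candidate plans, all with roughly equal true payoff
EPISODES = 5000
EPSILON = 0.1

def train_planner(seed: int) -> int:
    """Return the plan index this planner converges to."""
    rng = random.Random(seed)
    values = [0.0] * N_PLANS
    counts = [0] * N_PLANS
    for _ in range(EPISODES):
        if rng.random() < EPSILON:
            plan = rng.randrange(N_PLANS)                       # explore
        else:
            plan = max(range(N_PLANS), key=values.__getitem__)  # exploit
        reward = 1.0 + rng.gauss(0, 0.5)   # noisy, near-identical payoffs
        counts[plan] += 1
        values[plan] += (reward - values[plan]) / counts[plan]  # running mean
    return max(range(N_PLANS), key=values.__getitem__)

# "Our side" and "their side" run the same algorithm on the same game,
# yet the plan each lands on is seed-dependent.
for side, seed in [("side A", 1), ("side B", 2)]:
    print(side, "converged to plan", train_planner(seed))
```

Scaled up to engagements with vastly richer strategy spaces, this seed sensitivity is the opacity Schmidt’s deterrence argument rests on.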

Mutually Assured Destruction 2.0

Schmidt’s vision isn’t one of bloodless victory, but rather of conflicts so devastating that they become unthinkable. “It’s very important to understand that there’s no winners,” he emphasizes. “By the time you had a drone battle of the scale I’m describing, the entire infrastructure of your side will be destroyed. The entire infrastructure of the other side will be destroyed. These are lose-lose scenarios.”

The Broader Implications

Schmidt’s thesis represents a fascinating evolution of Cold War deterrence theory. During the nuclear age, mutual assured destruction (MAD) prevented superpower conflict because both sides understood the consequences. Schmidt suggests AI warfare could create a similar equilibrium, but through opacity rather than transparency—neither side can predict the outcome well enough to risk engagement.
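The contrast between transparency-based and opacity-based deterrence can be made concrete with a back-of-envelope decision model. The numbers and the log-utility assumption below are mine, not Schmidt’s: a risk-averse planner weighs an attack whose average payoff slightly beats the status quo. Holding the average fixed and widening only the spread of possible outcomes, which is what an unpredictable AI opponent contributes, flips the attack from tempting to deterred.

```python
# Back-of-envelope sketch (my framing, not Schmidt's): a risk-averse
# planner compares the status quo against an attack whose outcome it
# cannot predict. Holding the mean payoff fixed, widening the outcome
# spread alone is enough to tip the decision toward restraint.
import math
import random

def certainty_equivalent(mean, spread, samples=100_000, seed=0):
    """Certainty equivalent under log (risk-averse) utility for a
    uniform payoff spread around `mean` (payoffs kept positive)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        payoff = mean + rng.uniform(-spread, spread)
        total += math.log(payoff)
    return math.exp(total / samples)

STATUS_QUO = 1.0
MEAN_ATTACK_PAYOFF = 1.05   # attack looks slightly better on average
for spread in (0.0, 0.4, 1.0):
    ce = certainty_equivalent(MEAN_ATTACK_PAYOFF, spread)
    verdict = "tempting" if ce > STATUS_QUO else "deterred"
    print(f"spread={spread:.1f}  certainty-equivalent={ce:.3f}  attack {verdict}")
```

The threshold behavior is worth noting: mild uncertainty leaves the attack attractive, suggesting that Schmidt’s equilibrium, if it emerges at all, would require AI-versus-AI outcomes to be deeply unforecastable rather than merely noisy.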

This comes as military powers worldwide are racing to develop autonomous weapons systems. The U.S. Department of Defense has made AI a priority under its Replicator initiative, aiming to field thousands of autonomous systems. China has similarly invested heavily in drone swarms and AI military applications. Meanwhile, Ukraine’s conflict has become a testing ground for commercial drone warfare, with both sides deploying increasingly sophisticated autonomous systems.

However, Schmidt’s optimistic deterrence scenario raises profound questions. Will the unpredictability of AI systems truly prevent conflict, or could it lead to accidental escalation when neither side understands what triggered an autonomous response? International efforts to regulate lethal autonomous weapons have stalled, suggesting the world may stumble into this future without adequate safeguards. Whether AI-powered warfare becomes a stabilizing force or an unprecedented catastrophe may depend on decisions made in the coming years—decisions that will be shaped by perspectives like Schmidt’s, even as the technology rapidly outpaces policy frameworks designed to contain it.
