AI agents won’t just change how work is done; they will also force changes to the tools it’s done with.
Jeff Dean, Chief Scientist at Google DeepMind and Google Research, made this case at Nvidia’s GTC 2026 developer conference, where he sat down with Nvidia Chief Scientist Bill Dally for a wide-ranging conversation on the future of AI. Dean’s core argument: as AI agents grow faster and more autonomous, the tools they rely on — built for human-paced work — will become the new bottleneck.

“I do think there’s some challenging things because, as we get these agent-based systems, they typically look like they have a whole bunch of trajectories that they’re rolling out,” Dean said. “You’d like those to be as low-latency as possible for the models to generate the next bit of code or the next set of actions the models can take.”
That demand for low latency runs directly into a problem: the tools agents use were never designed for machine-speed operation. “They’ll interact with some environment, and often the way they interact with that environment is they use tools that were designed for human-speed interaction,” Dean noted. “The startup time of your C compiler is not necessarily something that people pay a lot of attention to, but they need to pay a lot more attention to it.”
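Dean’s compiler example is easy to quantify. The sketch below (an illustration, not anything shown in the talk) uses a no-op Python interpreter launch as a stand-in for a compiler invocation and measures the fixed cost an agent would pay on every call:

```python
import subprocess
import sys
import time

def startup_latency(cmd, runs=5):
    """Average wall-clock cost of launching `cmd` from scratch,
    the way an agent re-invoking a tool pays it on every call."""
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run(cmd, capture_output=True, check=True)
    return (time.perf_counter() - start) / runs

# A no-op interpreter launch stands in for a compiler invocation here;
# even this trivial process typically costs tens of milliseconds.
print(f"{startup_latency([sys.executable, '-c', 'pass']):.3f}s per launch")
```

For a human who compiles a few times an hour, that overhead is invisible; for an agent invoking the tool hundreds of times a minute, it dominates.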
The reason becomes clear when you consider the scale of the speed differential. “In this world where you have an agent that is operating 50 times faster than a human, the startup time of all your tools, I think, is going to start to be an Amdahl’s Law-like bottleneck,” Dean said. “‘Cause if you make your model infinitely fast, you are going to get — depending on what you’re doing — a factor of two or three end-to-end reduction in latency if your tools are a pretty significant fraction of what you’re doing.”
Amdahl’s Law, named after computer scientist Gene Amdahl, is a principle from parallel computing that says the speedup of a system is limited by the parts of it that can’t be sped up. If 50% of a task is parallelisable and 50% is sequential, you can never more than double your total speed, no matter how fast you make the parallel portion. Dean is applying this same logic to AI agents: even if inference becomes near-instantaneous, sluggish tools will cap the real-world gains.
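Dean’s arithmetic follows directly from Amdahl’s formula. A minimal sketch (the function name and example fractions are illustrative, not from his talk):

```python
def amdahl_speedup(accelerated_fraction, factor):
    """Overall speedup when only `accelerated_fraction` of the total
    work is sped up by `factor` (Amdahl's Law)."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# If model inference is half of an agent's task time and becomes
# infinitely fast, the end-to-end speedup caps at 2x.
print(amdahl_speedup(0.5, float("inf")))  # → 2.0

# If inference is two-thirds of the task, the cap is roughly 3x --
# Dean's "factor of two or three" range.
print(round(amdahl_speedup(2 / 3, float("inf")), 2))  # → 3.0
```

The residual term here is exactly the tool overhead Dean is pointing at: once inference is effectively free, slow tools are all that remains of the latency.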
The implication is stark: “I think we’re going to need to start to really re-engineer a lot of the tools that these models use.”
When asked if this is already happening in coding, Dean confirmed it is: “Yeah. It’s happening for coding tools. It’s happening for even being able to sort of manipulate your spreadsheets and your documents.”
Dean’s observation captures something the AI industry has been slow to confront. The focus has been overwhelmingly on making models faster and smarter — but the surrounding software ecosystem still assumes a human is in the loop, operating at human speed. Compilers, file systems, APIs, IDEs — most were architected decades ago, and their latency was never a serious concern because humans are the slowest link in any workflow.
That’s no longer true. AI coding agents like Cursor, GitHub Copilot, and Devin are already writing significant portions of production code at top companies. Google has said over 30% of its code is now AI-generated; at Anthropic, Claude Code is reportedly writing 80% of its own code. OpenAI employees have made similar claims. When agents are generating code at this volume, even marginal tool latency compounds quickly across thousands of concurrent tasks.
Dean had previously said he expects AI to operate like a junior developer within roughly a year — working around the clock, 24×7, without breaks. If that’s the trajectory, the tooling gap Dean identifies will only widen. An agent running 50 times faster than a human and spinning up a slow compiler hundreds of times a day will be throttled in ways that are largely invisible today.
The problem extends beyond code. Dean specifically called out spreadsheets and documents — the everyday instruments of knowledge work. As agentic AI moves into enterprise workflows, the tools that support those workflows — ERPs, CRMs, document editors, data pipelines — face the same reckoning. Most were built for people who think in seconds and minutes, not for agents that think in milliseconds.
This is the next infrastructure challenge of the AI era. The chip race, the model race, the inference optimisation race — all of it runs into Amdahl’s Law if the tools sitting at the edge of the pipeline aren’t rebuilt for machine-speed consumption. The gains from faster models will increasingly be eaten by the overhead of tools that haven’t kept pace.
The broader economic implications are significant. AI agents are on course to reshape how software is built, how decisions are made, and how organisations allocate human attention — compressing timelines, reducing headcount on routine tasks, and opening up entirely new categories of automation. But the full productivity dividend won’t be realised until the underlying tools catch up. The compiler, the spreadsheet, the document editor — in an agent-first world, these aren’t just convenience software. They’re infrastructure. And right now, they’re not built for what’s coming.