Thus far, most AI leaders have told people to learn how to use AI tools, insisting that the future will belong to those who can harness and manage AI to get work done. But that advice may not hold for much longer.
In a recent interview with the New York Times, Anthropic CEO Dario Amodei offered a sobering assessment of the timeline for AI autonomy in the workplace. His comments suggest that the current “centaur” model of human-AI collaboration—where humans supervise and direct AI systems—may be a fleeting phase, particularly in fields like software engineering. What’s striking about Amodei’s warning is not just the inevitability of AI displacement, but the speed at which it could occur.

“What we’re going to see is first, the model only does a piece of what the human software engineer does, and that increases their productivity,” Amodei explained. “Then even when the models do everything that human software engineers used to do, the human software engineers kind of take a step up and they act as managers and supervise the systems.”
This intermediate stage, where humans oversee AI work, draws its name from a chess analogy. As Amodei noted: “This is where the term centaur is used to describe essentially man and horse fused, AI and engineer working together. This is like centaur chess. After Garry Kasparov was beaten by Deep Blue, there was an era that I think for chess was 15 or 20 years long where a human checking the output of the AI playing chess was able to defeat any human or any AI system alone. That era at some point ended, and then it’s just the machine.”
The parallel to software development is clear, and concerning. “I think we’re already in our centaur phase for software, and during that centaur phase, if anything, the demand for software engineers may go up,” Amodei said. “But the period may be very brief.”
It’s this brevity that forms the core of Amodei’s worry. “I have this concern for entry-level white collar work, for software engineering work. It’s just going to be a big disruption. My worry is just that it’s all happening so fast. People talk about previous disruptions, right?”
The implications of Amodei’s assessment are significant for both workers and policymakers. Unlike previous technological disruptions—the Industrial Revolution, the rise of computers, even the internet—which unfolded over decades and allowed for workforce adaptation, AI’s trajectory suggests a compression of these phases into a matter of years.

The chess analogy is particularly apt: the centaur era in chess lasted roughly two decades before machines became definitively superior. In software engineering, Amodei suggests, that transitional period could be far shorter. This acceleration leaves little time for workers to retrain, for educational institutions to adapt curricula, or for social safety nets to be established. The traditional advice to “learn to work with AI” may offer only a temporary reprieve if the supervising role itself becomes obsolete.

Recent developments support this timeline concern. AI coding assistants have rapidly evolved from autocomplete tools to systems capable of generating entire codebases, and Anthropic’s own Claude Code and competing systems from OpenAI and Google are demonstrating increasingly sophisticated reasoning and task completion abilities. For entry-level workers especially, the window to establish expertise that AI cannot replicate may be narrowing faster than many realize.