AI keeps getting better across a widening range of tasks, and one of the foremost figures in the field argues that the progress will not stop until machines can do everything humans can do.
Ilya Sutskever, co-founder and former Chief Scientist of OpenAI, whose research has been foundational to modern deep learning, has laid out a logical, albeit “intense,” argument for why Artificial General Intelligence (AGI) is not a matter of if, but when. He asks us to look past today’s capabilities and limitations and consider the fundamental nature of intelligence itself.

He begins by acknowledging the uncertainty of the timeline but stresses the certainty of the outcome. “In some number of years—some people say three, some say five or ten, numbers are being thrown around—it’s a bit hard to predict the future. But slowly or maybe not so slowly, AI will keep getting better, and the day will come when AI will do all the things that we can do,” Sutskever states. He clarifies that he isn’t talking about a subset of human abilities: “Not just some of them, but all of them. Anything which I can learn, anything which any one of you can learn, the AI could do as well.”
For Sutskever, the logic is disarmingly simple and rests on a single, powerful analogy. “How can I be so sure of that? The reason is that all of us have a brain, and the brain is a biological computer,” he argues. “So why can’t a digital computer, a digital brain, do the same things? This is the one-sentence summary for why AI will be able to do all those things.”
This inevitable development, he warns, forces us to confront dramatic, world-altering questions. “You can start asking yourselves, what’s going to happen when computers can do all of our jobs? Those are really big questions,” he says. “You start thinking about it a little bit, you go, ‘Gosh, that’s a little intense.’ But it’s actually only part of the intensity.”
The true acceleration, he suggests, will begin when AI is turned back onto itself, creating a recursive loop of improvement. “What will we, the collective ‘we,’ want to use these AIs for? To do more work, grow the economy, do R&D, do AI research. Then the rate of progress will become really, extremely fast for some time, at least,” he explains. Sutskever admits that grappling with this future is profoundly difficult. “These are such extreme, unimaginable things… It’s very difficult to internalize and to really believe on an emotional level. I’ve struggled with it, and yet the logic seems to dictate that this very likely should happen.”
Sutskever’s argument boils down to a powerful, reductionist premise: if the human brain is a physical system governed by biological processes, then it is, in essence, a highly complex computational device. There is no magic, no ethereal soul in the machine, only intricate wiring and firing neurons. Therefore, a sufficiently advanced digital computer, unconstrained by the slow pace of biological evolution, should not only be able to replicate its functions but ultimately surpass them. This viewpoint positions AGI as an engineering problem to be solved, rather than a philosophical impossibility.
This conviction is more than academic; it is driving tangible action at the highest levels of the tech industry. Sutskever, who was at the center of the 2023 leadership turmoil at OpenAI, reportedly over his concerns about the pace of development versus safety, recently announced the formation of a new company, Safe Superintelligence Inc. (SSI). The firm’s singular goal is to pursue “safe superintelligence” in a focused research environment, insulated from the commercial pressures that drive companies like Google, Microsoft, and even his former home, OpenAI.

The move underscores how seriously Sutskever takes his own predictions: he believes the creation of superintelligence is so consequential that it demands a dedicated, safety-first effort, separate from the race for product deployment. His perspective serves as a stark reminder that for the architects of our AI future, the journey toward machines that can do everything we can is not just a thought experiment; it appears to be an active mission.