Why Building Human-level AI Could Inevitably Lead To Superintelligence

While there is a wide range of opinions on when AI will become superintelligent, most people seem to agree that it will reach human-level intelligence fairly soon. And once AI systems become as smart as humans, the creation of superintelligence may follow almost inevitably.

Recently, on the Joe Rogan Experience podcast, Jeremie Harris, CEO of Gladstone AI, offered a compelling argument for why achieving human-level artificial intelligence could quickly pave the way for superintelligence. His perspective hinges on AI's ability to conduct AI research, effectively bootstrapping its own development at an exponential rate. This idea, while seemingly simple, has profound implications for the future of AI and, potentially, humanity.

Harris explains this accelerating trajectory: “If you build a human-level AI, one of the things it must be able to do as well as humans is AI research itself…or at least the parts of AI research that you can do in software by coding.” He continues, highlighting the automation potential: “So, one implication of that is you now have automated AI researchers. And if you have automated AI researchers, that means you have AI systems that can automate the development of the next level of their own capabilities.”

This self-improving cycle, Harris notes, leads directly to the often-discussed concept of the Singularity: “Now you’re getting to that whole Singularity thing where it’s an exponential that just builds on itself and builds on itself.” This exponential growth in AI capabilities, driven by its own research efforts, is why many experts believe superintelligence isn’t a distant prospect once human-level AI is achieved. As Harris concludes: “Which is kind of why a lot of people argue that if you build human-level AI, superintelligence can’t be that far away. You’ve basically unlocked everything because we kind of have gotten very close, right?”
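One way to make "an exponential that just builds on itself" precise is a standard toy growth law (an illustrative formulation, not one Harris states on the podcast): let $C(t)$ be AI capability, and assume the rate of research progress is proportional to current capability, since more capable researchers produce improvements faster. Then

$$\frac{dC}{dt} = k\,C \quad\Longrightarrow\quad C(t) = C_0\,e^{kt},$$

so for any positive feedback constant $k$, capability grows exponentially once the loop closes, with a fixed doubling time of $\ln(2)/k$ no matter how capable the system already is.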

It appears to be this line of thinking that has led people to set aggressive timelines for superintelligence, even though AI systems remain quite deficient in many fields at the moment. If AI systems simply become as good as humans at most parts of AI research, the fact that they can be replicated at will and made to work around the clock means they could perform research far faster than humans, who are finite in number and also need rest and recreation. With recent breakthroughs in coding and document understanding, it's not inconceivable that AI systems could soon do everything an AI researcher does. Once that happens, millions of such AI researchers could be inexpensively deployed, and they could come up with novel ideas much faster than humans. This system could then accelerate itself, leading to an explosion of intelligence in a very short period of time. As such, superintelligence timelines don't need to be extrapolated from the current rate of AI progress; all it will take to get there is AI agents that can match humans at AI research.
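To see why sheer replicability matters, here is a minimal sketch in Python of the toy growth model above, extended with a headcount term. Everything here is a made-up illustration, not something from Harris or the podcast: the function name, the parameter values, and the assumption that research output scales linearly with both the number of researchers and their current capability.

```python
# Toy model of recursive AI self-improvement (illustrative assumptions only).
# Growth law (hypothetical): dC/dt = g * N * C, where N is the number of
# researchers and C is capability. Research raises capability, and higher
# capability makes the same researchers produce research faster.

def simulate(researchers: int, capability: float = 1.0,
             gain_per_unit: float = 1e-7, years: int = 10) -> list[float]:
    """Return capability at the end of each simulated year.

    capability == 1.0 means 'human-level'. The capability term appears on
    both sides of the update, so the growth compounds on itself.
    """
    trajectory = []
    for _ in range(years):
        capability += gain_per_unit * researchers * capability
        trajectory.append(capability)
    return trajectory

# A few thousand human researchers vs. a million copyable AI researchers.
human_team = simulate(researchers=1_000)
ai_team = simulate(researchers=1_000_000)
print(f"1k humans after 10 years:    {human_team[-1]:.3f}x human level")
print(f"1M AI agents after 10 years: {ai_team[-1]:.3f}x human level")
```

Under these invented numbers, a thousand human researchers barely move the needle in a decade, while a million cheaply replicated AI agents compound to several times human level. The key design point is the capability term feeding back into its own growth rate, which is exactly the "builds on itself" loop Harris describes.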
