Nick Bostrom Explains What Will Happen Once Superintelligence Is Reached

Meta and many other labs are explicitly working on creating superintelligence, and philosopher Nick Bostrom has described what he expects to happen once we actually get there.

Nick Bostrom, a leading voice in the field of existential risk and the future of humanity, particularly concerning artificial intelligence, has offered a compelling vision of a world where superintelligence is a reality. In a thought-provoking statement, Bostrom outlines a cascade of technological advancements and societal shifts that could occur following the successful development and alignment of AI surpassing human cognitive abilities. His insights delve into the potential for unprecedented progress and the radical transformation of human existence.

“Say we succeed in developing superintelligence,” Bostrom says. “Let’s assume we solve the alignment problems and at least do a reasonably good job with governance. I think what we then get is a leap towards something approximating technological maturity, where it’s not just that we have advanced AI, but that AI helps us invent all kinds of other super fancy technologies,” he adds.

Bostrom continues by drawing parallels to long-held aspirations of science fiction, suggesting that superintelligence could rapidly accelerate our ability to achieve seemingly fantastical feats. “So all those things that in traditional science fiction, right, you have space colonies and perfect virtual realities, cures for aging, uploading into computers, all those things that we know are compatible with the laws of physics but just way beyond us. But that maybe we, human scientists, could invent if we had 2,000 or 5,000 years to work on them.”

He posits a significant acceleration of progress, stating, “I think those might happen quite soon after the arrival of superintelligence. We then have this very different set of rules of the game within which to try to conceive of what a good human life would be, where many of the constraints, the practical limitations that sort of limit what is possible today, would no longer apply.”

Finally, Bostrom touches upon a fundamental shift in the human condition. “And at the superficial level, the need for economically productive work by humans would go away. Right. And then I think there would be more fundamental unlockings as well, that we can maybe get to.”

The implications of Bostrom’s vision are profound. The arrival of superintelligence, assuming alignment and governance challenges are met, could usher in an era of unprecedented technological abundance. Imagine AI collaborating with human scientists to overcome fundamental limitations in physics, biology, and engineering. The development of sustainable space colonies, highly realistic virtual environments, and effective treatments for aging could become tangible realities much sooner than current projections suggest.

Furthermore, the potential obsolescence of human labor in traditional economic structures raises significant societal questions. While this could free humanity from mundane and repetitive tasks, it necessitates a fundamental rethinking of purpose, value, and societal organization. The concept of a “good human life” would need to be redefined in a world where basic needs are met without widespread human employment. Recent advances in generative AI, which can already produce high-quality content and automate increasingly complex tasks, offer a glimpse of this potential future. They underscore the accelerating pace of AI development and the growing urgency of addressing the ethical and societal implications of advanced artificial intelligence.