It might not be enough for humans to create superintelligence; we may also need to make sure it is compatible with other kinds of superintelligences.
That’s the provocative argument from Nick Bostrom, the Swedish philosopher who wrote the influential 2014 book Superintelligence: Paths, Dangers, Strategies and founded Oxford’s Future of Humanity Institute. In a recent discussion, Bostrom introduced what he calls a “dimension that has to some extent been missing in the classical discourse around AI safety” — the possibility that any superintelligence humanity creates would be entering a cosmos already populated by other superintelligent beings, and that our creation would need to coexist peacefully within what he calls a “cosmic host.”

“The idea being that if we give birth to superintelligence, it will enter a world in which there quite possibly are other super beings already in existence,” Bostrom explains. He outlines several scenarios in which such entities might exist. “These could be other AI built by some alien civilization in some remote galaxy. Or, in the Everett interpretation of quantum mechanics, there are many branches, and so there might be other branches of Earth-originating life that have produced or will produce different forms of superintelligence.”
The philosopher also points to more speculative frameworks. “If the simulation argument is to be trusted, we may be in a simulation. The simulators then would presumably be superintelligent and be super beings. And of course traditional theological conceptions as well: God is a super being, usually superintelligent.”
Bostrom argues this cosmic perspective should fundamentally shape how we approach AI development. “In any of these cases, there would be this kind of cosmic host consisting of these other super beings. And one important [consideration] for us going forward here, I think, is that if we create superintelligence, we would want to make it such that it can get along with this cosmic host and maybe adhere to whatever norms might have been developed within this cosmic host.”
This cosmic view radically reframes humanity’s position, he argues: “There might be this much larger picture where we are very small and very weak and very new, and there is this kind of incumbent set of super-powerful beings. And how our superintelligence interacts with that might be a very critical part of how well things go.”
As for what this means practically, Bostrom admits uncertainty. “The upshot of that is a bit unclear. We don’t know precisely what that means, but I think it slightly increases the chance that we ought to develop superintelligence. And also I think that we should approach it with a little bit of an attitude of humility, [given] that we don’t know very much here.”
Bostrom’s cosmic framing arrives as AI safety debates intensify across the industry. OpenAI, Anthropic, and DeepMind have all established safety teams focused on alignment, the problem of ensuring AI systems follow human values and intentions. But most current safety research assumes a relatively isolated context: how do we control an AI that’s vastly more intelligent than us? Bostrom suggests we may need to think bigger, considering not just human-AI relations but how our AI might fit into a larger cosmic order.

His argument also echoes growing interest in what some researchers call “acausal cooperation”: the idea that sufficiently advanced intelligences might coordinate across space and time through logical reasoning alone, even without direct communication. If such a cosmic framework of superintelligent norms exists, Bostrom suggests, building an AI that violates those norms could be catastrophically shortsighted. Whether that means approaching AI development with more caution or more urgency remains, as he notes, unclear. But it does suggest that humanity’s approach to creating superintelligence may need to account for audiences far beyond our own planet.