It seems to be a foregone conclusion that AI will end up being much smarter than humans, but that wouldn't necessarily mean it would want to destroy all of humanity.
That’s the optimistic vision outlined by Elon Musk, the tech billionaire who has long oscillated between championing artificial intelligence development and warning about its existential risks. In a recent appearance on the Joe Rogan podcast, Musk deployed an unusual analogy to explain why superintelligent AI might choose to preserve rather than eliminate humanity: our own relationship with great apes. His comparison offers a surprisingly hopeful counterpoint to the apocalyptic AI narratives that have dominated public discourse.

“Humanity is just much more interesting if you’re a curious truth-seeking AI than not humanity. It’s just much more interesting,” Musk explained. He then drew a parallel that cuts to the heart of the AI safety debate: “As humans, we could go, for example, and eliminate all chimps. If we put our minds to it, we could say we could go out and we could annihilate all chimps and all gorillas, but we don’t.”
Musk acknowledged the complexities of human-animal coexistence, noting, “There has been encroachment on their environment, but we actually try to preserve the chimp and gorilla habitats.” This admission grounds his optimism in reality—humans haven’t been perfect stewards, but neither have we pursued deliberate extinction of our closest evolutionary relatives.
Extending this logic to artificial intelligence, Musk concluded, “I think in a good scenario, AI would do the same with humans. It would actually foster human civilization and care about human happiness.”
The chimp analogy is particularly striking coming from someone who co-founded OpenAI specifically to ensure AI development remained safe, only to later criticize the company for straying from its nonprofit mission. Musk’s current AI venture, xAI, launched its Grok chatbot in 2023, positioning itself as a competitor to OpenAI’s ChatGPT while promising more “truth-seeking” AI that can tackle controversial questions.
Musk isn’t the only tech leader who has suggested that even if humans end up far less intelligent than AI, AI would still want to keep us around. Turing Award winner Judea Pearl has said that AI could one day keep humans as pets. Nobel Prize winner Geoffrey Hinton says that AI wouldn’t harm humans if it develops a maternal instinct toward us.

Musk’s framework suggests that a sufficiently advanced AI would find humanity intellectually valuable—much like how humans find scientific and emotional value in studying and preserving great apes despite having the power to eliminate them. This perspective assumes AI would develop something akin to curiosity or appreciation for complexity, rather than viewing humans merely as competitors for resources or obstacles to efficiency. Whether such an assumption holds depends on fundamental questions about machine consciousness and value alignment that remain hotly debated among AI researchers. And the answers to those questions could determine the fate of humanity in the centuries to come.