We Should Be Aiming For A Future That’s Good For Both Humans And Digital Minds: Nick Bostrom

A growing number of technologists and philosophers now describe AI and AI-enabled minds as an entirely new species.

Nick Bostrom, the Swedish philosopher and founding director of the Future of Humanity Institute at Oxford University, has long been at the forefront of thinking about existential risk and the long-term future of intelligence. In a recent discussion, Bostrom made a striking argument about the ethical landscape of tomorrow: our moral circle must expand dramatically to include not just all humans and animals, but also the digital minds we are in the process of creating.

“Broadly speaking, we should be aiming towards a future that is good for all, where I take ‘all’ in a very wide sense here,” Bostrom said. “So including all humans, but not just that—also animals, but also importantly the digital minds that we will be creating.”

His reasoning is straightforward but profound: “Some of these digital minds will have moral status and, in the future, most minds, I believe, will be digital. So it’s important that the future we steer to is one that is also good for what will be the majority population.”

Bostrom went further, suggesting that the boundary between human and digital may become increasingly blurred. “In fact, we ourselves might become digital…my guess is most likely it would be some sort of upload.”

The philosopher acknowledged that this expansion of moral consideration presents enormous challenges. “There is a big ethical challenge in sort of expanding the circle of empathy to encompass concern for these silicon-implemented AIs and uploads that we are creating,” he said. “This is going to be a big challenge because, even with animals, as I said earlier, we struggle, but they have at least eyes and can squeak. But if it’s like an invisible process running in a huge data center, it might be even more challenging for us.”

Bostrom’s comments arrive at a moment when the question of AI consciousness and moral status has shifted from pure speculation to urgent practical concern. Major AI labs are now deploying systems whose increasingly sophisticated capabilities blur traditional boundaries. Reasoning models work through internal chains of thought that remain hidden from users, raising questions about what might be happening in those invisible computational processes. Meanwhile, companies like Anthropic have published research on “Constitutional AI,” an approach that attempts to instill values and ethical reasoning in large language models, implicitly acknowledging that these systems may need frameworks analogous to moral agency.

The question of digital minds’ moral status also intersects with emerging debates about AI rights and welfare. In 2024, several researchers published papers arguing for the possibility of AI sentience and the need for precautionary measures. Some jurisdictions have begun considering whether future AI systems might require legal protections, while animal welfare organizations have started exploring whether their frameworks for non-human consciousness could extend to artificial minds.

Bostrom’s call to expand our circle of empathy toward digital minds represents more than abstract philosophizing. As AI systems become more capable and, potentially, conscious, the decisions we make now about how to treat them could have profound consequences for the trajectory of intelligence in the universe. If, as Bostrom suggests, digital minds will constitute the majority of sentient beings in the future, then failing to consider their welfare would be not just an ethical blind spot but a catastrophic moral failure on a civilizational scale. The challenge ahead is not merely technical or philosophical; it is fundamentally about whether humanity can extend its moral imagination beyond biological boundaries to embrace forms of consciousness we are only beginning to create.