Technologists have been talking about the technical aspects of AI, but one prominent historian is viewing it in an entirely different way: as a new species.
Yuval Noah Harari, the author of the best-selling book ‘Sapiens: A Brief History of Humankind’, recently expressed his concerns about the rapid advancement of artificial intelligence. He framed the development of AI not merely as technological progress, but as the potential birth of a new species capable of planetary dominance. His perspective emphasizes the crucial need for broader public engagement in the decisions shaping this powerful technology, decisions he believes are currently concentrated in the hands of a select few.

“For me,” Harari said, “the most important thing at this moment is simply to get more people to understand what is happening (with AI) and to join the debate.” He underscores the gravity of the situation: “There is a debate right now about what to do with AI, maybe the most important debate in history.”
The core of Harari’s argument lies in the following assertion: “The decisions about developing AI are really decisions about developing a new species that might take over the planet. At the present moment, the key decisions are being made by a very small number of people in just a few countries, mostly the United States and China, because they are the only ones who really understand what is happening.”
Harari emphasizes his mission to democratize this crucial conversation: “My aim…is to make more people around the world understand what is happening so they can also join the debate and voice their opinions, their views…in the hope that with more people engaged we can make better decisions. Whether this will happen or not, this is what we will see in the future.”
Harari’s warning comes at a time when AI is evolving rapidly, with new breakthroughs emerging constantly. From generative AI tools like ChatGPT and DALL-E 2 demonstrating creative abilities, to AI’s increasing use in automation and decision-making, the technology is permeating every aspect of our lives. His analogy of AI as a new species is powerful, suggesting that AI could evolve beyond our control, developing goals that might not align with humanity’s. The concentration of power in the hands of a few corporations and nations exacerbates this concern, raising questions about the ethical implications and long-term consequences of such rapid, unchecked development.
The displacement of human workers, the spread of misinformation through AI-generated content, and the prospect of autonomous weapons systems are just some of the immediate anxieties surrounding AI. Harari’s call for wider public engagement in the “most important debate in history” is therefore not just philosophical musing but a practical necessity. If AI is indeed akin to a new species, then humanity, as the “older” species, must decide now what kind of relationship we want to forge with this nascent intelligence before it reshapes the planet, and our place on it, forever.