AI Could Create Systems That Humans Don’t Understand, Like Horses Don’t Understand Money: Yuval Noah Harari

AI is rapidly becoming more capable, and it may soon reach a stage where humans no longer have the cognitive capacity to understand what it is doing.

Renowned historian and philosopher Yuval Noah Harari, author of international bestsellers like Sapiens and Homo Deus, is known for his sweeping analyses of humanity’s past, present, and potential future. In a recent reflection on the artificial intelligence revolution, Harari offered a compelling and characteristically sobering perspective: AI could create systems so complex that humans become to them what horses are to the concept of money – entirely oblivious to the mechanisms governing their existence.


Harari began by advocating for a nuanced approach to the AI revolution: “I think that the basic attitude towards the AI revolution should be one that avoids the extremes: either being terrified that AI is coming and will destroy all of us, but also to avoid the extreme of being overconfident that AI will improve medicine and will improve education, it’ll create a good world.”

He emphasizes the need for a “middle path,” starting with a fundamental appreciation of the scale of the impending transformation. “We need a middle path of, first of all, simply understanding the magnitude of the change we are facing. All the previous revolutions in history pale in comparison with this revolution because, throughout history, every time we invent something, human beings are still making all the decisions.”

To illustrate the shift away from human-centric decision-making, Harari pointed to a fascinating, if potentially alarming, development in the financial and cultural spheres: “So, for instance, in the financial system. I just recently read an article about an AI that created a religion, wrote a holy book of the new religion, and also created or helped to spread a new cryptocurrency. And it now has, in theory, $40 million—this AI.”

This example serves as a springboard for a critical question about AI’s future autonomy and its potential impact on systems we currently control: “Now, what happens if AIs start to have money—start to have money of their own—and the ability to make decisions about how to use it? If they start investing money in the stock exchange?” Harari posits. “Suddenly, to understand what is happening in the financial system, you need to understand not just the ideas of human beings; you also need to understand the ideas of AI. And AI can create ideas which will be unintelligible to us.”

The crux of Harari’s warning lies in this potential for an intellectual chasm, using a powerful analogy: “The horses could not understand human ideas about money. I can sell you a horse for money; the horse doesn’t understand what is happening because the horse doesn’t understand money. The same thing might happen now, but we will be like the horses. The horses and elephants cannot understand the human political system or the human financial system that controls their destiny. Decisions about our lives will be made by a network of highly intelligent AIs that we simply can’t understand.”

Implications of an Unintelligible Future

Harari’s concerns touch upon a fundamental aspect of control and comprehension. If AI develops systems—financial, logistical, perhaps even social or political—that operate on principles beyond human understanding, our ability to oversee, regulate, or even predict their behavior diminishes significantly. Imagine a global financial market largely driven by autonomous AI agents executing vast volumes of transactions based on strategies and data patterns too complex or alien for human economists to grasp. The stability and fairness of such a system would be difficult to ensure.

This isn’t merely a far-fetched sci-fi scenario. We already see glimpses of this in high-frequency trading algorithms, where buy and sell orders are executed in microseconds based on complex models that are often “black boxes” even to their creators. The “Flash Crash” of 2010, where the stock market inexplicably plummeted and then recovered within minutes, was largely attributed to the interplay of automated trading systems. As AI’s capabilities grow, the scale and complexity of such opaque systems could expand dramatically.

Echoes in Current AI Developments

The trend towards increasingly complex and less interpretable AI models is already visible. Deep learning models, for instance, can achieve remarkable performance in areas like image recognition, natural language processing, and even scientific discovery, but their internal decision-making processes are often notoriously difficult to dissect. Researchers are actively working on “explainable AI” (XAI) to make these systems more transparent, but the challenge grows with the models’ sophistication.

The challenge, as Harari implies, is not to halt AI development but to proceed with caution, humility, and a profound awareness of the transformative power we are unleashing. It calls for robust frameworks, a concerted push for AI interpretability, and a societal conversation about the level of autonomy we are comfortable granting to non-human intelligences, especially when their “thoughts” may operate in realms beyond our own cognitive horizons. The goal is to ensure that as AI evolves, humanity doesn’t inadvertently become the “horse,” subject to forces it no longer comprehends.
