AI may end up being the most consequential technology in human history, yet the people building it seem to treat it as something more than a technology.
Karen Hao, an award-winning journalist who has spent considerable time interviewing those at the forefront of AI development, suggests that the pursuit of Artificial General Intelligence (AGI) has transcended mere technological ambition, evolving into something akin to a spiritual quest for its creators. Her analysis points to a fervent, almost devout, atmosphere gripping Silicon Valley’s AI labs.

In a candid reflection on her research, Hao stated, “It is exactly right to think of this as a quasi-religious movement. One of the biggest surprises when I was reporting on the book was how much of a quasi-religious atmosphere is surrounding AI development and has gripped the minds of the people within Silicon Valley who are working on this.”
This “religion of AGI,” as Hao describes it, isn’t monolithic. Instead, it harbors a stark dichotomy, a schism within its nascent congregation. “There are two sides of this religious movement, all within the religion of AGI,” she explains. “One side is saying AGI will bring us utopia, and the other one is saying AGI will kill us all.”
What sets this belief system apart, according to Hao, is its unique theological structure. Unlike traditional religions that posit a higher power, the creators of AGI see themselves as architects of the divine. “It takes religious rhetoric to a different level in that you don’t believe in a God that is higher than you; you believe you are creating the God,” Hao observes.
This conviction, she found, is far from metaphorical for many involved. “The thing that was surprising was I thought this was originally rhetoric, and it’s not for many people. For many people, it is a genuine belief that this is what they are doing; this is their purpose,” Hao revealed. This profound sense of purpose is particularly intense among those who fear the catastrophic potential of AGI. “Especially for people in the ‘doom’ category, the people who believe AGI will kill humanity. I interviewed people who had very sincere emotional reactions when talking to me about the possibility that this could happen.”
The psychological weight of such beliefs is immense. Hao empathizes with the burden carried by these individuals: “When you put yourself in the shoes of people who genuinely think that they are creating God or the devil, that is an enormous burden to bear. I think people really do cave under that pressure.”
The implications of this quasi-religious fervor are manifold. If the creators of AGI genuinely see themselves as either birthing a utopian future or potentially unleashing an existential threat, that belief will profoundly shape their decision-making, their development pace, and their ethical calculus. Such a mindset can foster an "ends justify the means" culture or, conversely, lead to paralyzing caution. It also raises the risk of insular thinking within these groups, making external, objective oversight more challenging yet more critical. For investors and businesses reliant on AI advancements, understanding this underlying current is crucial for gauging the potential — and the peril — embedded in the technology's trajectory.