Why Dijkstra Didn’t Want Programmers To Use The Term ‘Bug’

For decades, programmers have called the parts of their programs that don’t quite work ‘bugs’, but one programming legend believed the term didn’t fit.

Edsger W. Dijkstra — the Dutch computer scientist who gave the world Dijkstra’s shortest path algorithm and helped lay the mathematical foundations of structured programming — had a pointed view on the language programmers use to describe their mistakes. In a lecture, he made a simple but sharp request: stop calling errors “bugs.” The word, he argued, wasn’t just imprecise. It was an act of evasion.

“Having seen how we can convince ourselves that programs indeed are totally correct,” Dijkstra began, “please realise that if you have written a program and it’s not correct, it is a little bit cowardly to call errors ‘bugs.’”

He didn’t stop at calling it cowardly. He went further, diagnosing exactly why the word was dangerous:

“Calling errors ‘bugs’ is a very primitive, animistic attitude. It suggests that the bug has a life of its own, and that you’re not totally responsible for it. That the mean little bug crept in behind your back, at the moment you were not looking.”

The image Dijkstra conjures is vivid and deliberate: the programmer as an innocent bystander, victimised by a creature that sneaked into their code unbidden. That framing, he insisted, was not just wrong but dishonest.

“This is not true,” he said flatly. “If the program is not correct, you made an error.”

His closing was a plea dressed as a demand:

“My request — my prayer, so to speak — is that you stop using the term ‘bugs’ for program errors, but call them what they are: errors.”

The distinction matters more than it might appear. When a program fails, calling the failure a “bug” subtly distributes the blame — to the machine, to chance, to some ambient chaos in the software universe. Calling it an error puts it squarely where Dijkstra believed it belonged: with the programmer. As he wrote elsewhere, the animistic metaphor of a bug maliciously sneaking in while the programmer was not looking “is intellectually dishonest, as it disguises that the error is the programmer’s own creation.” He also noted a practical consequence of the wording: a program with one “bug” used to be considered “almost correct,” whereas a program with an error is simply wrong.

This is not merely a semantic argument. It reflects Dijkstra’s broader conviction that programming is, at its core, a mathematical discipline — one that demands rigour, ownership, and intellectual honesty. He believed it was “not only the programmer’s responsibility to produce a correct program but also to demonstrate its correctness in a convincing manner.” Sloppy language, in his view, enabled sloppy thinking.

The irony is that the industry has moved in the opposite direction. As AI coding tools generate more and more of the code in production systems, the question of who owns an error becomes murkier, not clearer. Microsoft CTO Kevin Scott has predicted that 95% of code will be AI-generated within five years, while Anthropic CEO Dario Amodei has suggested that 90% of coding could be done by AI within months. When a model writes the code and the code fails, is that a bug — or an error? And whose?

The question isn’t academic. As AI software engineers take on increasingly autonomous roles, there is a real risk that accountability dissolves entirely — hidden behind the very animism Dijkstra warned against, now upgraded from folklore to machine learning. Calling an AI’s faulty output a “hallucination” or a “bug” does the same rhetorical work as it always did: it lets someone off the hook.

Dijkstra’s prayer, it turns out, was not just about words. It was about whether programmers — and now the companies deploying AI in their place — are willing to own what they build.