For well over a decade, Stack Overflow has been a digital lifeline for programmers: a bustling question-and-answer forum where anyone stuck on a tricky piece of code could post their problem and receive help from a global community. But the arrival of powerful AI tools like ChatGPT is rapidly changing that dynamic, and new research puts a number on just how profound the shift has been.
A study analyzing activity on the platform reveals that the release of ChatGPT triggered a sharp and significant decline in user engagement on Stack Overflow. The findings suggest that developers are increasingly turning to private conversations with AI for answers they once sought from public forums.
A Steep Drop in Posts
The researchers measured the impact of ChatGPT by comparing Stack Overflow’s activity against several similar Q&A platforms that were less likely to be affected. These included math-focused forums where ChatGPT’s capabilities were weaker at the time, as well as Russian and Chinese-language programming sites where access to the AI tool is officially limited.
Using this comparative model, the study estimates that weekly posts on Stack Overflow fell by 16% following ChatGPT’s launch. This effect wasn’t a temporary dip; it intensified over time, reaching a sustained decline of around 25% within six months. In absolute terms, the average number of weekly posts plummeted from roughly 60,000 to 40,000 in that period—a drop that, based on prior trends, would have otherwise taken more than five years.
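The comparison described above works like a difference-in-differences estimate: the change on Stack Overflow is measured against the change on the unaffected platforms over the same period. A minimal sketch of that calculation, using invented weekly post counts (not the study's actual panel data):

```python
# Hypothetical weekly post counts (in thousands), averaged before and
# after ChatGPT's release. Illustrative numbers only, not the study's data.
treated_before = 60.0   # Stack Overflow, pre-release
treated_after  = 45.0   # Stack Overflow, post-release
control_before = 20.0   # comparison platforms (e.g. math, Russian, Chinese Q&A)
control_after  = 19.5   # comparison platforms, post-release

# Difference-in-differences: the treated group's change minus the
# control group's change, expressed relative to the treated baseline.
did = (treated_after - treated_before) - (control_after - control_before)
relative_drop = did / treated_before
print(f"Estimated relative decline: {relative_drop:.1%}")
```

The control platforms absorb any platform-wide trend (seasonality, general decline of forums), so what remains is attributed to ChatGPT's release. The real study estimates this on weekly panel data rather than two averages.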

One might assume that AI is simply weeding out the simpler, repetitive, or lower-quality questions that clog up the platform. However, the study found no evidence to support this. By analyzing user votes—a key indicator of post quality on the site—the researchers observed no significant change in the average scores given to posts after ChatGPT was released. This suggests that ChatGPT isn’t just a substitute for low-effort questions but is displacing a wide variety of content, including high-quality contributions.
The Most Popular Languages Are Hit Hardest
The decline in activity wasn’t uniform across all topics. The study found that the impact was most pronounced for the world’s most popular programming languages, like Python and JavaScript.
There’s a clear reason for this: large language models are trained on vast amounts of public data from the internet, including countless coding projects on platforms like GitHub. The more popular a programming language is, the more training data the AI has, making it a more effective and reliable substitute for human help. The researchers confirmed this by finding a strong negative correlation: the more GitHub repositories existed for a language, the larger the drop in related posts on Stack Overflow.
Interestingly, the study noted one significant exception. Posts related to CUDA, a programming interface essential for building and running AI models, saw a slight increase in activity. This anomaly highlights the dual impact of the AI boom: while it reduces the need for help on established technologies, it simultaneously sparks new questions and discussions about the tools required to power the AI revolution itself.
A Threat to Our “Digital Public Goods”?
This shift from public forums to private AI chats has broader implications beyond a single website’s traffic. Stack Overflow is a prime example of a “digital public good”—a shared, open-access knowledge resource built by a community. For years, these resources have not only helped humans learn but have also provided the essential training data for the very AI models that now threaten their existence.
The irony is potent. An AI trained on the open web is now causing users to retreat from it. Every question answered by ChatGPT is a private interaction, with the data belonging exclusively to its parent company, OpenAI. This prevents new knowledge from entering the public domain, where it could be used to train future models from competing organizations.
Researchers warn this could create a dangerous feedback loop. Training new AIs on content generated by older AIs is like “making a photocopy of a photocopy”—each iteration becomes progressively less accurate and more distorted. If human-generated data dries up, the progress of AI itself could stall, as models run out of the fresh, high-quality information they need to learn and improve.
While AI assistants offer undeniable gains in individual productivity, this study suggests those gains come at a cost. The vibrant, open exchange of knowledge that defined the internet for decades is diminishing. As we increasingly turn to private AI for answers, we risk starving the public commons that made these powerful tools possible in the first place.