There are concerns that AI systems could become self-improving, and that this could lead to a rapid rise in AI capabilities, but an OpenAI researcher believes this won’t necessarily be the case.
OpenAI researcher Jason Wei has said that self-improving AI hasn’t been created yet, and even if it does end up being made, it won’t rapidly self-improve. Wei currently works at OpenAI, where he has contributed to the o1 and deep research models, and previously worked at Google Brain from 2020 to 2023.

“We don’t have AI that self-improves yet, and when we do it will be a game-changer,” Wei posted on X. “With more wisdom now compared to the GPT-4 days, it’s obvious that it will not be a ‘fast takeoff’, but rather extremely gradual across many years, probably a decade,” he added.
“The first thing to know is that self-improvement, i.e., models training themselves, is not binary. Consider the scenario of GPT-5 training GPT-6, which would be incredible. Would GPT-5 suddenly go from not being able to train GPT-6 at all to training it extremely proficiently? Definitely not. The first GPT-6 training runs would probably be extremely inefficient in time and compute compared to human researchers. And only after many trials would GPT-5 actually be able to train GPT-6 better than humans,” he added.
“Second, even if a model could train itself, it would not suddenly get better at all domains. There is a gradient of difficulty in how hard it is to improve oneself in various domains. For example, maybe self-improvement only works at first on domains that we already know how to easily fix in post-training, like basic hallucinations or style. Next would be math and coding, which takes more work but has established methods for improving models. And then at the extreme, you can imagine that there are some tasks that are very hard for self-improvement. For example, the ability to speak Tlingit, a Native American language spoken by ~500 people. It will be very hard for the model to self-improve on speaking Tlingit as we don’t have ways of solving low-resource languages like this yet except collecting more data which would take time. So because of the gradient of difficulty-of-self-improvement, it will not all happen at once,” he continued.
“Finally, maybe this is controversial but ultimately progress in science is bottlenecked by real-world experiments. Some may believe that reading all biology papers would tell us the cure for cancer, or that reading all ML papers and mastering all of math would allow you to train GPT-10 perfectly. If this were the case, then the people who read the most papers and studied the most theory would be the best AI researchers. But what really happened is that AI (and many other fields) became dominated by ruthlessly empirical researchers, which reflects how much progress is based on real-world experiments rather than raw intelligence. So my point is, although a super smart agent might design 2x or even 5x better experiments than our best human researchers, at the end of the day they still have to wait for experiments to run, which would be an acceleration but not a fast takeoff,” he said.
“In summary there are many bottlenecks for progress, not just raw intelligence or a self-improvement system. AI will solve many domains but each domain has its own rate of progress. And even the highest intelligence will still require experiments in the real world. So it will be an acceleration and not a fast takeoff,” Wei said.
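Wei’s last point, that smarter agents still have to wait for real-world experiments, has an Amdahl’s-law flavour: speeding up only the design part of a research cycle caps the overall gain. Below is a minimal back-of-the-envelope sketch of that arithmetic. It is an editorial illustration, not code or numbers from Wei’s post, and the 30/70 split between design time and experiment wall-clock time is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope illustration (not from Wei's post): if a research cycle
# splits into "design/analysis" time that a smarter agent can speed up and
# fixed wall-clock time waiting for experiments to run, the overall speedup
# is capped by the experiment fraction, Amdahl's-law style.

def overall_speedup(design_fraction: float, design_speedup: float) -> float:
    """Speedup of one research cycle when only the design portion gets faster.

    design_fraction: share of a cycle spent on design/analysis (assumed value).
    design_speedup:  how much faster the agent is at that portion (e.g. 5x).
    """
    experiment_fraction = 1.0 - design_fraction
    return 1.0 / (experiment_fraction + design_fraction / design_speedup)

if __name__ == "__main__":
    # Assume, purely for illustration, that 30% of a cycle is design work
    # and 70% is waiting for training runs or lab experiments to finish.
    for speedup in (2, 5, 100):
        print(f"{speedup}x better at design -> "
              f"{overall_speedup(0.3, speedup):.2f}x faster overall")
```

Under these assumed numbers, even an agent 100x better at designing experiments speeds up the whole research loop by only about 1.4x, which matches the shape of Wei’s “acceleration, not a fast takeoff” claim.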
Wei’s statement comes at a time when self-improvement is thought to be within reach in the AI space. Matej Balog, a Research Scientist at Google DeepMind, has said that they’re seeing the first signs of self-improvement in AI systems. OpenAI CEO Sam Altman has said that he thinks a fast takeoff is more plausible than he believed two years ago. A fast takeoff implies a rapid growth in AI abilities: AI systems would not be limited by the constraints of memory and time that human researchers face, and could likely brute-force new breakthroughs. But not everyone believes that a fast takeoff will happen. Microsoft CEO Satya Nadella has said that he thought a fast takeoff was unlikely, largely because human intervention in societal matters would naturally decelerate AI progress. And with an OpenAI researcher now saying that a fast takeoff is unlikely for technical reasons as well, it appears that both sides of the fast takeoff debate have solid arguments.