The top AI companies are predicting that AI will lead to rapid new scientific breakthroughs in the coming years, but a respected AI researcher isn’t so sure that’s how things will pan out.
Fei-Fei Li, the pioneering computer scientist who created ImageNet—the dataset that catalyzed the deep learning revolution—and who co-directs Stanford’s Human-Centered AI Institute, has raised pointed questions about the limits of artificial intelligence when it comes to truly groundbreaking scientific discovery. In remarks that challenge the prevailing narrative around AI’s potential, Li draws a sharp distinction between AI’s impressive existing capabilities and the kind of creative, abstract thinking that produces revolutionary insights.

Li begins by acknowledging AI’s superhuman strengths: “First of all, some part of today’s AI is already better than any human. For example, AI’s ability of speaking many different languages, translating between dozens and dozens of languages—pretty much no human can do that—or AI’s ability to calculate things really fast, AI’s ability to know from chemistry to biology to sports, the vast amount of knowledge. So it’s already super to human in many ways.”
But then comes the crucial caveat. “It remains a question that can AI ever be Newton? Can AI ever be Einstein? Can AI ever be Picasso?” Li said. “I actually don’t know.”
To illustrate her skepticism, Li offers a thought experiment that strikes at the heart of scientific discovery: “For example, we have all the celestial data of the movement of the stars that we observe today. Give that data to any AI algorithm, it’ll not be able to deduce Newtonian law of motion. That ability that humans have—it’s the combination of creativity, abstraction. I do not see today’s AI or tomorrow’s AI being able to do that yet.”
Li’s comments arrive at a moment when AI companies are making increasingly bold claims about artificial intelligence’s potential to accelerate scientific progress. OpenAI has framed its mission around developing artificial general intelligence that could unlock solutions to humanity’s greatest challenges and says that it will be able to create an automated AI researcher by 2028, while Google DeepMind has already demonstrated AI systems capable of predicting protein structures and discovering new materials. Yet Li’s observation about Newton’s laws highlights a fundamental gap: current AI systems excel at pattern recognition within existing frameworks but struggle with the kind of paradigm-shifting conceptual leaps that define the greatest scientific breakthroughs. Where AI can process vast datasets and identify correlations, the creative abstraction required to formulate entirely new physical laws—to see an apple fall and imagine universal gravitation—remains distinctly human territory. As the AI industry races toward ever-more-capable systems, Li’s perspective serves as a reminder that raw computational power and knowledge accumulation may not be sufficient substitutes for the spark of creative insight that has driven science’s greatest revolutions.