AGI Is More Of A Marketing Term Than A Scientific Term: Fei-Fei Li

The term AGI is thrown around liberally by AI leaders, with entire deals hinging on when it will be reached, yet it may not mean much in scientific terms.

That’s according to renowned AI researcher Fei-Fei Li, who argues that the distinction between AI and AGI (Artificial General Intelligence) may be more about marketing than meaningful scientific differentiation. Li, who served as director of Stanford’s AI Lab and as chief scientist of AI/ML at Google Cloud, has been one of the most influential voices in the field for decades—her work on ImageNet helped spark the deep learning revolution that transformed modern AI. Now, she’s questioning whether the industry’s obsession with AGI represents a genuine scientific milestone or simply clever branding.

“I don’t know if anyone has ever defined AGI,” Li said. “There are many different definitions, including some kind of superpower for machines all the way to can machines become economically viable agents in the society. In other words, making salaries to live. Is that the definition of AGI?”

The ambiguity, she suggests, undermines the term’s usefulness. Li emphasized her scientific approach to the field: “As a scientist, I take science very seriously and I entered the field because I was inspired by this audacious question of can machines think and do things in the way that humans can do. For me, that’s always the north star of AI.”

From that foundational perspective, the AI-versus-AGI distinction becomes murky. “I don’t know what’s the difference between AI and AGI,” Li said. “I think we’ve done very well in achieving parts of the goal, including conversational AI. But I don’t think we have completely conquered all the goals of AI.”

She invoked one of the field’s pioneers to make her point: “I think our founding fathers—Alan Turing—I wonder if Alan Turing is around today and you ask him to contrast AI versus AGI, he might just shrug and say, ‘Well, I asked the same question back in the 1940s.’”

Li’s conclusion was unequivocal: “I don’t want to get onto a rabbit hole of defining AI versus AGI. I feel AGI is more a marketing term than a scientific term as a scientist and technologist.”

Her comments arrive at a moment when AGI has become central to corporate positioning and even contractual agreements in the AI industry. OpenAI’s partnership with Microsoft reportedly includes provisions for when AGI is achieved—defined as systems that can generate $100 billion in profits—while CEO Sam Altman has suggested that OpenAI knows how to build AGI. Meanwhile, DeepMind and other labs have published roadmaps with various AGI milestones, and the term has become a rallying cry for billions in investment and valuations.

Yet Li’s skepticism highlights a deeper tension in the field: as AI systems become more capable across narrow domains, the goalposts for “general” intelligence keep shifting. What once seemed like AGI—machines that can converse, reason, and generate creative content—is now dismissed as “just” AI once achieved. This definitional flexibility may serve corporate narratives about being on the cusp of breakthrough after breakthrough, but it does little to advance scientific clarity. Researchers like Li, who helped build the foundations of modern AI, would rather see the focus remain on the concrete challenges ahead than on chasing an ill-defined marketing buzzword.
