Ilya Sutskever Once Burnt An Effigy To Show That OpenAI Must Destroy Its Own AI Models If They Could Harm Humanity

Ilya Sutskever isn’t just one of the sharpest minds in AI; it seems he has a flair for the dramatic as well.

Former OpenAI Chief Scientist Ilya Sutskever once burnt an effigy in front of OpenAI employees to demonstrate that they had a duty to destroy any models they built that turned out to be harmful to humanity. As per the book Empire of AI by Karen Hao, in September 2022, two months before ChatGPT was released to the world, OpenAI held an offsite for its technical leadership at Tenaya Lodge, a remote luxury resort in the Sierra Nevada. On the first night of the offsite, OpenAI employees gathered around a firepit on the rear patio of the hotel. Senior OpenAI scientists, dressed in bathrobes, stood around the fire in a semicircle.

Sutskever, who was then the Chief Scientist at OpenAI, emerged and placed in the pit a wooden effigy that he’d commissioned from a local artist. He explained that the effigy represented a good and aligned AI that OpenAI had built, only to discover that it was actually lying and deceitful. Sutskever told the assembled technical leadership that it was OpenAI’s duty to destroy it. To drive the point home, he doused the effigy in lighter fluid and set it on fire. The researchers reportedly watched the effigy burn in silence, with ancient redwood trees standing behind them.

It must’ve been a dramatic scene, scientists in bathrobes huddled around a fire watching an effigy burn, but it shows that even before ChatGPT’s release, senior researchers like Ilya Sutskever were aware of the capabilities of what they’d created. When ChatGPT was released two months later, it went viral, but its capabilities at that point wouldn’t have immediately rung alarm bells over AI safety. AI has progressed at breakneck speed ever since, though, and as of 2025 there are real worries among the general public that AI systems might one day go rogue. Ilya Sutskever is no longer at OpenAI, but one would hope that his message of AI safety will be internalized by all frontier AI labs. While governments and regulators will likely play their part, the responsibility for ensuring AI alignment rests first and foremost on the shoulders of the researchers building these very powerful models.
