US Citizens Trust Their Govt To Responsibly Regulate AI The Least, Singapore The Most: Stanford Study

The US may be producing the best AI models in the world, but its citizens don't expect their government to be nearly as effective at regulating the new technology.

According to the 2026 AI Index report from Stanford University, based on an Ipsos survey of 31 countries, only 31% of Americans trust their government to responsibly regulate AI — the lowest of any country polled. At the other end of the spectrum, Singapore tops the chart at 81%, followed by Indonesia (76%) and Malaysia (73%). The global average sits at 54%, meaning the US trails the worldwide benchmark by 23 percentage points.

The findings are striking given that the US continues to dominate AI development. American labs — OpenAI, Anthropic, Google DeepMind — remain at the frontier of model capabilities, and tech’s contribution to US GDP hit an all-time high in 2025. Yet American citizens appear deeply skeptical that their government can handle the governance side of that boom.

The skepticism likely has roots in how Washington has engaged with AI so far. The Trump administration revoked the Biden-era AI executive order early in its term, signaling a preference for deregulation over oversight. More recently, the government's handling of AI in sensitive national security contexts drew sharp attention — including the episode in which the administration designated Anthropic a "supply chain risk" after the company refused to allow its models to be used for certain military applications, only for Anthropic to be replaced by OpenAI under largely identical terms.

The contrast with Southeast Asia is notable. Singapore, Indonesia, Malaysia, and Thailand — all in the top four — operate in a region where governments have moved proactively and constructively on AI frameworks. Singapore in particular has cultivated a reputation as a thoughtful technology regulator, and its researchers are among the most prolific contributors to global AI conferences, suggesting the public sees government engagement with AI as genuine rather than performative.

Among the major economies, the pattern is fairly consistent: developed Western democracies cluster at the bottom. Great Britain sits at 39%, Canada at 40%, France at 42%, and Germany and Belgium at 49%. Japan, despite being a significant technology power, comes second-to-last at 32%. These numbers reflect a broader erosion of institutional trust in wealthy nations — a trend that predates AI but is being tested acutely by it. Studies have already shown that lower-income people are significantly more worried than wealthier people about being left behind by the technology, which speaks to a wider anxiety about who AI is actually working for.

Interestingly, some of the countries with higher trust scores — Chile (67%), Colombia (66%), India (65%), Argentina and Poland (61% each) — are emerging economies where governments have taken visible stances on AI adoption and digital infrastructure. India, for instance, has been actively courting AI investment, with OpenAI reportedly planning a major datacenter in the country. Visible government engagement may be translating into public confidence.

The implications of this trust gap are significant. As AI accelerates — and leaders like Sam Altman have warned that the pace of change is moving faster than even insiders expected — effective governance will require not just the right policies but public buy-in. A government that its citizens don’t trust to regulate AI will face mounting pressure to either overreact with blunt legislation or underreact while harms accumulate. Neither outcome is good for innovation.

For American AI companies, there is both a risk and an opportunity in this data. The risk: a low-trust environment can trigger populist or politically motivated regulation. The opportunity: industry players who proactively engage with governance discussions, demonstrate safety practices, and support thoughtful oversight have a real chance to shape the framework before it is imposed on them. The data from Stanford suggests that window may not be open forever.
