America Leads the World in AI Skepticism


We are in the middle of an AI revolution. I don’t mean that in a metaphorical sense but an economic one: AI is being adopted everywhere. ChatGPT, the fastest-growing technology app in history, now reports 700 million weekly active users and is the 5th most visited website in the world. A global survey out of Ipsos found that a majority of people felt the technology had “profoundly changed their daily life.”

Global opinion, however, is mixed. That survey also found a majority of participants felt both excited and nervous about AI. Other studies revealed the same thing: A world balanced between optimism and pessimism, or perhaps a world holding both simultaneously.

That balance is not evenly distributed. It might be more accurate to say it is polarized.

The chart below displays global AI opinion, specifically the percentage of respondents by country who indicate that they are ‘excited’ and ‘nervous’ about AI products and services. Southeast Asia is upbeat, but the Anglosphere leads the pack in skepticism.1

This is no outlier result. A separate US poll from Pew found only 17% say AI’s impact on the U.S. will be positive over the next 20 years, and only 6% are convinced AI will make humans happier. The study also found we are more concerned than excited about AI (51%), more likely to call its impact on the U.S. negative than positive (35% vs. 17%), and more worried about under-regulating it than over-regulating it (58% vs. 21%).

In the global context, the country with the most advanced AI technology2 defines the frontier for the anti-AI quadrant, an especially striking visual when compared to China’s position.

Why are the two great AI powers on opposite sides of AI opinion?

A global KPMG–University of Melbourne survey aimed to answer this question. Using a statistical technique called structural equation modeling, the researchers estimated that the greatest driver of AI acceptance was ‘Trust in AI Systems,’ and they identified four key drivers of that trust: risk concerns, literacy, personal benefit, and institutional confidence.
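To make the analysis concrete: a full structural equation model fits a system of latent variables and paths, but its closest simple analogue is a standardized multiple regression of trust on the four driver scores, where the coefficients play the role of path weights. The sketch below uses entirely made-up data (the weights are illustrative assumptions, not KPMG’s estimates) just to show the shape of the computation.

```python
# Toy sketch of a driver analysis: regress trust in AI on four driver
# scores. All data and weights below are fabricated for illustration;
# they are NOT the KPMG survey's actual estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated per-respondent driver scores (already roughly standardized)
risk_concern = rng.normal(size=n)
literacy = rng.normal(size=n)
personal_benefit = rng.normal(size=n)
inst_confidence = rng.normal(size=n)

# Build a trust score from the drivers, weighting institutional
# confidence heaviest (mirroring the article's qualitative claim)
trust = (-0.15 * risk_concern + 0.11 * literacy
         + 0.30 * personal_benefit + 0.55 * inst_confidence
         + rng.normal(scale=0.5, size=n))

X = np.column_stack([risk_concern, literacy, personal_benefit, inst_confidence])
# Standardize predictors and outcome so coefficients are comparable,
# like standardized path weights in an SEM
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (trust - trust.mean()) / trust.std()

coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, b in zip(["risk", "literacy", "benefit", "institutions"], coef):
    print(f"{name:>12}: {b:+.2f}")
```

On this toy data the recovered coefficients echo the construction: institutional confidence dominates, risk concern pulls mildly negative. The real study's modeling is richer, but the intuition — compare standardized weights to rank the drivers — is the same.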

Before getting into the prime drivers, I will briefly describe the risk and literacy categories.

The study found that the more the citizens of a country were concerned about various AI-related risks (e.g. cybersecurity, job loss, environmental impact), the less that country trusted AI systems. This is a very intuitive finding, so I will only note that the effect is weak and that America became relatively less worried when asked about specific risks (e.g. cybersecurity) rather than general ‘concern.’

The AI literacy metric comprised three subdomains: AI knowledge, efficacy, and training. ‘Knowledge’ reflects whether the participant feels that they understand when and where AI is used, while ‘efficacy’ assesses their perceived ability to use the tools ‘responsibly and effectively.’ Training directly asks whether the participant has been taught AI skills through their school or employer. To my continued surprise, the US ranked only 34th out of 47 countries on this metric. This may reflect participants’ perception of AI literacy more than true literacy —I would be interested to see the results of an international test on AI literacy— but the pattern held for the more objective metric of AI training.

Yet while America’s lack of AI literacy may contribute to our pessimism, the effect is modest (correlation of .11). To really explain American anxiety, we need to look to the last two categories.

1. Motivational Skepticism

Individuals who believed they would personally benefit from AI were significantly more likely to display optimism about the technology. This intuitive finding also helps explain U.S. pessimism. We are a country of services, particularly the kind of white-collar services AI is best poised to automate away. A model that can replace software engineers and consultants while leaving construction, textiles, and materials untouched is one that disproportionately uplifts emerging economies.

Furthermore, AI may democratize access to information and tools simply inaccessible in developing economies. The CEO of Microsoft, when asked about the most underhyped aspect of AI development, pointed towards the ability of a local Indian farmer to learn how to get an agricultural subsidy. To take a contrasting pair of examples: A recent study out of MIT made headlines with a discovery that using ChatGPT as an aid was actively detrimental to learning outcomes. Yet another study found that a 6-week intervention with AI tutors in Ghana led to two years’ worth of learning. Tools that made no impact in America may be a huge boon for countries less well-off.

I believe this narrative, that Americans have less to gain and more to lose from AI, plays a significant role in American hesitance. Yet the last factor stands out as most compelling.

2. Institutional Skepticism

Of all the indicators KPMG considered, the strongest by far were the institutional metrics. Specifically:

  • Safeguards, or the belief that current laws, rules, and governance are sufficient to ensure AI use is safe; and

  • Confidence in entities to develop and use AI in the best interests of the public.

These metrics were the strongest predictors of trust in AI systems. They are also an American punchline.

When asked whether they trusted their government to regulate AI responsibly, Americans came dead last. On net, the world trusts its governments by an average of 17 points; America is 27 points in the hole.

This is a strong signal, and one that makes qualitative sense. Considering our history of partisan gridlock, untamable tech giants, and a Congress with poor technical chops, it is no wonder that we struggle to imagine effective national regulation.

The White House recently released its AI Action Plan for the country. Setting politics aside, I (and other engineers I know) believe it’s a technically respectable document. The plan contains many provisions intended to spur AI development and infrastructure and to cement US control of the AI computing stack. The administration has chosen to prioritize a global AI race with China ahead of AI safety. This is a coherent decision, yet one that is out of line with American opinion.

A massive AI report out of Stanford found “broad support for AI regulation among local U.S. policymakers.” In 2023, “73.7% of local U.S. policymakers—spanning township, municipal, and county levels—agreed that AI should be regulated.” This support was more pronounced among Democrats (79.2%) than Republicans (55.5%), but a majority of both supported stricter regulation.

What exactly would ‘stricter regulation’ look like? According to the Stanford report, the strongest backing was for stricter data privacy rules (80.4%), retraining for the unemployed (76.2%), and AI deployment regulations (72.5%).3 Other proposed AI policies (such as a ‘robot tax’) were less popular, but those three are generally sensible and within Congress’s ability to pass.

Yet I don’t think national AI legislation is likely, at least for now. Republican legislators, the ones currently in power, seem more likely to move in the other direction. The recent ‘Big Beautiful Bill’ nearly contained a provision preventing AI regulation at the state level. Crypto, a technology with less utility and less adoption (14%) yet near-universal (95%) recognition, is not facing regulators anytime soon either.

So what will happen?

As the federal government prioritizes deregulation, the states are beginning to play a more aggressive role. They are passing restrictive laws: North Carolina, for example, has banned engineered emotional dependence in AI chatbots, and Tennessee’s ELVIS Act criminalizes the unauthorized voice impersonation of performers. If there is to be any legal backlash to AI adoption, this is the most likely avenue for it to work through. It will be worth watching how conflicts between Congress and the states are resolved here.

Europe might fill the void and reclaim its traditional role as the chief regulator of global tech. It also might not.

The informally termed ‘Brussels Effect’ refers to how European regulation tends to chart a course for the world, and Europe did recently pass regulation —the AI Act— that applies to any model served in Europe.4 Major AI players, including OpenAI, Google, and Anthropic,5 have indicated they will abide by this law.

Yet Europe does not want to cripple its own nascent AI ecosystem. Mistral may not be the largest fish in the pond, but it is blowing hot and benefiting from a swell of nationalism. The tension between Eurocratic regulatory instincts and the desire to unleash a digital renaissance is difficult to resolve, and Euro-US tensions aren’t helping. It is difficult to imagine them opening a major legal front against American tech giants, especially not one that could be seen as undermining America’s ‘global dominance in Artificial Intelligence.’

My personal guess is that Europe will try to strike a balance between enforcing the AI Act and helping tech companies navigate the new regulatory framework, informally erring on the side of tech. Europe may have taken more regulatory steps, but its AI opinion is described by Ipsos as a “mid-range of excitement,” and it is, in practice, more pro-AI than we are.

So what happens if the US continues to avoid regulating the industry? If you ask Eliezer Yudkowsky, everyone will die.

I’m not going to stake out a position on existential AI risk (that would be a very different article), but it’s notable that a sizable chunk of AI/ML researchers genuinely believe AGI will mean our extinction. The new book If Anyone Builds It, Everyone Dies pushes that message from rationalist forums into the public square.

Given how many people are already predisposed to dislike the technology, it may find a receptive audience. There are calls for AI regulation from the cultural left, the populist right, celebrity academics, disaffected memelords, and deeply technical AI rationalists. My friends and peers6 are as likely to discuss what AI will take away (jobs, stability, education, even the human voice) as what it can offer.7 It’s not impossible to see, squinting into the future, a kind of mass social rejection of the technology.

I think this would be a tragedy.8 AI carries many risks, and I support common-sense regulation of the industry, but it is also a technology with unbelievable potential. I hope the industry can take public skepticism seriously and align its development with the general good.

One detail that hints at how is buried in Pew’s breakdown of application-specific opinions:

Americans are twice as excited about AI’s impact on medical care as about most other listed use cases, and six times as excited as about its impact on personal relationships and elections. That signals strong social support for AI in the domains where it is seen as additive, and it is likely why the GPT-5 whitepapers so prominently featured health benchmarks.

The best thing the government could do to boost AI opinion is to demonstrate a clear ability to set effective safeguards and earn back public trust. Until that happens, Americans will remain the world’s most skeptical AI adopters. And given Congress’s track record, I won’t hold my breath.
