Americans Have Mixed Views of AI – and an Appetite for Regulation


AI is sweeping the American economy. A majority of Americans either use AI tools or have at least tried them. But AI use is only half the story – a supermajority of Americans say that AI should be regulated to protect privacy and ensure safety. And many remain worried about potential job losses stemming from the emerging technology.

It’s time to better understand national attitudes toward AI use, regulation, and favorability.

Who uses AI, and for what?

Most of our respondents had used AI at least once: 58% report using or trying AI, specifically tools like ChatGPT or Claude, divided roughly evenly between fairly regular users (30% use it at least a few times a month) and more infrequent users (29% have used AI, but only once a month or less). Nonusers are more likely to be older (62% of people over 65 have never used AI), to have not gone to college (47%), or to work in service jobs (35%). Only 18% of white-collar workers say they have never used AI.

Among those who do use AI, 63% have at least tried it out for work purposes, and 34% report using it at work consistently (a few times a month or more). Work usage, like AI usage overall, is more common among white-collar workers, a majority of whom (55%) use it consistently. Personal use of AI is more common still: 91% have at least tried using an AI chatbot or writing tool, and 54% report consistently using it in their personal lives. Gen Z turns to AI more often than its older counterparts, with 68% regularly engaging with AI for personal use (compared to just 40% among Boomers).

Personal AI use encompasses a wide range of applications, but by far the most common is information gathering and answering questions (63%). Frequent AI users are especially likely to use AI as an alternative to traditional Google search or other research tools (68%). If this use case continues, messaging and communication on anything from public health to election campaigns will be filtered through AI models before reaching people. This could change how messages are interpreted (particularly given the use of AI summaries), or significantly alter their reach in ways that are difficult to measure. Those interested in talking effectively to Americans should consider how their messaging strategy would work with these AI intermediaries, how their messages will be interpreted by AI, and whether what they say will still reach their intended audience.

How Americans compare AI to other technologies

While they know the names of the tools (ChatGPT, etc.), respondents don’t know much about the companies responsible for AI products. This is in contrast with older tech companies: Only 5% haven’t heard of Google, and 79% are favorable toward the company. For Amazon, favorability is at 80%. When asked about OpenAI, 42% hadn’t heard enough to form an opinion. For Anthropic, this figure hit 81%.

Americans don’t view AI as favorably as other emerging technologies from the past quarter century. Cell phones (76% total positive; net +68), the internet (75% total positive; net +66), and solar energy (72% total positive; net +65) are all viewed as having a very positive impact on society. AI (38% total positive; net +8) is more in line with social media (41% total positive; net +7). Most Americans aren’t sure how exactly AI will develop or change society. The one technology viewed less favorably than AI is cryptocurrency, with only 22% saying it has a positive impact on society, for a net favorability of -16.
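For clarity, the “net” figures here are simply the share who say a technology has a positive impact minus the share who say its impact is negative; unsure responses drop out. Below is a minimal sketch of that arithmetic, using the figures above; the negative shares are back-calculated from the reported positive and net numbers, not figures published separately in the survey.

```python
# Net favorability = % positive impact - % negative impact.
# The negative shares below are back-calculated from the reported
# positive and net figures, not separately published survey numbers.
technologies = {
    # name: (% positive, % negative)
    "cell phones": (76, 8),      # reported net +68
    "the internet": (75, 9),     # reported net +66
    "solar energy": (72, 7),     # reported net +65
    "AI": (38, 30),              # reported net +8
    "social media": (41, 34),    # reported net +7
    "cryptocurrency": (22, 38),  # reported net -16
}

for name, (positive, negative) in technologies.items():
    print(f"{name}: net {positive - negative:+d}")
```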

When Americans are asked to think about potential future impacts on society, opinions of AI remain mixed, with respondents split nearly evenly among expecting positive impacts, expecting negative impacts, and being uncertain. Respondents believe that things like solar energy (70%), personalized medicine (57%), and nuclear energy (43%) will have positive future impacts, but are more skeptical of cryptocurrency (24% positive, 40% say it will have a negative impact) and self-driving cars (23% positive, 50% negative).

We also wanted to get a sense of how important Americans believe certain technologies to be. To provide perspective, we asked them to compare these new technologies to some they’re more familiar with: “extremely important” was anchored to the steam engine and electricity, “moderately important” to the smartphone, and “not very important” to the digital camera.

With these comparisons in mind, the median respondent puts AI on par with the invention of the smartphone. Only 7% believe that it’s more important than any other technology, with 16% of frequent AI users believing this, compared to just 5% of infrequent/nonusers.

Broad belief that AI will change the nature of work

Despite uncertainty about AI’s overall importance, 70% of Americans say this technology will dramatically transform work, though they’re much less certain about the exact nature of this transformation. A plurality believes that it will make work easier (49%), but a majority believes that AI will bring down wages (55%), including pluralities or majorities across education, race, and partisanship. More respondents think AI will hurt the economy overall (37%) than think it will decrease economic growth specifically (29%), a gap that probably reflects concern about wages. The perception that AI will replace human workers or outcompete them for jobs is a common one: 51% think AI will replace work done by humans, versus 33% who believe AI will supplement the work humans do. Service workers are especially convinced AI will replace human work (59%) rather than supplement it (30%).

A majority of Americans (56%) think that within 10 years, AI will be capable of performing most tasks that most people do at work. However, this drops significantly when respondents are asked about their own job or field: 43% think AI will be able to do most of those tasks within 10 years, with no notable differences by educational attainment or work type. Customer service representatives are generally agreed to be replaceable within 10 years (64%), followed by accountants (56%) and manufacturing workers (54%). Fewer than 1 in 3 Americans think electricians, truck drivers, or doctors could be replaced by AI in the next 10 years.

Americans overwhelmingly support AI regulation

Around two-thirds of Americans (67%) are more concerned about the government doing too little to regulate the dangers of AI than about it doing too much and stifling progress (12%). At the same time, they don’t support a full ban on AI advancement, preferring progress to continue with requirements for safety testing (62%). Americans’ preference for safety regulations over the fastest possible progress holds even when they’re presented with the idea that regulating AI would put the U.S. behind China: only 15% prefer a world where the U.S. government doesn’t regulate AI at all in order to compete with countries like China, while 67% prefer one where AI research continues under regulation, even if it develops more slowly than in other countries.

Even in the most extreme case — presenting the choice between no further progress in AI at all and totally unregulated AI — only 34% would support unregulated AI development, versus 30% who think the government should ban research to improve AI, with a 36% plurality saying they’re unsure.

In terms of perceived dangers from AI, Americans’ biggest concern is AI taking jobs and causing unemployment (42%), followed by concerns about privacy (35%) and misinformation (33%). These are also the top three areas Americans prioritize for regulation. Respondents are not especially concerned about AI wiping out humanity (12%), but express greater concern about the use of AI resulting in loss of human control (32%).

What AI is good at

While respondents see AI as better than humans at being “efficient” by a wide margin (+44 points), AI has virtually no advantage on “being convenient to those they’re trying to help” (+2). Humans are still seen as wildly better in terms of being moral (+53), making complex decisions (+30), protecting privacy (+28), and being transparent (+18).

We also asked about the potential for replacing humans in specific governmental tasks, since this has been talked up by some AI proponents. There are only two specific tasks (from our list tested in this survey, at least) where AI is seen as better than human government employees. One is identifying trends in data (+18) and the other is correctly verifying the information in forms (+8).

On anything less concrete and data-focused, respondents strongly prefer humans. Human government employees are preferred over AI by 77 points for judging criminal trials, by 59 points for conducting airport screenings, and by 55 points for answering questions about government services like Social Security. Public interest in AI replacement of humans in government and other tasks is minimal. 

Methodology

This survey was run by Tavern Research with an online sample of 2,301 American adults, fielded via web panels from August 1 to August 6, 2025. The margin of error is +/- 3%.
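For context, the reported margin of error is in line with standard practice. Under simple random sampling, the 95% margin of error for a proportion near 50% with n = 2,301 works out to roughly ±2 percentage points, and pollsters often report a somewhat larger figure to account for weighting and design effects. A minimal sketch of that textbook calculation (this is the generic formula, not one specified by Tavern Research):

```python
import math

# 95% margin of error for a proportion under simple random sampling,
# using the worst case p = 0.5. Weighting and design effects typically
# inflate this somewhat, which is consistent with the reported +/- 3%.
n = 2301   # sample size
p = 0.5    # worst-case proportion
z = 1.96   # z-score for 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {moe * 100:.1f} percentage points")  # ~2.0
```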

NOTE: Because this is a web-based survey, we expect reported AI usage to run slightly higher than it would in non-web-based surveys. This should not affect the other conclusions.

Dustbin

A question that was interesting, but didn’t lead to a larger conclusion, asked respondents what actually happens when they ask a tool like ChatGPT a question. 45% think it looks up an exact answer in a database, and 21% think it follows a script of prewritten responses. (Neither is the case: these models generate each response from statistical patterns learned during training rather than retrieving stored answers.)