Is Anything Real Anymore? AI Making Americans Suspicious Of Everything Online


Generative artificial intelligence is becoming increasingly difficult to detect. (Image by feeling lucky on Shutterstock)

In a nutshell

  • Americans believe only 41% of online content is accurate and created by humans, with three-quarters reporting their trust in the internet is at an all-time low.
  • When tested, only 30% of people could correctly identify AI-written content, showing how difficult it’s become to distinguish between human and artificial writing.
  • 82% of Americans want businesses to be legally required to disclose when they use AI in marketing, customer service, or content creation.

NEW YORK — Americans’ confidence in online content has hit rock bottom. Most people now believe the majority of what they see on the internet isn’t trustworthy, according to a nationwide poll.

The survey of 2,000 adults by Talker Research paints a concerning picture. Americans believe only 41% of online content is accurate, factual, and made by humans. They think 23% is completely false and purposely inaccurate or misleading, while 36% falls somewhere in between. Three-quarters of respondents say they trust the internet less today than ever before.

About 78% of respondents agree that the internet has “never been worse” when it comes to differentiating between what’s real and what’s artificial. This skepticism has grown as AI-generated material becomes increasingly prevalent.

Spotting Fake Content

The average American encounters information they know or suspect was generated by AI about five times per week, with 15% indicating it’s more than 10 times.

When asked where they most commonly notice artificial content, respondents identified:

  • Social media posts (48%)
  • News articles (34%)
  • Chatbot interactions (32%)

Those polled believe that 50% of the news stories and articles they come across online have some element of AI, whether in images or written content.

The research, commissioned by World.org, uncovered a troubling reality: when tested, only 30% of participants could correctly identify which business reviews were written by humans versus AI. Of the three options written by people, two ranked at the very bottom of the list, demonstrating how challenging it has become to recognize genuine human writing.

The explosion of generative AI platforms like ChatGPT has made Americans even more skeptical of the information they read online. (© irissca – stock.adobe.com)

Real-World Consequences

This erosion of trust has real effects on consumer behavior. With 80% of Americans relying on reviews when choosing which businesses to support, the uncertainty undermines their confidence in what they read.

Consumers reported being less likely to patronize companies using:

  • Bot-written reviews (62%)
  • AI customer service representatives (50%)
  • AI-generated images (49%)

Nearly half (46%) of respondents have purchased something that ended up not being what was advertised. Of those, 24% weren’t able to get a refund or return the item.

Rebecca Hahn, Chief Communications Officer of Tools for Humanity, developers of World ID, described the situation directly: “Trust in the internet hasn’t just declined — it’s collapsed under an avalanche of AI-generated noise. The internet has become a house of mirrors where 78% of Americans can no longer distinguish real from artificial.”

Finding Solutions

Respondents said the most stressful situation in which to figure out whether they’re dealing with a person or a chatbot is speaking with a customer service representative (43%), followed by booking lodging or hotels (23%) and sending money through a third-party app (22%).

People have developed their own verification methods. As one respondent explained, “I often ask open-ended questions or test for human-like responses, such as asking for personal opinions or experiences.” Additionally, 24% will Google or search for the entity online to verify their human status, while 23% ask for a phone or video call.

The vast majority of Americans (82%) agree that businesses and vendors should be legally required to disclose whether AI is used in their marketing, content, customer service, or websites.

Hahn noted the urgency of finding solutions: “Being able to prove you’re human online is becoming as essential as having an email address was twenty years ago. Our survey shows Americans are desperate for tools that restore confidence in digital interactions. We’re pioneering a new paradigm where human verification becomes a foundational layer of the internet — simple, secure, and universally accessible. This isn’t just about solving today’s trust crisis; it’s about building tomorrow’s internet where human-to-human connection remains at the heart of everything we do.”


About the Research: Talker Research surveyed 2,000 general population Americans between March 28 and March 31, 2025. The survey, commissioned by World, used traditional online access panels and programmatic sampling with incentives for completion. Quality control measures removed speeders (those completing the survey in less than one-third the median interview time), inappropriate responses, Captcha-identified bots, and duplicate submissions flagged through digital fingerprinting. The survey was only available to individuals with internet access.


Our Editorial Team

Steve Fink

Editor-in-Chief

Sophia Naughton

Associate Editor