AI Slop Is Ruining Reddit for Everyone


A Reddit post about a bride who demands a wedding guest wear a specific, unflattering shade is sure to provoke rage, let alone one about a bridesmaid or mother of the groom who wants to wear white. A scenario where a parent asks someone on an airplane to switch seats so they can sit next to their young child is likely to evoke the same rush of anger. But those posts may trigger a Reddit moderator’s annoyance for a different reason—they are common themes within a growing genre of AI-generated, fake posts.

These are examples that spring to mind for Cassie, one of dozens of moderators for r/AmItheAsshole. With over 24 million members, it's one of the biggest subreddits, and it explicitly bans AI-generated content and other made-up stories. Since late 2022, when ChatGPT first launched to the public, Cassie (who asked to be referred to by first name only) and other people who volunteer their time to moderate Reddit posts have been struggling with an influx of AI content. Some of it is entirely AI-generated, while other users have taken to editing their posts and comments with AI-powered tools like Grammarly.

“It’s probably more prevalent than anybody wants to really admit, because it’s just so easy to shove your post into ChatGPT and say ‘Hey, make this more exciting,’” says Cassie, who thinks as much as half of all content being posted to Reddit may have been created or reworked with AI in some way.

r/AmItheAsshole is a pillar of Reddit culture, a format that has inspired dozens if not hundreds of derivatives like r/AmIOverreacting, r/AmITheDevil, and r/AmItheKameena, a subreddit with over 100,000 members described as “Am I the asshole, but the Indian version.” Posts tend to feature stories about interpersonal conflicts, where Redditors can weigh in on who is wrong (“YTA” means “You’re the asshole,” while “ESH” means “Everyone sucks here”), who is right, and what the best course of action to take is moving forward. Users and moderators across these r/AmItheAsshole variants have reported seeing more content they suspect is AI-generated, and others say it's a sitewide issue happening in all kinds of subreddits.

“If you have a general wedding sub or AITA, relationships, or something like that, you will get hit hard,” says a moderator of r/AITAH, a variant of r/AmItheAsshole that has almost 7 million members. This moderator, a retiree who spoke on the condition of anonymity, has been active on Reddit for 18 years—most of its existence—and also had decades of experience in the web business before that. She views AI as a potential existential threat to the platform.

“Reddit itself is either going to have to do something, or the snake is going to swallow its own tail,” she says. “It’s getting to the point where the AI is feeding the AI.”

In a response to a request for comment, a Reddit spokesperson said: “Reddit is the most human place on the Internet, and we want it to stay that way. We prohibit manipulated content and inauthentic behavior, including misleading AI bot accounts posing as people and foreign influence campaigns. Clearly labeled AI-generated content is generally allowed as long as it’s within a community’s rules and our sitewide rules.” The spokesperson added that there were over 40 million “spam and manipulated content removals” in the first half of 2025.

Vibe Shift for the Worse

Ally, a 26-year-old who tutors at a community college in Florida and spoke using her first name only for her privacy, has noticed Reddit “really going downhill” in the past year because of AI. Her feelings are shared by other users in subreddits like r/EntitledPeople, r/simpleliving, and r/self, where posts in the last year have bemoaned the rise of suspected AI. The mere possibility that something could be AI-generated has already eroded trust between users. “AI is turning Reddit into a heap of garbage,” one account wrote in r/AmITheJerk. “Even if a post suspected of being AI isn’t, just the existence of AI is like having a spy in the room. Suspicion itself is an enemy.” Ally used to enjoy reading subreddits like r/AmIOverreacting. But now she doesn’t know if her interactions are real anymore, and she’s spending less time on the platform than in years past.

“AI burns everybody out,” says the r/AITAH moderator. “I see people put an immense amount of effort into finding resources for people, only to get answered back with ‘Ha, you fell for it, this is all a lie.’”

Detecting AI

There are few foolproof ways to prove whether something is AI-generated, and most everyday people rely on their own intuition. Text can be even harder to evaluate than photos and videos, which often have more definitive tells. Five Redditors who spoke to WIRED all had different strategies for identifying AI-generated text. Cassie notices when posts restate their title verbatim in the body or use em dashes, as well as when a poster has terrible spelling and punctuation in their comment history but posts something with perfect grammar. Ally is thrown off by newly created Reddit accounts and posts with emojis in the title. The r/AITAH mod gets an “uncanny valley” feeling from certain posts. But these tells could also be present in a post that doesn’t use AI at all.

“At this point, it’s a bit of a you-know-it-when-you-see-it kind of vibe,” says Travis Lloyd, a PhD student at Cornell Tech who has published research into new AI-driven challenges that Reddit moderators face. “Right now, there are no reliable tools to detect it 100 percent of the time. So people have their strategies, but they’re not necessarily foolproof.”