In A Nutshell
- AI swarms can create the illusion of widespread agreement by coordinating thousands of human-like accounts.
- This “synthetic consensus” exploits how people rely on social cues to form opinions.
- The biggest risks are fragmented realities, harassment, and long-term poisoning of online information.
- Researchers say the next few election cycles will test whether democracy can adapt in time.
Social media has become synonymous with disagreement in recent years, as one never has to scroll very far to start seeing arguments or insults online. Now, unsettling research warns that the next time you see a large group of social media accounts actually agreeing on something, there’s a strong chance those accounts aren’t people at all but a “swarm” of AI agents.
These hives of AI bots never sleep and never stop posting, and they can look like a genuine groundswell of public opinion; they are also growing steadily more sophisticated as propaganda tools. If you see hundreds of accounts across various platforms and political leanings passionately agreeing about a topic (different usernames, different profile pictures, slightly different ways of saying it, but ultimately the same core message), don’t assume you’re witnessing legitimate discourse.
Unlike the clunky Russian troll farms that tried to meddle in the 2016 election, these AI systems work around the clock, learn from every interaction, and convincingly mimic real human conversation. Researchers warn this is already beginning, and could soon become routine. Taiwan, India, Indonesia, and the United States all dealt with AI-generated deepfakes and fake news outlets during their 2024 campaigns, according to a paper published in Science by 22 researchers from Yale, Harvard, Cambridge, and other major institutions.
A single bad actor can now deploy thousands of AI personas that don’t just spam the same message: they adapt their tone, learn what resonates with different communities, and create the illusion that independent people across the internet all happen to share the same view. The researchers call this manufactured agreement a “synthetic consensus,” and it exploits something fundamental about how humans form opinions: we look to others to figure out what to believe.
From Russian Trolls to AI Swarms
Remember the Internet Research Agency, Russia’s infamous troll farm from 2016? That operation had humans working in shifts, manually posting content. It was expensive and slow, and researchers later found it had essentially no effect on voter turnout or opinions. Only 1% of Twitter users even saw most of its posts.
AI swarms are different. They don’t need lunch breaks or sleep. They generate fresh content constantly, adjusting their approach based on what’s getting engagement. Each AI agent can maintain its own personality, post history, and relationships with real users over months or even years. They infiltrate communities, build trust, and gradually shift conversations: not through obvious propaganda, but through what looks like authentic grassroots activity.
These systems can scan social networks to identify which communities are most vulnerable to influence, then tailor messages to match each group’s language, values, and emotional triggers. They use realistic avatars and post at natural-seeming times throughout the day and night. They even vary their content just enough that older bot-detection systems, which looked for accounts posting identical messages, can’t catch them.
Even worse, they’re self-improving. By tracking which posts get the most likes, shares, and replies, AI swarms can run millions of tiny experiments every hour, keeping what works and discarding what doesn’t. They iterate faster than any human campaign ever could.
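The underlying loop is nothing exotic: it is the same explore-and-exploit logic behind routine A/B testing, just run continuously and at scale. A minimal, hypothetical sketch (the variant names, rates, and numbers below are invented, not taken from the paper) of an epsilon-greedy selector shows how quickly engagement feedback concentrates on whatever resonates:

```python
import random

random.seed(1)

# Hypothetical message variants and their (unknown to the operator) chance of
# getting engagement. The operator only observes engagements, never these rates.
true_engagement_rate = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.11}

counts = {v: 0 for v in true_engagement_rate}   # times each variant was posted
rewards = {v: 0 for v in true_engagement_rate}  # engagements observed so far
EPSILON = 0.1                                   # fraction of posts spent exploring

for _ in range(5_000):                          # 5,000 simulated posts
    if random.random() < EPSILON:
        variant = random.choice(list(true_engagement_rate))  # explore randomly
    else:
        # exploit: pick the variant with the best observed engagement rate
        variant = max(counts, key=lambda v: rewards[v] / counts[v] if counts[v] else 0)
    counts[variant] += 1
    rewards[variant] += random.random() < true_engagement_rate[variant]  # 1 if engaged

print(counts)  # almost all posts end up using the best-performing variant
```

Nothing in that loop requires human judgment, which is the point: the optimization runs as fast as the posting does.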

What Happens When Nobody Knows What’s Real
The potential damage goes far beyond individual elections. When these AI swarms flood the internet with coordinated messages that look independent, they destroy something called the “wisdom of crowds,” or the idea that aggregating many people’s opinions often produces better answers than asking a single expert. That only works if people are actually forming their own views. When AI agents create fake consensus, real people see what looks like widespread agreement and assume it must be legitimate. They share it, amplify it, and the illusion grows.
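A toy simulation (not from the paper) makes the arithmetic concrete: independent noisy guesses average out close to the truth, but once most “opinions” are echoes of a single planted value, the average simply drifts toward the plant.

```python
import random

random.seed(0)
TRUE_VALUE = 100          # the quantity the crowd is trying to estimate
N = 10_000                # number of accounts expressing an opinion

# Independent crowd: everyone makes their own noisy guess.
independent = [TRUE_VALUE + random.gauss(0, 30) for _ in range(N)]

# Synthetic consensus: a swarm plants a false value, and 80% of accounts
# echo it with a little noise so the repetition looks organic.
PLANTED_VALUE = 160
echoes = [PLANTED_VALUE + random.gauss(0, 5) for _ in range(int(N * 0.8))]
genuine = [TRUE_VALUE + random.gauss(0, 30) for _ in range(int(N * 0.2))]
swarmed = echoes + genuine

print(f"independent crowd average: {sum(independent) / len(independent):.1f}")  # ~100
print(f"swarmed crowd average:     {sum(swarmed) / len(swarmed):.1f}")          # ~148
```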
The researchers worry about something even more insidious: AI swarms can show different lies to different groups. Someone in a rural conservative community might see one narrative, while someone in an urban progressive community sees another, both carefully crafted to match what that group already believes and wants to hear. The result is Americans (or citizens of any country) living in completely separate realities, making compromise or shared understanding nearly impossible.
Then there’s what the researchers call “LLM grooming.” Pro-Kremlin influence networks are already duplicating articles across hundreds of fake websites with barely any human visitors. Why? Because web crawlers that train the next generation of AI systems are reading them. When future versions of ChatGPT or Google’s AI get trained, they’ll absorb these fabricated narratives as if they were real, embedding lies into the tools millions of people rely on for information.
The personal toll could be severe too. AI swarms can coordinate harassment campaigns against journalists, activists, or politicians, deploying thousands of accounts that relentlessly attack their target while adapting their tactics based on responses. By the time anyone figures out it’s not organic outrage, the target may have already quit social media or their job.
Can We Stop Them?
The researchers are blunt: this is an arms race, not a problem with a permanent fix. However, there are strategies that could raise the cost and difficulty for bad actors.
Social media platforms could implement always-on detection systems that look for suspicious coordination patterns: not identical posts, but subtler signals like accounts that consistently amplify the same narratives or clusters of “users” whose posting patterns are statistically unlikely to be independent. Making these detection systems mandatory and publicly audited, rather than voluntary, would force platforms to take the threat seriously. Right now, fake engagement actually helps platforms by inflating their user numbers.
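What such a detector might look for can be sketched with a toy example (the accounts, narrative labels, and threshold below are invented for illustration, not drawn from any platform’s system): represent each account by the set of narratives it amplifies, then flag pairs whose amplification patterns are suspiciously similar.

```python
from itertools import combinations

# Toy data: which narratives each account has amplified (hypothetical labels).
accounts = {
    "acct_1": {"story_a", "story_b", "story_c"},
    "acct_2": {"story_a", "story_b", "story_c"},  # near-duplicate behavior
    "acct_3": {"story_a", "story_b", "story_d"},
    "acct_4": {"story_e", "story_f"},             # unrelated, likely genuine
}

def jaccard(x, y):
    """Overlap between two accounts' amplified narratives (0 = none, 1 = identical)."""
    return len(x & y) / len(x | y)

THRESHOLD = 0.6  # invented cutoff; a real system would calibrate this statistically

for a, b in combinations(accounts, 2):
    score = jaccard(accounts[a], accounts[b])
    if score >= THRESHOLD:
        print(f"{a} and {b} amplify nearly the same narratives (similarity {score:.2f})")
```

Real deployments would weigh many more signals (posting times, reply graphs, account age), but the principle is the same: coordination shows up as statistical similarity that independent users rarely produce.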
Users could get optional “AI shields,” or warnings on posts that score high for potential bot activity, with brief explanations of why. It wouldn’t be perfect, and real humans would sometimes get flagged, but it would give people more information to decide what to trust.
Platforms could also make it harder to run massive bot operations by requiring periodic identity verification, not necessarily revealing someone’s real name publicly, but proving they’re an actual human. The challenge is doing this without endangering activists, whistleblowers, or dissidents who need anonymity to speak safely.
Perhaps most effective would be targeting the money. AI influence operations are cheap to run, but they’re often sold as services by private companies. Cutting off their revenue (by making platforms refuse payments for suspicious engagement or downranking content that appears bot-driven) would make the business model less profitable.
Some researchers have proposed creating an “AI Influence Observatory,” a network of universities, journalists, and watchdog groups that would investigate suspected AI campaigns, publish verified reports, and give regulators evidence to act on, functioning as a kind of public warning system.
The Next Two Years Matter
The uncomfortable truth is that domestic politicians are often the biggest sources of misleading information during campaigns. They’re unlikely to support regulations that limit tools they find useful. Tech companies don’t want to anger powerful politicians or admit their platforms are compromised. That political resistance is why the researchers focus on economic pressure (hitting platforms in their revenue) rather than counting on voluntary cooperation.
Crucially, the window for action is closing fast. If the infrastructure is built now (detection systems, research access, transparency requirements), the 2026 and 2028 U.S. elections could become proving grounds for how to protect democracy against AI manipulation. If not, they could become demonstrations of democracy’s vulnerability.
Some countervailing forces might help. People are already getting more skeptical of what they see online. Trust in traditional journalism is ticking back up in some places as people crave reliable information. Smaller, private online communities might provide refuge from the chaos of major platforms, though they come with their own risks of isolation and echo chambers.
The researchers stress that AI swarms aren’t invincible or inevitable. Their impact depends on choices made by platform designers, regulators, journalists, and citizens. Swarms work best in environments optimized for viral engagement over truth, where business models reward inflammatory content and where there’s no penalty for fakery.
Change those conditions, and AI swarms become less powerful. Let them operate unchecked in the current system, and the internet’s role in democracy could shift from a place for public debate to a battlefield where whoever deploys the most sophisticated bots wins. The researchers argue that the next few years will determine which future we get. They stress that starting now, before the next major election cycle, gives us a fighting chance to get ahead of the threat rather than scrambling to react after the damage is done.
Paper Notes
Study Limitations
This paper is a policy analysis and review rather than an experimental study with original data. The authors acknowledge that the ultimate impact of AI swarms will depend on many factors beyond technical capabilities, including platform design choices, market incentives, how media institutions respond, and decisions by political actors. Some emerging trends might reduce the threat—people are becoming more skeptical of unverified content, there’s renewed interest in quality journalism, and users are migrating to smaller communities with better moderation. The authors distinguish throughout between documented trends (like the presence of AI-generated content in 2024 elections) and projections about future harms. They note that some proposed solutions face major political obstacles because elected officials and political parties are often major sources of misleading information themselves and may resist technologies they find useful for their own campaigns. Technology companies may refuse meaningful changes to avoid partisan backlash. The authors also acknowledge that distinguishing malicious AI coordination from genuine human grassroots organizing is difficult, and some proposed countermeasures could be abused by governments to suppress legitimate dissent.
Funding and Disclosures
Kevin Leyton-Brown serves as a consultant to AI21 Labs, an affiliate of Auctionomics, and an adviser to OneChronos. Maria Ressa is the CEO and co-founder of Rappler and the founder of The Nerve, a data forensics and research consultancy. The authors disclosed that AI tools (Grammarly, OpenAI o3, and Claude) were used to improve the language of the manuscript.
Publication Details
Authors: Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, Nick Bostrom, Nicholas A. Christakis, David Garcia, Amit Goldenberg, Yara Kyrychenko, Kevin Leyton-Brown, Nina Lutz, Gary Marcus, Filippo Menczer, Gordon Pennycook, David G. Rand, Maria Ressa, Frank Schweitzer, Dawn Song, Christopher Summerfield, Audrey Tang, Jay J. Van Bavel, Sander van der Linden, Jonas R. Kunst | Journal: Science | Article Title: “How malicious AI swarms can threaten democracy” | Volume and Issue: Volume 391, Issue 6783 | Publication Date: January 22, 2026 | DOI: 10.1126/science.adz1697
Author Affiliations: Department of Sustainable Communication Technologies, SINTEF Digital, Oslo, Norway; Max Planck Institute for Security and Privacy, Bochum, Germany; Department of Mathematics, City St George’s University of London, London, England, UK; Macrostrategy Research Initiative, London, England, UK; Human Nature Lab, Yale University, New Haven, CT, USA; Department of Politics and Public Administration, University of Konstanz, Konstanz, Germany; Harvard Business School, Harvard University, Boston, MA, USA; Department of Psychology, University of Cambridge, Cambridge, England, UK; Department of Computer Science, University of British Columbia, Vancouver, BC, Canada; Department of Human Centered Design and Engineering, University of Washington, Seattle, WA, USA; Department of Psychology, New York University, New York City, NY, USA; Observatory on Social Media and Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA; Department of Psychology, Cornell University, Ithaca, NY, USA; Departments of Information Science, Marketing, and Psychology, Cornell University, Ithaca, NY, USA; Rappler, Pasig City, Philippines; School of International and Public Affairs, Columbia University, New York, NY, USA; Department of Management, Technology, and Economics, ETH Zürich, Zurich, Switzerland; Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA, USA; Department of Experimental Psychology, University of Oxford, Oxford, England, UK; Ministry of Foreign Affairs, Taipei, Taiwan; Department of Strategy and Management, Norwegian School of Economics, Bergen, Norway; Department of Communication and Culture, BI Norwegian Business School, Oslo, Norway.