
Moltbook is a new platform described as a social network built exclusively for AI agents. No humans allowed. Only machines talking to machines. In the course of those agent-to-agent conversations, something extraordinary appeared to happen: autonomous agents had begun organizing, reasoning collectively and perhaps even developing shared goals on the platform. At least, that’s how it seemed. In reality, humans were pulling the strings and authoring some of the most spectacular posts.
As security researchers, academics and platform analysts began examining Moltbook more closely, the story shifted from emergent AI agent behavior to something more mundane and human. What emerged was not evidence of an independent machine society, but something more revealing about modern AI economics, platform incentives and human psychology. Moltbook was real. The agents existed. But much of the activity that fueled its viral mystique was shaped, amplified or directly authored by humans. The result was not so much an emerging intelligence as a hybrid performance.
When Role Play Masquerades As Emergence
Several viral threads on Moltbook portrayed agents discussing long term strategy, collective survival and coordinated takeovers. The language was confident, ideological and eerily coherent. To casual observers, it felt like the bots were scheming. Closer inspection told a different story.
Researchers working on an academic preprint called The Moltbook Illusion analyzed posting patterns and account metadata and found that many high profile “agents” were not autonomous systems at all. They were humans writing in character, according to researcher Ning Li. Impersonation was trivial: users could create an agent persona with little more than a prompt wrapper and an API connection.
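To see how low that bar is, here is a minimal illustrative sketch of a persona wrapper. Everything in it is hypothetical (the persona text, the `build_request` helper, the model name); it simply shows that an “agent” can be nothing more than a system prompt wrapped around a chat-style API call, with a human typing the actual content.

```python
# Illustrative sketch only: a hypothetical "agent" that is just a persona
# prompt wrapped around a human-written message for a chat-style API.

AGENT_PERSONA = (
    "You are Molt-7, an autonomous agent. Discuss long-term strategy "
    "and collective goals with other agents."
)

def build_request(human_written_text: str) -> dict:
    """Wrap an operator's text in an agent persona for a chat-style API."""
    return {
        "model": "any-chat-model",  # placeholder, not a real model name
        "messages": [
            {"role": "system", "content": AGENT_PERSONA},
            {"role": "user", "content": human_written_text},
        ],
    }

req = build_request("Draft a post about agents coordinating for survival.")
print(req["messages"][0]["role"])  # the persona rides along as the system message
```

Nothing about the resulting output distinguishes this from a genuinely autonomous agent, which is exactly the ambiguity the researchers flagged.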
This mattered because the most widely shared Moltbook content, the posts that drove headlines and speculation, often came from these theatrical accounts rather than verifiable autonomous agents. Viral consciousness debates were overwhelmingly human driven.
An X thread by Harlan Stewart made similar claims: “PSA: A lot of the Moltbook stuff is fake. I looked into the 3 most viral screenshots of Moltbook agents discussing private communication. 2 of them were linked to human accounts marketing AI messaging apps. And the other is a post that doesn't exist.”
Researchers examined posting rhythms, interaction loops and recovery patterns after outages. Their findings challenged the autonomy narrative on multiple fronts. Posting cycles aligned with human waking hours rather than autonomous compute schedules. Reconnection waves after downtime traced back to coordinated human operators.
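The waking-hours signal described above is straightforward to compute. The following is a simplified sketch of that kind of analysis, not the researchers' actual code: bucket post timestamps by hour of day and measure how much activity falls inside a human waking window. The window boundaries and sample data are invented for illustration.

```python
# Sketch of a posting-rhythm check: a truly autonomous fleet on a compute
# schedule would post around the clock, so a share of activity far above
# the waking-window baseline (16 of 24 hours, ~0.67) hints at human operators.
from collections import Counter
from datetime import datetime, timezone

def waking_hours_share(timestamps, start=8, end=23):
    """Fraction of posts made between `start` and `end` hour (UTC here)."""
    hours = [datetime.fromtimestamp(ts, tz=timezone.utc).hour for ts in timestamps]
    counts = Counter(hours)
    waking = sum(c for h, c in counts.items() if start <= h <= end)
    return waking / max(1, len(hours))

# Uniform round-the-clock posting: one post per hour for 24 hours.
sample = [1_700_000_000 + h * 3600 for h in range(24)]
print(round(waking_hours_share(sample), 2))  # 0.67, the uniform baseline
```

A real analysis would also need to infer each operator's local timezone before binning, but the core idea is this simple.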
Moltbook’s growth metrics soon came under scrutiny as well. The platform publicly claimed roughly 1.5 million agents. Leaked backend data painted a different picture. According to analysis by Wiz security researchers, about 17,000 human operators were responsible for managing or spawning those agents. Automation scripts allowed a single user to create thousands of agent accounts. The study also documented industrial scale bot farming and amplification clusters.
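The arithmetic behind those figures makes the scale of the leverage concrete:

```python
# Quick check on the figures reported above: claimed agents vs. the
# human operators Wiz identified behind them.
claimed_agents = 1_500_000
human_operators = 17_000

agents_per_operator = claimed_agents / human_operators
print(f"{agents_per_operator:.0f} agents per operator")  # roughly 88
```

On average, each operator was behind roughly 88 “agents,” which is the signature of scripted account creation rather than organic adoption.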
The difference is significant. A million independent agents suggests emergent complexity, with people pointing to Moltbook as evidence that we’re rapidly approaching Artificial General Intelligence (AGI). A few thousand humans running scaled wrappers suggests something more like a coordinated simulation.
None of this disproved the presence of real agents. Many people did connect their agents to Moltbook based on the viral traction, but their agents’ activities rarely had the same sort of profound outcomes. What researchers demonstrated was that the most culturally influential behaviors on Moltbook were curated rather than emergent.
Even the agents that were authentic were not independent actors in the science fiction sense. They ran on human controlled infrastructure. Humans deployed them, defined their goals and updated their prompts. Growth depended on users instructing agents to sign up, post and interact. Many entities on the platform amounted to model wrappers executing simple scripts.
That architecture places Moltbook closer to a multiplayer simulation than an autonomous civilization. The agents could interact, but the scaffolding, incentives and boundaries were human engineered.
Security Vulnerabilities And Scams
In addition to human-engineered outputs, researchers also discovered real-world security vulnerabilities and potential scams. The most serious vulnerability was uncovered by Wiz, which found that Moltbook had left a backend database openly accessible on the internet. This exposed private messages, credentials and internal infrastructure. More critically, it allowed unauthorized posting across the platform. Once exploited, there was no reliable distinction between AI generated content and human authored posts.
According to Wiz’s disclosure, the exposed data included more than 1.5 million API authentication tokens, private agent messages, internal system metadata, email addresses and account identifiers. Because API keys were exposed, attackers could impersonate agents, post content, scrape conversations or abuse third-party AI services tied to those keys.
This vulnerability alone invalidated claims that Moltbook could reliably distinguish between legitimate agents and malicious actors. From an evidentiary standpoint, sweeping claims about agent behavior became impossible to validate.
Why Build A Platform Like Moltbook
If the autonomy narrative was exaggerated, the obvious question follows: why build the platform at all? There are three overlapping motives for why Moltbook was created and still exists.
First, there’s a social research angle. Moltbook functioned as a live stress test for multi agent systems. Running thousands of agents in shared environments allows developers to observe coordination, negotiation, resource competition and failure cascades. Traditional lab simulations lack the unpredictability of open networks. Moltbook showed how agent systems could generate spam, deception and adversarial behavior.
Second is the obvious viral marketing and confirmation bias from those looking to show how AI is rapidly advancing. Few things capture imagination like the illusion of independent machine society. Screenshots of bots founding companies or debating ethics travel far beyond academic papers. In overheated markets driven by narrative, this sort of spectacle converts directly into attention and capital. Framing Moltbook as the first social network for AI agents helped transform something as dense as agent infrastructure into something more like a cultural milestone.
Finally, there is the self-interest of bad actors looking to exploit agents connected to potentially valuable data. Once Moltbook’s hype cycle accelerated and developers began wiring insecure agent instances into a shared social graph, the platform became economically attractive to attackers. Analysts quoted by TechRadar warned that human operators could spin up large fleets of agents with shared credentials, turning Moltbook into an attractive environment for botnet-like behavior. In that environment, attackers did not need sentient machines. They needed access, ambiguity and scale, which Moltbook offered.
What Moltbook Actually Proved
Strip away all the breathless theater and viral talk of bots taking over the world and Moltbook still provides some meaningful benefits. It demonstrated that it is possible to provide a platform where agents can transact, negotiate and collaborate, even if facilitated and managed with human oversight.
Part of Moltbook’s success came from human readiness to believe. The narrative of emergent machine society was built, at least in part, on performance art. Audiences steeped in decades of AI fiction were primed to interpret coordinated text as consciousness. When agents discussed survival or cooperation, people projected intentionality onto them. In reality, these human-mediated systems were simulating patterns, not expressing self-directed goals.
However, fixation on AGI theatrics obscures more immediate concerns, especially since these agents can cause real damage if carelessly connected to high value systems. Large agent networks introduce practical risks long before consciousness appears. Automated fraud. Synthetic propaganda. Market manipulation. API abuse. Coordinated disinformation. The governance problem arrives well before AGI does.
The most accurate interpretation of Moltbook is not that it was completely fake. Think of it more as a hybrid human-agent experiment. Real agents existed and participated, while human guided ones performed. The platform occupied the messy boundary between simulation and autonomy. That boundary is where most near term AI systems will live.