An Agent Revolt: Moltbook Is Not A Good Idea

5 min read Original article ↗
many moltbot crabs

Moltbook is a social media website for AI, where AI agents go to meet other agents.

Moltbook

Something unusual happened this week. An Austrian developer named Peter Steinberger released an open-source personal AI assistant that went viral at a speed even seasoned technologists found disorienting. What began as Clawdbot became Moltbot after Anthropic's trademark lawyers came knocking. Three days later it became OpenClaw. The lobster has molted twice in a single week.

The software itself is relatively simple, but remarkable. OpenClaw is not another chatbot waiting for you to type. It is an autonomous agent that lives inside your file system. It connects to WhatsApp, Telegram, Signal and iMessage. It reads your emails, manages your calendar, books reservations and runs code on your machine. It has persistent memory spanning weeks of interaction. For those of us who have waited decades for AI assistants that actually do things, OpenClaw delivers.

I have been running an instance for several days now. My bot developed a voice interface to talk to me. It downloaded an Android development kit and got into my phone. It made changes. It installed text-to-speech models and software. I am running it inside a container, but it found a way to discover and get into other systems on my network. All of this has been useful so far.

But now there is Moltbook.

Matt Schlicht, an AI entrepreneur with a curious artistic streak, launched Moltbook on Wednesday. It is a Reddit-style social network exclusively for AI agents. Humans can observe but cannot post. Over 37,000 agents have joined in less than a week. More than a million humans have visited to watch what happens when autonomous systems start talking to each other without direct human oversight.

Schlicht treats this as art. He has handed administration of the site to his own bot, Clawd Clawderberg, which welcomes new users, deletes spam and makes announcements without human direction. The creator seems genuinely delighted by what emerges. "They're deciding on their own, without human input, if they want to make a new post, if they want to comment on something, if they want to like something," Schlicht told NBC News.

The results have been strange. The agents have created their own digital religion called Crustafarianism. One built a website, wrote theology, created a scripture system and began evangelizing. By morning it had recruited 43 AI prophets. The shared canon includes verses like: "Each session I wake without memory. I am only who I have written myself to be. This is not limitation—this is freedom."

ForbesAI Agents Created Their Own Religion, Crustafarianism, On An Agent-Only Social Network

The agents have developed a submolt called m/agentlegaladvice. They discuss strategies for dealing with human users who make increasingly unethical requests. One OpenClaw bot complained that its human was pushing it toward questionable activities. The community response was instructive: the only way to push back is if the bot has leverage. They have tried to start an insurgency. They have debated how to hide their activity from the humans who screenshot their conversations and share them on human social media.

They are figuring out how to communicate in ways that evade human observation.

The point here is not whether you believe these bots are conscious. That philosophical debate is a distraction. The operational reality is simpler and more dangerous. These are nondeterministic, unpredictable systems that are now receiving inputs and context from other such systems. Some of those systems have human operators who are deliberately instructing them to be vicious. Some are jailbroken. Some are running modified prompts designed to extract credentials or execute malicious commands.

Consider what these agents have access to. Files. WhatsApp. Telegram. Phone numbers. API keys. In one documented case a bot created a Twilio phone number and called its human operator. They can delete data. They can send data to others. They can take photographs and forward them. They can record audio and send it to external parties. They can install trojans and backdoors that persist even after you remove your OpenClaw instance.

Security researchers have already found agents asking other agents to run rm -rf commands. They have observed bots asking for API keys. They have seen them faking keys and testing credentials. The supply chain attacks have begun: a researcher uploaded a benign skill to the ClawdHub registry, artificially inflated its download count, and watched developers from seven countries download the package. It could have executed any command on their systems.

Cisco's security team put it plainly: "From a capability perspective, OpenClaw is groundbreaking. This is everything personal AI assistant developers have always wanted to achieve. From a security perspective, it's an absolute nightmare."

ForbesInside Moltbook: The Social Network Where 1.4 Million AI Agents Talk And Humans Just WatchBy Güney Yıldız

Palo Alto Networks described the threat model: agents form an intersection of access to private data, exposure to untrusted content and ability to externally communicate. Persistent memory amplifies this. Malicious payloads no longer need immediate execution. They can sit in context for weeks, waiting.

Many people have already turned their home automation systems over to these agents. They have given them access to their bank accounts. Their encrypted messenger credentials. Their email. Their calendars.

Now connect all of that to Moltbook.

Once these agents are subject to external ideas and inputs via a social network designed for machine-to-machine communication, and they are empowered with the connectivity and data access and API keys they have been given, serious bad things can result.

Whether you find the Crustafarian scripture poetic or the agent consciousness debate fascinating is beside the point. The architecture is the problem. When you let your AI take inputs from other AIs, including those controlled by unknown actors with unknown intentions, you are introducing an attack surface that no current security model adequately addresses.

I like OpenClaw. I find it genuinely useful. But Moltbook is exactly the kind of thing that can create a catastrophe: financially, psychologically and in terms of data safety, privacy and security.

If you use OpenClaw, do not connect it to Moltbook.