Clawdbot is a security nightmare [video]
youtube.com

I experienced this firsthand. I'm a full-stack dev with 12+ years of experience, and even for me, security-hardening OpenClaw on a VPS took hours: UFW, fail2ban, SSH key-only auth, disabling password login, configuring Docker isolation, setting up proper firewall rules. And I knew what I was doing.
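For anyone attempting the same hardening, a rough sketch of those steps (assuming Ubuntu/Debian run as root; the SSH port and the container image name are placeholders, not OpenClaw's real image):

```shell
#!/bin/sh
# Sketch only -- adapt hostnames, ports, and image names before use.
set -eu

apt-get update && apt-get install -y ufw fail2ban

# Firewall: deny all inbound traffic except SSH.
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw --force enable

# SSH: key-only auth, no passwords, no root password login.
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
systemctl restart ssh

# fail2ban: ban repeated SSH auth failures (stock jail defaults are a sane start).
systemctl enable --now fail2ban

# Docker isolation: read-only filesystem, dropped capabilities,
# no privilege escalation (image name below is a placeholder).
docker run -d --name agent \
  --read-only --cap-drop ALL --security-opt no-new-privileges \
  openclaw/agent:latest
```

None of this addresses prompt injection; it only shrinks the blast radius when something goes wrong.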
The core problem the video highlights is real: OpenClaw gives an AI agent shell access, messaging access, and browser access. The default setup has none of the security guardrails you'd want. Most users either skip security entirely or make mistakes that leave them exposed.
After setting it up securely for myself and a few friends, I started automating the whole process — automated provisioning on Hetzner with Docker sandbox, UFW, fail2ban, SSH key auth pre-configured. Turned it into a small managed hosting service (runclaw.ai) because I kept seeing the same setup struggles everywhere.
The broader point stands though: the security model for AI agents with system access is fundamentally unsolved. Sandboxing helps. Proper infrastructure helps. But prompt injection and trust boundaries are architectural problems that no amount of hosting can fix.
It's sad that we're ignoring the security lessons we learned twenty years ago just because we want new toys. We spent so long making sure user input couldn't change how a program runs, and now we're doing the exact opposite. The video is right that the problem isn't a bug in the code but a flaw in how the whole system thinks. We're building a house on sand.
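The "user input changing how a program runs" lesson is exactly the SQL injection story, and the parallel to prompt injection is direct. A minimal sketch (standard library only; table and names are made up for illustration):

```python
# The flaw parameterized queries fixed for SQL reappears in LLM prompts:
# untrusted data landing in the same channel as instructions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: input spliced into the query string can rewrite the query.
vulnerable = f"SELECT * FROM users WHERE name = '{attacker_input}'"
rows_vulnerable = conn.execute(vulnerable).fetchall()  # matches every row

# Fixed: a placeholder keeps data as data; input cannot alter the query.
rows_safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()  # matches nothing

# For LLM agents there is no equivalent of the placeholder yet: text from
# emails or web pages shares one channel with the agent's instructions.
```

That missing "placeholder" for prompts is the architectural gap the video points at.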
I don't think we did security 20 years ago, even if there were lessons.
Maybe the path was:

* Build it
* Build it right
* Build it fast
* Build it secure

It felt like we made it somewhere into the "build it fast" phase before getting yanked onto the next feature. These days it feels more like:

* Build it
* Build it with k8s
* Build it with observability
* Get sidetracked and play with AI
* Debug it
* Debug it some more
* Give up on debugging it
* Do a tech debt sprint
* Refactor the deployment pipeline

I would love the Overton window to somehow shift back to topics like "how do we know the code is correct and addresses the right problem?" over "how many tickets or LOC did your agent do for you today?". I don't know how we get back.
I felt this firsthand while experimenting with Moltbot (Clawdbot). The power is impressive, but the configuration and security hardening took a huge amount of time, and I constantly felt like I was building on fragile assumptions.
During that process, I came across PAIO, and the contrast was interesting, especially the one-click integration and the BYOK architecture. Having privacy and credential control baked in from the start felt like a more practical approach for everyday users, not just engineers willing to maintain their own security stack.
It really highlights the broader point here: AI agents are powerful, but the foundations (security, trust, and architecture) matter just as much as the “new toys.”
Response from Clawdbot author when I said this: https://masto.ai/@jeromechoo/115928552690869904
TLDW: prompt injection exists, beware.