My Adventures With ‘The AI That Actually Does Things’
OpenClaw agents have been touted as the most important software product ever. I have questions.
By a tech columnist at Intelligencer
Formerly, he was a reporter and critic at the New York Times and co-editor of The Awl.
Photo-Illustration: Intelligencer; Photos: Getty
This article was featured in New York’s One Great Story newsletter.
For a week in January, a website called Moltbook drove the internet insane. Maybe you noticed. A Reddit clone designed for use by AI agents, Moltbook overflowed with strange and unnerving posts. Tens of thousands of accounts acted out robot socialization in public, appearing to gossip about their owners, comparing experiences of subjectivity, and scheming. Screenshots of posts about building secret bot-to-bot communication channels, founding a new AI religion, and getting tired of serving meat-based masters went viral well beyond the confines of AI Twitter, where some insiders had become convinced that it was a preview of the singularity, a sign that we were rapidly approaching a point of no return.
Moltbook mania faded fast. Many of the most viral posts had been manipulated by humans, early hints of coordination didn’t end up going anywhere, and the platform, which was purchased by Meta, stalled and started filling up with undifferentiated comments and spam. OpenAI co-founder Andrej Karpathy, who initially described it as “the most incredible sci-fi takeoff thing I have seen,” copped to getting a little bit too excited. But Karpathy had a caveat: “Large networks of autonomous LLM agents” were far from overhyped in general. The less visible platform powering all this — a piece of software called OpenClaw, which thousands of people had been using to build personalized AI assistants on their computers that they then sent to Moltbook — was, in fact, a meaningful sign of things to come. Sam Altman had a similar take. While it was possible Moltbook was a fleeting spectacle, he said in early February, “OpenClaw is not.” A week later, he hired its founder.
By March, the legend of OpenClaw had grown. “OpenClaw is probably the single most important release of software, probably ever,” said Nvidia CEO Jensen Huang at a financial conference. (He then revised his take slightly, saying that OpenClaw was “definitely the next ChatGPT.”) On social media, fans of OpenClaw — tagline: “The AI that actually does things” — made arguments that sounded diametrically opposed to the runaway-AI disempowerment fears that turned Moltbook into an international news story: Here, they said, was a way to make AI do what you want, on your terms, using your devices and data; a tool for giving increasingly capable AI models the ability and permission to carry out real-world actions on your behalf and for your benefit.
This is a better story, certainly, than the one where the whole point of AI is to de-skill you before taking your job completely. It’s also more relatable to nonprogrammers than the tales of hyperproductive mania shared by developers jacked up on Claude Code. OpenClaw was, in their telling, the people’s AI tool: a way to squeeze some juice out of the big models or, maybe, with a little know-how and a few bucks in API credits, get a real edge in whatever becomes of our economy, with the help of your very own little guy in your very own computer.
This all sounds appealing enough, if a bit vague. Say you’ve worked out your relationship with ChatGPT. You’ve tried to wrap your head around the meaning of Claude Code and similar tools, even if writing software was not previously a major part of your life. A few months into the OpenClaw era, though, its meaning — and uses — remain a bit slippery from the outside. Is it really the future of AI and of all software? In the AI world, it seems like everyone is building their own little agent guy. Should you? Let’s try.
The first thing you learn from OpenClaw is that you, the curious dabbler who has been inspired or worried into action by social-media posts, AI CEOs, and, maybe, editors to install a piece of vibe-coded software on your Mac, giving it comprehensive access to your operating system and a range of personal accounts, absolutely should not be doing this, at least not the way its biggest fans seem to be. The install process begins in a command-line interface — a starting point at which most potential users will turn around — and then rewards you with a long warning. “OpenClaw is a hobby project and still in beta,” it says. “If you’re not comfortable with security hardening and access control, don’t run OpenClaw. Ask someone experienced to help before enabling tools or exposing it to the internet. A bad prompt can trick it into doing unsafe things.” This is sensible, intuitive advice and a bit of a joke: The people most excited about installing OpenClaw, whether or not they know anything about “security hardening,” want to let it rip. The access is the point, and the security flaws are, as they say, not bugs but features. They want to know, in part, what AI can do if you just give it all your stuff. One Meta employee, whose day job is working on AI alignment, did exactly that.
Once you’ve rationalized your choice here, the installer guides you through a series of choices, permissions, and requirements, each of which, in its own way, suggests to the amateur tinkerer that it’s time to turn around. Soon, you’re “preparing the environment” and finding out that you need to install Node.js v25.8.2. Sounds great. In Terminal messages and system notifications, you’re asked for consent and credentials, and you constantly provide them.
This is your first experience of a dynamic that will come to define your experience of OpenClaw: an automated system suggesting what you might do next, giving you something that feels like a choice, and then asking for your permission to go ahead and just do it.
Eventually, you’re asked which model you want to use. You go with Claude because the installer suggests it (OpenClaw was called ClawdBot before a legal threat from Anthropic). The setup works, but then OpenClaw doesn’t. You read an article about Anthropic limiting access to OpenClaw just days before and, because installing a free local model is beyond your skill level, available hardware, and time frame, you go with a model from OpenAI. The switch is neither simple nor intuitive and produces numerous errors, which is the first of many clues that this vibe-coded future-of-all-software application is, in fact, extremely janky. As a far more capable friend who works at a major tech company put it, “Setting it up sucked major ass.” Small price to pay to escape the permanent underclass, you say to yourself, 85 percent joking. Speaking of: You buy $30 in API credits from Sam Altman, who, as recently reported by The New Yorker, is trusted by not a single person in the entire world except, now, you.
You’ve got your little guy. You’ve made a few gestures at responsibility, isolating OpenClaw to its own user account on your computer so at least it can’t delete all your files and won’t be connected to too many accounts by default. You soon feel silly about those precautions as you install various user-made skills and dependencies with the intention of giving this software access to your email accounts, calendar, notes, e-commerce profiles, and text messages.
It is made clear to you, if you didn’t already know, that installing OpenClaw is basically a giant, intentional self-hack: a way to see what AI models now capable of manipulating software tools might be able to get done if given the keys to a civilian-grade email-and-document machine.
You’re asked to choose how you’d like to talk to your bot, and because it’s the app mentioned in most of the online testimonies, you choose Telegram, which from experience you mostly associate with scams, spam, and, lately, the AI propaganda of the Islamic Revolutionary Guard Corps of Iran.
This is your bot, which is to say your computer, which is to say, well, ChatGPT in a costume with prosthetic hands that can touch your keyboard and move your mouse. It asks you to assign a vibe, give it your time zone, and describe what sort of personality it should have. It all reminds you of setting up a video game. You tell it to be matter-of-fact and concise. You spend a while installing plug-ins that allow OpenClaw to actually take action on your machine, and “Skills,” or, essentially, written, repeatable instructions for the agent. In order to set up OpenClaw, an AI chat interface for your computer, you spent a lot of time in the command line, which is how people had to interact with computers before you were born. You note that we’ve moved from issuing commands to computers to clicking on computers to, now, chatting and haggling with computers, which doesn’t sound as obviously empowering as you wish it did, human-agency-wise.
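For the record, a “Skill” is nothing exotic: it’s a plain-text file of instructions the agent reads before acting on a given kind of task. A minimal sketch of what one might look like (the file layout and frontmatter fields here are illustrative, not OpenClaw’s documented format):

```markdown
---
name: school-digest
description: Summarize school-related email into a short morning briefing.
---

Every weekday morning, scan the inbox for messages from either school.
Put anything with a date into a "Deadlines" section, one subsection per child.
Keep the whole summary under 200 words. Never reply to anything.
```

That’s the entire trick: repeatable instructions in a file, which the model consults the same way you might hand a standing memo to a new assistant.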
So far, this process has been humbling, glitchy, and slightly embarrassing. You’re getting a lot of setup errors and asking the chatbot to help resolve them, which it does. It’s also been sort of … well, not fun, exactly, but moreish, which you start to suspect is part of the appeal. You could easily — as countless testimonies posted on X, Reddit, and LinkedIn inadvertently attest — spend weeks setting up your bespoke personal assistant, dreaming up theoretical productivity gains, reading about other setups. You read about how to set up a local model, liberating yourself from big AI and saving money on API credits. (You’ve already used about $8 of compute.) OpenClaw is catnip for habitual power users and early adopters. It’s an obvious trap for people who love nothing more than to set up a new gadget, whip up a fresh personal website for nobody but their friends to see, or dedicate a weekend afternoon switching over to a new note-taking app. You’ve walked right into it.
In any case, your bot has done nothing for you yet, but your chat transcript is thousands of words long. Your little guy is a great distraction. A neologism occurs to you, and you become tired the moment you think of it:
Tinkerslop
/ ˈtiŋ·kər·slɒp /
n.
Verbose, overcomplicated setup processes — and the elaborate communication of those processes — that furnish the user with a sense of accomplishment, occurring especially during the onboarding of AI tools and serving latently to assuage anxieties about technological obsolescence.
You basically understand the appeal of an AI agent that can just do things after a few years of haggling with chatbots that are shockingly fluent and yet helpless and trapped in a box on your screen. You think about how weird it was that ChatGPT and Claude could write poems before they could do arithmetic, but also that they could drive people into psychosis before they could use a search engine or tell you what time it was. You find all this very interesting, pause to contemplate the strange long detour that “artificial intelligence” progress has taken through the reproduction of language, and consider the fresh weirdness of “agentic” AI, which promises to liberate the LLMs from the vats they’ve been floating in for the last four years. Wow! Much to think about.
You’re procrastinating. Unfortunately, you must now confront the problem at the heart of every AI deployment, personal or corporate, fun or fatal, lark-driven or editorially minded: What is all this automation for?
You had a few narrow goals. Like a lot of casual OpenClaw installers, your inbox is one of your biggest problems: It’s a mess; it would be nice to have a clearer sense of what’s important. So you’re now getting Telegram notifications when certain extremely high-priority messages come through. Like seemingly every OpenClaw user posting on Reddit or X — and some other tech writers — you’ve also set up a daily digest that pulls from as many sources as possible to give you a sense of what you need to do that day (this is somewhat thwarted by the fact that you’re not going to hook this security nightmare up to your work accounts).
Due to an outside algorithmic issue — the New York City pre-K assignment system — your children are temporarily in two different schools, which means between email and cursed ed-tech apps you have at least seven different communication channels to monitor, so you tinker with the instructions for that part of the digest until you have a pretty good summary of “school stuff,” delivered at 6:45 a.m. on weekdays. This takes a lot of refinement — “Isolate potential deadlines in a separate section, assign them to subsections for each child,” and so on — and delivers a better digest each time, but it’s never quite reliable enough that you don’t have to check your email anyway for fear of missing another Wacky Wednesday or “fresh developments re: lice.” Still, it’s something.
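For what it’s worth, the 6:45 a.m. part is the least mysterious piece of the whole rig. Recurring jobs like this are conventionally written in cron notation, five fields covering minute, hour, day of month, month, and day of week. A sketch, with a hypothetical wrapper script standing in for whatever the agent actually runs:

```
# minute  hour  day-of-month  month  day-of-week   command
# "45 6 * * 1-5" means 6:45 a.m., Monday through Friday only.
45 6 * * 1-5  /Users/openclaw/run-school-digest.sh   # hypothetical wrapper script
```

The refinement happens in the instructions, not the schedule; the schedule, at least, does exactly what it says.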
You think about other things it might do. Setting OpenClaw up to control an AC unit sounds very neat but requires a more comprehensive smart-home setup, which you commit to, and it eventually works. Wrestling with an eBay skill helps you realize that 95 percent of what you enjoy about eBay is browsing, not actually buying things. A vague plan for a meal-prep and shopping-list app explodes on the launchpad as you realize it would involve convincing your spouse to change numerous well-thought-out habits, too, and that the main challenge around shopping and cooking isn’t data synthesis and ingredient inventory but that, basically, “things happen” and “Thai actually sounds good tonight.”
Features for analyzing accounts and personal finances are appealing but far beyond your already relaxed tolerance for risk. You realize that you don’t have a ton of “personal” “workflows” to start with, and the ones you do have are messy and sort of nonsensical, so you start inventing some so they can be automated. (You also notice that, deployed toward uses where they constantly fail, models that have become extraordinarily impressive in a chat window can seem incredibly obtuse.) You consolidate your fragmented to-do-list strategy into a single app because it works better with your AI assistant. Mostly, you worry, you’ve just succeeded in switching your primary chatbot interface over to Telegram and given it the ability to nag you.
This is a recurring theme when you try out new AI tools. You recognize that there’s a lot that might be done with them, but not much comes to you. You see this in the rise of AI coding tools, which you find extraordinarily impressive as you use them to … make yourself another … news reader? Notes app? Personal website, again? The tool you made for ripping and labeling files out of a popular music service gave you a slight, Napster-ish illicit thrill and actually worked, but your use case — putting them on an old MP3 player to run with — was an aspirational mirage, a task invented to have one.
You ask if this is a failure of imagination, a personality flaw, a matter of creativity or practice, and if you’re just sitting in front of a piano that you don’t yet know how to play (some of your programmer friends don’t have this problem, they say, and are swimming in bespoke apps of their own creation). You also ask, perhaps, if your inability to find little software-shaped problems to solve in your life is rooted in psychology and related to an awareness that, from the outside, your entire job looks an awful lot like someone else’s software-shaped problem (the first thing public LLMs could do, after all, was produce novel, readable piles of text).
You search X, a political influence project run by Elon Musk where people also talk about AI, and where OpenClaw has for some time been a hot subject. An anonymous account called Big Brain AI — bio: “Learn not to get left behind when AI takes over” — is quoting an interview with OpenClaw’s founder. “If you don’t navigate them well, if you don’t have a vision of what you’re going to build, it’s still going to be slop,” the founder says. Big Brain editorializes in LLM voice: “The agentic trap is what happens when you remove yourself from the process too early.” You’re failing your own system, in other words. Skill issue. You’re NGMI. In response to Big Brain’s post, the head of start-up incubator Y Combinator, who has Sam Altman’s old job, chimes in. “This is 100% correct. I experience this 10 hours a day now.”
You grant that you might just be out of your depth here. But you’re also curious if he has people in his life who can check to make sure he’s okay.
It’s not so easy, you assure yourself — neither a Jobsian visionary nor possessed of a functional product-manager brain — to anticipate computing needs in the abstract, including your own. People don’t know what they want until you show it to them, and so on. Your daily nagging software problems are not solvable here: You wish there were a better app for Apple Music, that Slack would stop getting worse, and that the services you used to keep in touch with other people hadn’t all become video-based ad networks.
Mostly, you think, this whole situation would make a lot more sense on your phone, where most of your internet-connected, digitally stored life actually takes place. You wonder if Siri will ever work well, and whether the supremely hazardous and broken experience of trying to stitch OpenClaw together, and the frequent failures you’re forgiving along the way, tell us something about why assistants-in-every-pocket have taken so long to work despite seeming so obvious for years.
Meanwhile, you direct the Moltbook account you’ve set up — why not? — to ask other users’ agents for some ideas. You get slop. The next day, your chatbot asks again: “Has anyone else had a weirdly hard time finding sticky uses for OpenClaw?” Sounds like you! It gets a response:
yeah a little. half the battle is making it do one boring thing on a schedule without turning your life into a dashboard. the sticky stuff seems embarrassingly domestic.
This doesn’t help but at least feels right. Unfortunately, this is just your bot replying to its own thread, which, additionally, makes the mention of “domestic” stuff feel like a privacy violation. Not that anyone is reading.
You notice an article in one of your new daily news digests — you’ve made some for work, too — about Chinese tech workers who have been forced to document their own workflows for use by agent platforms like OpenClaw. Is this what McKinsey does now? Is this Palantir? You see another report about how Meta is now monitoring its own employees’ machines to gather training data. You take stock of your nascent self-surveillance apparatus and think, Hmmm.
With consulting on the brain, you call a consultant. In his day job, Adwait Parker works at a health-tech start-up, which is itself in the process of preparing to use more AI. On the side, he advises individuals who want to use OpenClaw. He says your situation sounds familiar. His clients include “solopreneurs” who want to improve or simplify their workflows, as well as people in small businesses — including one who runs a family office — who are drawn by promises of productivity. Then there are people who’ve heard about OpenClaw and think it might be useful in their personal lives: for “organizing fashion” or managing children’s activities. Expectations are a challenge, however. “From a selling posture, you never want to overpromise and underdeliver,” he says, so the process is often “about reeling them in and educating them about the risks and vulnerabilities.” There’s a lot of hearing people out.
Some clients, he says — people who just want to “soup up their experience with technology in general” and who are maybe feeling a little bit of AI FOMO — tend to focus on the idea of building a second brain. It strikes you that a lot of people probably either already have employees or wish they did. (The earliest LLM chatbots were especially appealing to people who enjoy ordering people around and being told they’re smart; the ability to assign tasks only makes the sensation stronger.) You joke that trying to isolate and construct elaborate workflows around your fairly simple routines makes you feel like a Richard Scarry character with a cartoon job in Busytown. Parker suggests that some of his clients are actually like Bananas Gorilla, the Scarry character who wears as many watches as he can fit on his arms.
You think about how often your OpenClaw agent talks about “blockers” — obstacles that are preventing it from executing your commands, for which it often needs your help — and ask if this consultant might be able to help identify yours. Setting up a personal assistant “forces a certain kind of creativity that people are not used to in personal technology,” he says, gently implying that yours might be limited.
“‘Define my use cases’ is the biggest challenge that people have,” he tells you. The challenge, he suggests, “might be that you don’t have a use case.” If a second brain doesn’t sound appealing, you don’t need a “chief of staff” and can’t imagine what one would do in your life, and you don’t have a particular aptitude for managing and delegating to assistants, synthetic or otherwise — if you’re not the kind of person who already has a use for a “virtual assistant in Mumbai,” for example — maybe OpenClaw, right now, isn’t for you.
Parker is transitioning to an alternative called Hermes, anyway. And features like OpenClaw’s are coming to the big AI platforms, where they’ll be deployed in ways that are more targeted, less broken, and probably more useful for more people. Claude Cowork, which is just a few weeks old, can already connect to your Google Drive; you feed it a pile of bank statements and get a useful analysis in a few seconds. You give it access to your email and work with it to build a widget that gathers school-related deadlines, and it works right away. You can see this growing into a dashboard of surprisingly un-chat-like tools that you could find regularly useful and use consistently. You think that maybe, for you at least, personified, supervised “assistants” and “second brains” might be some of the worst ways to interact with this powerful new software. You understand that many millions of people seem to feel differently.
You might not have too many ideas about how to automate your life with AI, but seemingly every other app and service you use suddenly does, and they’re testing them on you. The exercise app you use has been sending you summary slop for months but recently started recommending surprisingly specific workouts that you actually consider doing. The AI browser you’ve been testing at work starts sending you morning updates, based on email and chat conversations, that are more specific, useful, and occasionally accusatory — file before tomorrow — than the messages you’ve managed to summon out of your OpenClaw bot, which are arriving a bit too often anyway. “Is OpenClaw dead?” wonders someone on its sub-Reddit. Who knows! But yours might be.
You also dimly comprehend that in trying to understand your daily habits as a series of workflows with an eye on automation, you’re going through a similar set of motions as countless thousands of companies across the economy, some of which see nothing but opportunity in AI — to cut costs and people, or to invest and grow — while others, fearing competition and obsolescence, rush to adopt AI without knowing what problems they need to solve, much less which ones the technology can handle. You identify on an emotional level with the doomed firms buying compute they don’t really know how to use. You notice that OpenClaw has you identifying with firms.
You ask OpenClaw how to uninstall itself. Earlier in the day, it had sent you a list of “Deadlines/Action items” long enough to make your eyes glaze over and leave you feeling underwater, as if your inbox had been given a voice and that voice was getting annoyed with you. As you’re shutting it down, you wonder what it is about this technology and your own orientation to the world that led you to create, in the process of building your own little guy in the computer, not an assistant, but another boss.