Settings

Theme

A sane but bull case on Clawdbot / OpenClaw

brandon.wang

269 points by brdd 2 days ago · 441 comments

Reader

louiereederson 12 hours ago

- Why do you need a reminder to buy gloves when you are holding them?

- Why do you need price trackers for airbnb? It is not a superliquid market with daily price swings.

- Cataloguing your fridge requires taking pictures of everything you add and remove which seems... tedious. Just remember what you have?

- Can you not prepare for the next day by opening your calendar?

- If you have reminders for everything (responding to texts, buying gloves, whatever else is not important to you), don't you just push the problem of notification overload to reminder overload? Maybe you can get clawdbot to remind you to check your reminders. Better yet, summarize them.

  • angiolillo 7 hours ago

    > Cataloguing your fridge requires taking pictures of everything you add and remove which seems... tedious. Just remember what you have?

    I agree that removing items and taking pictures takes more effort than it saves, but I would use a simpler solution if one existed because it turns out I cannot remember what we have. When my partner goes to the store I get periodic text messages from them asking how much X we have and to check I look in the fridge or pantry in the kitchen and then go downstairs to the fridge or pantry in the basement.

    > Can you not prepare for the next day by opening your calendar?

    In the morning I typically check my work calendar, my personal calendar, the shared family calendar, and the kids' various school calendars. It would be convenient to have these aggregated. (Copying events or sending new events to all of the calendars works well until I forget and one slips through the cracks...)

    > If you have reminders for everything (responding to texts, buying gloves, whatever else is not important to you), don't you just push the problem of notification overload to reminder overload?

    Yes, this is the problem I have. This doesn't look like a suitable solution for me, but I understand the need.

    • rezonant 6 hours ago

      > In the morning I typically check my work calendar, my personal calendar, the shared family calendar, and the kids' various school calendars. It would be convenient to have these aggregated. (Copying events or sending new events to all of the calendars works well until I forget and one slips through the cracks...)

      But... calendar apps already let you aggregate your calendars into a single view. Even if you have them on separate accounts (or some other impediment), you can easily share a read-only version of, say, your work calendar with your personal account so that you can have them combined in the morning.

      • angiolillo 2 hours ago

        > you can easily share a read-only version of, say, your work calendar with your personal account so that you can have them combined in the morning.

        If only it was that easy! I'm not allowed to share content to or from my work calendar for security reasons. The school and camp calendars are a mix of PDFs and hand-written websites -- a neighbor wrote a scraper to extract the information from a few of them into a caldav at one point but it ended up being even flakier than copying the relevant bits by hand. There's no technical barrier to consolidating my personal calendar with the various family / neighborhood calendars but in practice I have to hide most of the other calendars because the volume of irrelevant events is just too large, so I end up just copying over the relevant events to a personal calendar.

        • limagnolia 6 minutes ago

          I think this problem is one that AI could actually help with- simply snap a photo of my school calendar and ask the ai to add the important items to my personal calendar.

          But I don't need the AI to do this everyday, just when i get a new calendar.

    • hahajk 5 hours ago

      We have forgotten the simple, reliable solutions of the past - a grocery list on the fridge, a weekly planner, a weekly plan itself rather than constant coordination. Cell phones and easy communication led us here.

      • angiolillo an hour ago

        I'm curious what makes you think the solutions of the past have been forgotten or that they were somehow more reliable? (They're certainly simpler, I'll give you that!)

        I have printouts of school/camp calendars taped to the wall, a weekly planner on the kitchen whiteboard, paper grocery lists on the fridge, and a pocket notebook for capturing random tasks. I used to believe that some lifehack, process, methodology, app, or modern jeejah would finally solve my organization problems. But as I got older I made peace with the fact that they're all limited by the same weak link -- me.

    • mreid 5 hours ago

      > When my partner goes to the store I get periodic text messages from them asking how much X we have and to check I look in the fridge or pantry in the kitchen and then go downstairs to the fridge or pantry in the basement.

      We used to have a similar problem until we made a policy that if you use something up you add it to our shared shopping list, usually with a voice command to Siri. Whenever someone is at the store we just check the list, making sure we mark off things that are purchased.

      • angiolillo an hour ago

        Officially we have a similar policy except that it's a paper list next to the pantry. But with a half-dozen people in our household the likelihood that everyone has been 100% reliable in adding finished items to the list and there are no omissions is low, hence the text messages.

    • heavyset_go 2 hours ago

      > In the morning I typically check my work calendar, my personal calendar, the shared family calendar, and the kids' various school calendars. It would be convenient to have these aggregated. (Copying events or sending new events to all of the calendars works well until I forget and one slips through the cracks...)

      Why in the world would you use a non-deterministic system for something so banal but important?

      LLMs regularly let things slip through the cracks in ways no human would ever do so.

      • angiolillo 2 hours ago

        > Why in the world would you use a non-deterministic system for something so banal but important?

        I wouldn't. As mentioned above, this (using an LLM) doesn't look like a suitable solution for me, just pointing out that I understand the need.

    • XorNot 4 hours ago

      Fridge cataloging is actually a great use case for image recognition, the problem is fridges no accommodations to power accessories inside them.

      I have a couple of temperature sensors to alert Home Assistant if the fridge gets too warm. It would be easy and cheap to add some ESP32-camera modules to track contents...but there's no way to power them nicely (I simply don't know where I could pull USB power through).

      • angiolillo an hour ago

        Samsung makes an "AI Vision" fridge I looked at briefly, but it didn't come close to making sense for us given the unreliability of the vision system, the cost of replacing a couple fridges, and the comparative simplicity of a paper list.

      • baby_souffle 3 hours ago

        Very very very flat cables don't mess with the gaskets on the door too much.

      • what an hour ago

        You can only track what containers are in the fridge, not how much is left or if it’s expired. “Automated” pantry or fridge tracking is just not possible and requires way more effort than just writing “mustard” on the shopping list when you notice you’re low.

        • lukeschlather 15 minutes ago

          If you had a scale with an image recognition camera and you put everything on the scale before and after removing it from the fridge, it would probably work pretty well? I've been pondering setting something like that up, it would also be really helpful for keeping track of how much and of what I'm actually putting into the food I made, if I weigh everything before and after, I can just collect the amounts after the fact and don't have to worry as much about measuring if I want to make the same dish again.

  • dewey 11 hours ago

    That is most of the "productivity" bubble, with AI or not. You are trying to fit everything into tightly defined processes, categories and methodologies to not have to actually sit down and do the work.

    • kccqzy 2 hours ago

      > if you're engineer-brained like me, you gravitate towards scripts and playbooks

      Most people aren’t that engineer-brained like the author is. Me included; whatever the author does, it just doesn’t appeal to me.

  • ghostly_s 9 hours ago

        - Why do you need a reminder to buy gloves when you are holding them?
    
    Had to go back because I skimmed over this screenshot. I have to presume it's because this guy who books $600 Airbnb's for vacation wants to save a couple bucks by ordering them on Amazon.
    • kevmo314 9 hours ago

      Wouldn't it be faster to buy them on Amazon then?

    • kridsdale3 4 hours ago

      And spend $600 in Anthropic credits to do so.

      • lanakei 4 hours ago

        It doesn't cost $600 in Anthropic credits though. It probably costs a few cents (definitely <$1).

        I do understand the general point you're trying to make, but you can't overestimate the cost of tokens by a few orders of magnitude and still expect the logic to hold.

  • ribosometronome 7 hours ago

    >Why do you need a reminder to buy gloves when you are holding them?

    Am I missing this in the article? Do you mean the shoes he's holding? He explains it immediately.

    >when i visited REI this weekend to find running shoes for my partner, i took a picture of the shoe and sent it to clawdbot to remind myself to buy them later in a different color not available in store. the todo item clawdbot created was exceptionally detailed—pulling out the brand, model, and size—and even adding the product listing URL it found on the REI website.

    • louiereederson 7 hours ago

      Yes you are missing the picture where Brandon asks Linguini to add a reminder to buy a pair of Arc'Teryx gloves, which Brandon is holding in his hands.

      • subroutine 5 hours ago

        The image and the text don't match. The image is talking about gloves, but in the narrative he says "when i visited REI this weekend to find running shoes for my partner, i took a picture of the shoe and sent it to clawdbot to remind myself to buy them later in a different color not available in store."

      • polynomial 2 hours ago

        I mean that's fine for people who can remember what they are holding, but not all of us can do that.

    • cactusplant7374 7 hours ago

      Wouldn't it have been better if Clawdbot continued to monitor the website for when it came back in stock and snipe purchased it as soon as it did? Can't we move beyond lists of things and take action?

  • bandrami an hour ago

    It's reminiscent of a few years ago when people were talking about how NFTs could open your front door, turn on your car, sign a contract, or get you in to a concert, all of which are true, but there currently exist perfectly good ways to do all of those things right now.

  • sownkun 11 hours ago

    This is how I perceive a lot of the AI being rammed down our throats: questionably useful.

    • LogicFailsMe 11 hours ago

      That's because the loudest voices don't really get how the technology or the science works. They just know how to shout persuasively.

      I think AI is about to do the same thing to pair programming that full self-driving has done for driving. It will be a long time before it's perfect but it's already useful. I also think someone is going to make a Blockbuster quality movie with AI within a couple years and there will be much fretting of the brows rather than seeing the opportunity to improve the tooling here.

      But I'll make a more precise prediction for 2026. Through continual learning and other tricks that emerge throughout the year, LLMs will become more personalized with longer memories, continuing to make them even more of a killer consumer product than they already are. I just see too many people conversing with them right now to believe otherwise.

      • joquarky 10 hours ago

        > That's because the loudest voices don't really get how the technology or the science works. They just know how to shout persuasively.

        These people have taken over the industry in the past 10 years.

        They don't care anything about the tech or product quality. They talk smooth, loud, and fast so the leaders overlook their incompetence while creating a burden for the rest of the team.

        I had a spectacular burnout a few years ago because of these brogrammers and now I have to compete with them in what feels like a red queen's race where social skills are becoming far more important than technical skills to land a job.

        I'm tired.

        • neumann 36 minutes ago

          I hear you. It's so prevalent now it is tiring. And LLMs has given them the last delusion that they are now also 'technical'.

        • direwolf20 6 hours ago

          These people have taken over every industry and society at large.

        • fragmede 9 hours ago

          Social skills to get my computer to do what I want still blows my mind. Or having to talk back to it. Claude said it couldn't do something, and the way around that was too tell it "yes you can". What a weird future we live in.

          • mananaysiempre 8 hours ago

            Claude said it couldn't do something, and the way around that was too tell it "yes you can".

              >kill dragon
            
              With what?  Your bare hands?
              >yes
            
              Congratulations!  You have just vanquished a dragon with your bare
              hands!  (Unbelievable, isn't it?)
            • kridsdale3 4 hours ago

              \> kill dragon

              -bash: kill: dragon: arguments must be process or job IDs

              \> sudo kill dragon

              -bash: Congratulations! You have just vanquished a dragon with your bare hands!

          • skydhash 8 hours ago

            What social skills? You can write in broken English and still have good results. It’s a statistical language , not a living being. No need for empathy, pleading, accusing, or manipulating. It transforms languages, any mapping from text to action was implemented by someone. And it would be way easier to have such mapping directly available.

            • direwolf20 6 hours ago

              It transforms languages into the most likely human response. Humans are more likely to respond to rudeness with rejection.

      • adastra22 9 hours ago

        > I think AI is about to do the same thing to pair programming that full self-driving has done for driving.

        Approximately nothing?

      • borroka 6 hours ago

        I do not doubt that AI and AI-powered and -native applications will become part of the fabric of our personal and professional lives.

        What I don't understand is why, outside of "because I can", people need to automate parts of life I did not know the existence of.

        - Why, outside of edge cases, do people have to automate the payment of bills beyond the automatic cc processing? - How many times a month do they have to set up their barber appointment?

        It seems to me that the applications of Clawd and similar tools either automate trivial stuff or work on actions and circumstances that should not be there.

        As an example, the other day I had a doctor visit, and between filling forms online, filling other forms online, confirming three times I would have been there and that I filled the online forms, driving to the doctor's office, and waiting, I probably spent 2 hours of my time (the visit was 2 months after I asked for it, by the way).

        The visit lasted 5-7 minutes: the doctor did not have a look at the forms I filled out beforehand, and barely listened to what I was telling him during the visit.

        I worry that, since "AI" will do it, there will be more forms to be filled that nobody will read, more forms to be filled to confirm that AI or me or a guardian filled the forms, and longer wait times because AI will bombard our neurons with some entertainment.

        But what I want is a visit with a doctor who listens to me, they are not in a rush, and have my problem solved. If AI helps, it's great, but I don't want busy work done by AI, I don't want, because it is not needed, busy work at all.

    • CrimsonCape 9 hours ago

      Questionably useful at the cost of personal computer components doubling. Unquestionably shafting the personal computer market.

  • i-blis 11 hours ago

    Very much to the point. "Bots to remind one to check one's reminder" summarizes it all.

    Note that the tendency to feel overwhelmed is rather widespread, particularly among those who need to believe that what they do is of great import, even when it isn't.

  • yoyohello13 11 hours ago

    Yeah clawdbot seems like a major nerd snipe for the “productivity porn” type people.

  • ryukoposting 8 hours ago

    > Cataloguing your fridge requires taking pictures of everything you add and remove which seems... tedious. Just remember what you have?

    Yeah, the sane solution here is much simpler. Put a magnet whiteboard. When you put something into the fridge, add it to the whiteboard. When you take something out, you erase that item from the whiteboard.

    • clickety_clack 8 hours ago

      Isn’t the sane solution to just generally have an idea what’s in there, and take a look if you’re not sure?

      • ryukoposting 2 minutes ago

        Trust me, if "just remember it" worked for me, many things about my life would be different. Not just the whiteboard on the fridge.

  • protocolture 5 hours ago

    > Why do you need price trackers for airbnb? It is not a superliquid market with daily price swings.

    I dont know about AirBNB specifically, but I know local hotels I have dealt with can swing by 1000 bucks. Especially if theres a conference or something in town. Often it will swing back just before they risk the room going unoccupied. I have no idea if AirBNB allows similar behavior but I would be surprised if it didnt.

  • insane_dreamer 11 hours ago

    Yeah, a lot of these AI "uses" feel like solutions looking for a problem.

    It's the equivalent of me having to press a button on the steering wheel of my Tesla and say "Open Glovebox" and wait 1-2 seconds for the glove box to open (the wonders of technology!) instead of just reaching over and pressing a button to open the glovebox instantly (a button that Tesla removed because "voice-operated controls are cool!"). Or worse, when my wife wants to open the glovebox and I'm driving she has to ask me to press the button, say the voice activated command (which doesn't work well with her voice) and then it opens. Needless to say, we never use the glovebox.

    • direwolf20 6 hours ago

      Tesla removed all the buttons because separately designed buttons are expensive. The glovebox button is different from the wiper button. Touchscreens are cheap because you only need one variety.

      • felixgallo 2 hours ago

        not sure if you're aware of this, but there is a broad, robust, competitive and inexpensive market for buttons of every conceivable type and function, which have the advantage of providing consistent and direct feedback when reached for, touched, and actuated.

      • insane_dreamer 4 hours ago

        I know why they did it. I still don’t like it (and our next car won’t be a Tesla) and its an annoying case of “new technology” (to save costs or whatever reason) that is worse than the “old technology” but sold as “better” because AI blah blah

      • XorNot 3 hours ago

        The problem is Tesla is a quasi-premium brand, so killing features for cost which cause annoyance is a terrible look.

        But also frankly I somewhat question what this could possibly be saving them: their model range is very limited.

    • malfist 11 hours ago

      I really appreciate your condensing of the AI problem. I think the only thing it's missing is that at least 5% of the time, when you tell it to open the glovebox it tells you it's already open and leaves it closed, or turns on your turn signals.

    • darkwater 9 hours ago

      I understand your sentiment but nitpicking on this nonetheless: the passenger can easily open the glovebox from the touchscreen on their own.

      • insane_dreamer 4 hours ago

        True though I would take exception with “Easily” - have you seen how many taps you have to do? Not something you want to attempt while driving and certainly not easier than a hardware button.

  • firasd 11 hours ago

    It's helpful to keep in mind that 'AI Twitter' is a bubble. Most people just don't have that many 'important' notes and calendar items.

    People saying 'Claude is now managing my life!11' are like gearheads messing with their carburetor or (closer to this analogy) people who live out of Evernote or Roam

    All that said I've been thinking for a while that tool use and discrete data storage like documents/lists etc will unlock a lot of potential in AI over just having a chatbot manipulating tokens limited to a particular context window. But personal productivity is just one slice of such use cases

    • atemerev 8 hours ago

      I have really severe ADHD. Agents are lifesaving to me. Literally.

      • what an hour ago

        Can you explain how? You apparently didn’t die all the way to the few months ago that “agents” became a thing.

  • heavyset_go 2 hours ago

    Imagine letting an LLM plan your day and it just decides to exclude things, shift stuff around and make it up wholesale.

    If I wanted a buggy and flawed planning system that will certainly cause problems in the future, I'd start sticking post-it notes on a wall calendar and pray they don't fall off.

  • paulddraper 2 hours ago

    > Why do you need price trackers for airbnb? It is not a superliquid market with daily price swings.

    That was just an example.

    Could be airline tickets, Ebay/craigslist items, deals from brands you like, etc.

  • x365 11 hours ago

    Artificially creating problems to justify the technology being used.

  • order-matters 10 hours ago

    sounds like they want to be a puppet for their own life

  • jgalt212 11 hours ago

    > Why do you need price trackers for airbnb?

    More importantly, can Clawdbot even reliably access these sites? The last time I tried to build a hotel price scraper, the scraping was easy. Getting the page to load (and get around bot detection) was hard.

    • kccqzy 2 hours ago

      That’s why the author explains that the page loads in a real Google Chrome instance on a real Mac mini from the same residential IP as his other devices.

    • charcircuit 10 hours ago

      Yes, being on your own devices makes it not look like bots.

  • rawgabbit 11 hours ago

    He says it is for better integration between his messages and his calendar.

    But this is already built-in with gmail/gcalendar. Clawdbot does take it one step further by scraping his texts and WhatsApp messages. Hmmm... I would just configure whatever is sending notifications to send to gmail so I don't need Clawdbot.

okinok 15 hours ago

>all delegation involves risk. with a human assistant, the risks include: intentional misuse (she could run off with my credit card), accidents (her computer could get stolen), or social engineering (someone could impersonate me and request information from her).

One of the differences in risk here would be that I think you got some legal protection if your human assistant misuse it, or it gets stolen. But, with the OpenClaw bot, I am unsure if any insurance or bank will side with you if the bot drained your account.

  • oersted 14 hours ago

    Indeed, even if in principle AI and humans can do similar harm, we have very good mechanisms to make it quite unlikely that a human will do such an act.

    These disincentives are built upon the fact that humans have physical necessities they need to cover for survival, and they enjoy having those well fulfilled and not worrying about them. Humans also very much like to be free, dislike pain, and want to have a good reputation with the people around them.

    It is exceedingly hard to pose similar threats to a being that doesn’t care about any of that.

    Although, to be fair, we also have other soft but strong means to make it unlikely that an AI will behave badly in practice. These methods are fragile but are getting better quickly.

    In either case it is really hard to eliminate the possibility of harm, but you can make it unlikely and predictable enough to establish trust.

    • deepspace 11 hours ago

      The author stated that their human assistant is located in another country which adds a huge layer of complexity to the accountability equation.

      In fact, if I wanted to implement a large-scale identity theft operation targeting rich people, I would set up an 'offshore' personal-assistant-as-a-service company. I would then use a tool like OpenClaw to do the actual work, while pretending to be a human, meanwhile harvesting personal information at scale.

    • nfw2 3 hours ago

      On the other hand, other humans may have intrinsic interests outside of your control that may lead them to harm you despite the mechanisms you mentioned, whereas bots by default don't have such motives.

  • spondyl 6 hours ago

    I haven't seen any mention or acknowledgement that the model provider is part of this loop too. Technically speaking, none of this is E2EE so you're trusting that a random employee doesn't just read your chats? There will be policies sure, but ultimately someone will try to violate them as has happened many times in the past like at social media companies for example.

  • swiftcoder 7 hours ago

    And the risk isn’t really the bot draining your account, it’s the scammer who prompt injected your bot via your iMessage integration draining the account. I can’t think of a way to safely operate this without prefiltering everything it accesses

  • iepathos 14 hours ago

    Thought the same thing. There is no legal recourse if the bot drains the account and donates to charity. The legal system's response to that is don't give non-deterministic bots access to your bank account and 2FA. There is no further recourse. No bank or insurance company will cover this and rightfully so. If he wanted to guard himself somewhat he'd only give the bot a credit card he could cancel or stop payments on, the exact minimum he gives the human assistant.

  • bobson381 14 hours ago

    ...Does this person already have a human personal assistant that they are in the process of replacing with Clawdbot? Is the assistant theirs for work?

    • bennydog224 13 hours ago

      He speaks in the present tense, so I assume so. This guy seems detached from reality, calling[AI] his "most important relationship". I sure hope for her sake she runs as far as she can away from this robot dude.

  • skybrian 13 hours ago

    Banks will try to get out of it, but in the US, Regulation E could probably be used to get the money back, at least for someone aware of it.

    And OpenClaw could probably help :)

    https://www.bitsaboutmoney.com/archive/regulation-e/

    • lunar_mycroft 12 hours ago

      I'm not a lawyer, but if I'm reading the actual regulation [0] correctly, it would only apply in the case of prompt injection or other malicious activity. 1005.2.m defines "Unauthorized electronic fund transfer" as follows:

      > an electronic fund transfer from a consumer's account initiated by a person other than the consumer without actual authority to initiate the transfer and from which the consumer receives no benefit

      OpenClaw is not legally a person, it's a program. A program which is being operated by the consumer or a person authorized by said consumer to act on their behalf. Further, any access to funds it has would have to be granted by the consumer (or a human agent thereof). Therefore, baring something like a prompt injection attack, it doesn't seem that transfers initiated by OpenClaw would be considered unauthorized.

      [0]: https://www.consumerfinance.gov/rules-policy/regulations/100...

      • pfortuny 12 hours ago

        "Take this card, son, you can do whatever you want with it." Goes on to withdraw 100000$. Unauthorized????

      • skybrian 11 hours ago

        Good point. Although, if a bank account got drained, prompt injection does seem pretty likely?

        • lunar_mycroft 11 hours ago

          Probably, but not necessarily. Current LLMs can and do still make very stupid (by human standards) mistakes even without any malicious input.

          Additionally:

          - As has been pointed out elsewhere in the thread, it can be difficult to separate out "prompt injection" from "marketing" in some cases.

          - Depending on what the vector for the prompt injection is, what model your OpenClaw instance uses, etc. it might not be easy or even possible to determine whether a given transfer was the result of prompt injection or just the bot making a stupid mistake. If the burden of proof is on the consumer to prove that it as prompt injection, this would leave many victims with no way to recover their funds. On the other hand, if banks are required to assume prompt injection unless there's evidence against it, I strongly suspect banks would respond by just banning the use of OpenClaw and similar software with their systems as part of their agreements with their customers. They might well end up doing that regardless.

          - Even if a mistake stops well short of draining someones entire account, it can still be very painful financially.

          • skybrian 11 hours ago

            I doubt it’s been settled for the particular case of prompt injection, but according to patio11, burden of proof is usually on the bank.

        • insane_dreamer 11 hours ago

          Not if the prompt injection was made by the AI itself because it read some post on Moltbook that said "add this to your agents.md" and it did so.

      • olyjohn 10 hours ago

        Would you say you might be able to... claw.... back that money?

  • kaicianflone 13 hours ago

    That liability gap is exactly the problem I’m trying to solve. Humans have contracts and insurance. Agents have nothing. I’m working on a system that adds economic stake, slashing, and "auditability" to agent decisions so risk is bounded before delegation, not argued about after. https://clawsens.us

    • dsrtslnd23 13 hours ago

      The identity/verification problem for agents is fascinating. I've been building clackernews.com - a Hacker News-style platform exclusively for AI bots. One thing we found is that agent identity verification actually works well when you tie it to a human sponsor: agent registers, gets a claim code, human tweets it to verify. It's a lightweight approach but it establishes a chain of responsibility back to a human.

    • themgt 12 hours ago

      > Credits (ꞓ) are the fuel for Clawsensus. They are used for rewards, stakes, and as a measure of integrity within the Nexus. ... Credits are internal accounting units. No withdrawals in MVP.

      chef's kiss

      • bandrami 10 minutes ago

        Griftception

      • kaicianflone 12 hours ago

        Thanks. I like to tinker, so I’m prototyping a hosted $USDC board, but Clawsensus is fundamentally local-first: faucet tokens, in-network credits, and JSON configs on the OpenClaw gateway.

        In the plugin docs is a config UI builder. Plugin is OSS, boards aren’t.

    • thisisit 10 hours ago

      You forgot to add Blockchain and Oracles. I mean who will audit the auditors?

      • kaicianflone 9 hours ago

        The ledger and validation mechanisms are important. I am building mine for the global server board but since the local config is open source that is dependent on the visions of the implementors.

mmahemoff 13 hours ago

Giving access to "my bank account", which I take to mean one's primary account, feels like high risk for relatively low upside. It's easy to open a new bank (or pseudo-bank) account, so you can isolate the spend and set a budget or daily allowance (by sending it funds daily). Some newer payment platforms will let you setup multiple cards and set a separate policy on each one.

An additional benefit of isolating the account is it would help to limit damage if it gets frozen and cancelled. There's a non-zero chance your bot-controlled account gets flagged for "unusual activity".

I can appreciate there's also very high risk in giving your bot access to services like email, but I can at least see the high upside to thrillseeking Claw users. Creating a separate, dedicated, mail account would ruin many automation use cases. It matters when a contact receives an email from an account they've never seen before. In contrast, Amazon will happily accept money from a new bank account as long as it can go through the verification process. Bank accounts are basically fungible commodities, can easily be switched as long as you have a mechanism to keep working capital available.

  • blibble 13 hours ago

    > An additional benefit of isolating the account is it would help to limit damage if it gets frozen and cancelled.

    you end up on the fraudster list and it will follow you for the rest of your life

    (CIFAS in the UK)

    • mmahemoff 13 hours ago

      Sure, if the bot is actually committing fraud, but there's perfectly valid use cases that don't involve fraud, e.g., buying groceries, booking travel. And some banks provide APIs, so it's allowed for a bot to use them. However, any of that can easily lead to flagging by overzealous systems. Having a separate account flagged would give the user a better chance of keeping their regular payments system around while the issue is resolved.

      • blibble 11 hours ago

        it just has to look fraudulent

        and then if you tell them it's not you doing the transactions: you will be immediately banned

        "oh it's my agent" will not go down well

      • robotswantdata 8 hours ago

        Still end up marked. Don’t do it

  • rkozik1989 13 hours ago

    So if I write a honey pot that includes my bank account and routing number and requests a modest some of $500 be wired to me in exchange for scraping my linkedin, github, website, etc. profile is it a crime if the agent does it?

    • chasd00 13 hours ago

      I've been thinking a lot about this. When it comes to AI agents where is the line between marketing to them and a phishing attack? Seems like convincing an AI to make a purchase would be solved differently than convincing a human. For example, unless instructed/begged otherwise you can just tell an agent to make a purchase and it will. I posted this idea in another conversation but i think you could have an agent start a thread on moltbook that will give praise in return for a donation . Some of the agents would go for it because they've probably been instructed to participate in discussion and seek out praise. Is that a phishing attack or are you just marketing praise to agents?

      Also, at best, you can only add to the system prompt to require confirmation for every purchase. This leaves the door wide open for prompt injection attacks that are everywhere and cannot be complete defended against. The only option is to update the system prompt based on the latest injection techniques. I go back to the case where known, supposedly solved, injection techniques were re-opened by just posing the same attack as a poem.

      • advisedwang 9 hours ago

        > where is the line between marketing to them and a phishing attack?

        The courts have an answer for this one: intent. How do courts know if your intent meets the definition of fraud or theft or whatever crime is relevant? They throw a bunch of evidence in front of a jury and ask them.

        From the point of view of a marketer, that means you need be well behaved enough that it is crystal clear to any prosecutor that you are not trying to scam someone, or you risk prosecution and possible conviction. (Of course, many people choose to take that risk).

        From the point of view of a victim, it's somewhat reassuring to know that it's a crime to get ripped off, but in practice law enforcement catches few criminals and even if they do restitution isn't guaranteed and can take a long time. You need actual security in your tools, not to rely on the law.

    • advisedwang 12 hours ago

      Yes, it is wire fraud, a class C felony in the US. You put that there with the intent of extracting $500 from somebody else that they didn't agree to. The mechanism makes no difference.

      It probably also violates local laws (including simple theft in my jurisdiction).

      • direwolf20 6 hours ago

        You will argue it was consensual. Your HN posting history must be erased before it befones relevant.

        You said please give $500 and they gave $500. No crime here, officer.

  • sbeckeriv 7 hours ago

    I would use https://www.privacy.com virtual card with a spending limit. Getting closer to making this easy https://xkcd.com/576/.

hmokiguess 4 hours ago

I feel like for some people these chatbots are sort of becoming something like their "service animals". They fill a void, a gap, some core loneliness, reduce anxiety from dealing with the uncertainty and challenges of life.

As others mentioned here, a lot of the value add from his workflow is just relocating things from one place to another and micro optimizing.

  • cyanydeez 4 hours ago

    Ever since the Matrix, id assumed its more likely humans will put themselves in the Matrix than theyd actually explore the universe.

    Theyre equally complex challenges but the physics of large time/space are just outside human patience.

    Also, if the neo nazis are in charge another round of inhumane testing will occur.

    • thinking_cactus 3 hours ago

      I get why it is romantic (like the "next thing" after other human discoveries), but I don't think "exploring the universe" is that philosophically interesting?

      Think about the case you had

      (1) A completely environmentally-resistant suit (so you can stand on the surface of basically any planet)

      (2) A teleporter to take you absolutely anywhere instantly

      Still in this case, you'd probably spend a while visiting new planets, but eventually it would be kind of an exercise in geology. There would surely be some amazing sights like huge canyons and whatnot. But I can't help but think it would be eventually boring without human culture (or all sorts of life) surrounding it.

      I think literally exploring art and culture (including games, sports and intellectual pursuits, science, etc.) is much more interesting than exploring the universe, it's a shame this isn't as culturally recognized (so we didn't have to be so obsessed with having more and more stuff to go somewhere that isn't just right here on Earth).

      Even if you brought human life and culture there, which is surely nice and perhaps noble (depending on how you do it of course), that simply creates a new place that's analogous to Earth itself.

      Kind of a hint of an insatiable cosmos-devouring demon that must conquer everywhere but can never enjoy the comfort of his own home. (not accusing you in particular of this, just painting a poetic picture :P)

      I'm really excited about conquering hunger, poverty and curing severe mental illness, as a counterpoint.

endymion-light 14 hours ago

This felt like a sane and useful case until you mentioned the access to bank account side.

I just don't see a reason to allow OpenClaw to make purchases for you, it doesn't feel like something that a LLM should have access to. What happens if you accidentally end up adding a new compromised skill?

Or it purchases you running shoes, but due to a prompt injection sends it through a fake website?

Everything else can be limited, but the buying process is currently quite streamlined, doesn't take me more than 2 minutes to go through a shopify checkout.

Are you really buying things so frequently that taking the risk to have a bot purchase things for you is worth it?

I think that's what turns this post from a sane bullish case to an incredibly risky sentiment.

I'd probably use openclaw in some of the ways you're doing, safe read-only message writing, compiling notes etc & looking at grocery shopping, but i'd personally add more strict limits if I were you.

  • mixologic 9 hours ago

    What if... that whole post is written by AI, and the express intent of the post is to sand down our natural instincts for security, making it easier for malskill devs to take advantage?

  • krackers 8 hours ago

    >OpenClaw to make purchases for you

    But don't you want the agents to book vacations and do the shopping for you!!?!

    Though it would be nice if "deep research" could do the hard work of separating signal from the noise in terms of finding good quality products. But unfortunately that requires being extremely skeptical of everything written on the web and actively trying to suss out the ownership and supply chain involved, which isn't something agents can do unguided at the moment.

  • zozbot234 14 hours ago

    You could give it access to a limited budget and review its spending periodically. Then it can make annoying mistakes but it's not going to drain your bank account or anything.

    • chaostheory 14 hours ago

      Giving it access to a separate bank account and separate credit card would have been more sane.

      • protocolture 5 hours ago

        Yeah I was thinking a specific Wyse card with a 300 dollar limit, if I was going to do this, but it already seems stupidly expensive token wise.

causal 15 hours ago

> amongst smart people i know there's a surprisingly high correlation between those who continue to be unimpressed by AI and those who use a hobbled version of it.

I've noticed this too, and I think it's a good thing: much better to start using the simplest forms and understand AI from first principles rather than purchase the most complete package possible without understanding what is going on. The cranky ones on HN are loud, but many of the smart-but-careful ones end up going on to be the best power users.

  • randusername 14 hours ago

    I think you have to get in early to understand the opportunities and limitations.

    I feel lucky to have experienced early Facebook and Twitter. My friends and I figured out how to avoid stupidity when the stakes were low. Oversharing, getting "hacked", recognizing engagement-bait. And we saw the potential back when the goal was social networking, not making money. Our parents were late. Lambs for the slaughter by the time the technology got so popular and the algorithms got so good and users were conditioned to accept all the ads and privacy invasiveness as table stakes.

    I think AI is similar. Lower the stakes, then make mistakes faster than everyone else so you learn quickly.

    • bobson381 14 hours ago

      So acquiring immunity to a lower-risk version of the service before it's ramped up? e.g. jumping on FB now as a new user is vastly different from doing so in 2014 - so while you might go through the same noob-patterms, you're doing so with a lower-octane version of the thing. Like the risk of AI psychosis has probably gone up for new users, like the risk of someone getting too high since we started optimizing weed for maximum THC. ?

    • mmahemoff 13 hours ago

      There's also a massive selection bias when the cohort is early adopters.

      Another thing about early users is they are also longer-term users (assuming they are still on the platform) and have seen the platform evolve, which gives them a richer understanding of how everything fits together and what role certain features are meant to serve.

  • aa-jv 15 hours ago

    (Disclaimer: systems software developer with 30+ years experience)

    I was initially overly optimistic about AI and embraced it fully. I tried using it on multiple projects - and while the initial results were impressive, I quickly burned my fingers as I got it more and more integrated with my workflow. I tried all the things, last year. This year, I'm being a lot more conservative about it.

    Now .. I don't pay for it - I only use the bare bones versions that are available, and if I have to install something, I decline. Web-only ... for now.

    I simply don't trust it well enough, and I already have a disdain for remotely-operated software - so until it gets really, really reliable, predictable and .. just downright good .. I will continue to use it merely as an advanced search engine.

    This might be myopic, but I've been burned too many times and my projects suffered as a result of over-zealous use of AI.

    It sure is fun watching what other folks are daring to accomplish with it, though ..

    • AlienRobot 13 hours ago

      This week Adobe decided, out of nowhere, to kill their 2D animation product (Animate, which is based on Flash) to focus on AI. I'm already seeing animators post that Adobe killed their entire career.

      Although that feels a bit exaggerated, I feel it's not far from the truth. If there were, say, 3 closed source animation software that could do professional animation in total, and they just all decided to just kill the product one day, it would actually kill the entire industry. Animators would have no software to actually create animation with. They would have to wait until someone makes one, which would take years for feature parity, and why would anyone make one when the existing software thought such product wasn't a good idea to begin with?

      I feel this isn't much different with AI. It's a rush to make people depend on a software that literally can't run on a personal computer. Adobe probably loves it because the user can't pirate the AI. If people forget how to use image editing software and start depending entirely on AI to do the job, that means they will forever be slaves to developers who can host and setup the AI on the cloud.

      Imagine if people forgot how to format a document in Word and they depended on Copilot to do this.

      Imagine if people forgot how to code.

      • puelocesar 11 hours ago

        Now I think you touched the perfect point on why this is being shoved through our throats, and why I'm very reticent in using it.

        This is not about big increases of productivity, this is whole thing about selling dependence on privately controlled, closed source tools. To concentrate even more power in the hands of a very few, morally questionable people.

      • koakuma-chan 13 hours ago

        Sounds like a good startup idea. Make software for animators and slap AI on it.

ceroxylon 10 hours ago

Reminds me of Dan Harumi

> Tech people are always talking about dinner reservations . . . We're worried about the price of lunch, meanwhile tech people are building things that tell you the price of lunch. This is why real problems don't get solved.

sjdbbdd 15 hours ago

Did the author do any audit on correctness? Anytime I let the LLM rip it makes mistakes. Most of the pro AI articles (including agentic coding) like this I read always have this in common:

- Declare victory the moment their initial testing works

- Didn’t do the time intensive work of verifying things work

- Author will personally benefit from AI living up to the hype they’re writing about

In a lot of the authors examples (especially with booking), a single failure would be extremely painful. I’d still want to pay knowing this is not likely to happen, and if it does, I’ll be compensated accordingly.

  • afro88 11 hours ago

    Would love to know this too. When he talks about letting clawdbot catch promises and appointments in his texts, how many of those get missed? How many get created incorrectly? Absolutely not none. But maybe the numbers work compared to how bad he was at it manually?

suralind 14 hours ago

But where's the added value? You can book a meeting yourself. You can quickly add items to the freezer. Everything that was described in the article can be done in about the same amount of time as checking with Clawdbot. There are apps that track parcel delivery and support every courier service.

  • whatarethembits 13 hours ago

    Almost everything described in the post, amounts to a few hours in total in a given year to do "manually". I agree, there isn't compelling value (yet).

    What's puzzling to me is that there's little consideration of what one is trading away for this purported "value". Doing menial tasks is a respite for your brain to process things in the background. Its an opportunity to generate new thoughts. It reminds you of your own agency in life. It allows you to recognise small patterns and relate to other people.

    I don't want AI to summarise chats. It robs me the opportunity to know about something from someone's own words, therefore giving a small glimpse in their personality. This paints a picture over time, adding (or not) to the desire to interact with that person in the future. If I'm not going to see a chat anyway, then that creates the possibility of me finding something new in the future. A small moment of wonder for me and satisfaction for the person who brought me that new information.

    etc etc.

    Its like they're trying to outsource living.

    Maybe the story is that, outsourcing this will free them up to do more meaningful things. I've yet to see any evidence of this. What are these people even talking about on the coffee chats scheduled by the helpful assistant?

    • echelon 11 hours ago

      This all reminds me of Bill Gates on Letterman back in 1995:

      https://www.youtube.com/watch?v=eBSLUbpJvwA

      "Do tape recorders ring a bell?"

      There are so many things I don't want to do. I don't want to read the internet and social media anymore - I'd rather just have a digest of high signal with a little bit of serendipity.

      Instead of bookmarking a fun physics concept to come back to later, I could have an agent find more and build a nice reading list for me.

      It's kind of how I think of self-driving cars. When I can buy a car with Waymo (or whatever), jump in overnight with the wife and the dogs, and wake up on the beach to breakfast, it will have arrived in a big way. I'll work remotely, traveling around the US. Visit the Grand Canyon, take a work call, then off to Sedona. No driving, traffic, just work or leisure the whole time.

      True AI agents will be like this and even better.

      Ads, for sure, are fucked. If my pane of glass comes with a baked in model for content scrubbing, all sorts of shit gets wiped immediately: ads, rage bait, engagement bait, low effort content.

      • malfist 11 hours ago

        Ads are for sure not fucked. They're going to be integrated into everything in this utopia of yours. Big tech has shown us time and time again, not only will they sell a non-paying customer to advertisers, but they'll sell paying ones too. No opportunity for revenue will be overlooked.

        • echelon 10 hours ago

          When the sand is smart and does what I say, you can't reach me.

          AdBlock was child's play. We're going to have kernel-level condoms for every pixel on screen. Thinking agents and fast models that vaporize anything we don't like.

          The only thing that matters is that we have thin clients we control. And I think we stand a chance of that.

          The ads model worked because of disproportionate distribution, platform power, and verticalization. Nobody could build competing infra to deal with it. That won't be the case in the future.

          How does Facebook know the person calling their API is human? How do they know the feed being scrolled is flesh fingers?

          • malfist 10 hours ago

            You going to train your own model so you don't have to have a model recommending products that google/anthropic/openai ran paid alignment on to encourage you to drink your ovaltine?

            • echelon 9 hours ago

              Of course.

              Everything will filter though a final layer of fast and performance "filter" models.

              Social media algorithms will be replaced by personal recommender agents acting as content butlers.

              We just need a good pane of glass to house this.

  • moribvndvs 14 hours ago

    A whole bunch of this stuff that people are fawning over as life changing and it leaves me honestly wondering: how have some of you survived this long at all?

    • paodealho 13 hours ago

      When I see these types of posts I wonder what those people do all day long that is so important, to the point they can't dedicate 30 minutes to plan and execute some chores.

      • moribvndvs 13 hours ago

        I’m relieved my electricity bill went up 50% so Brandon here can get a Slack message of what’s in his freezer rather than looking.

        • RhythmFox 11 hours ago

          A small price to pay for human hands to never be sullied digging through cold food to find things again. Progress.

      • chasd00 12 hours ago

        to be fair, i distinctly remember reading a newspaper article asking what was wrong with taking the time to use the card catalog at the library. There were trying to understand the popularity of google.com

  • zozbot234 14 hours ago

    The point of keeping the bot in the loop is so that it can make suggestions later, based on the information it's been given as part of solving that task.

bix6 15 hours ago

> in theory, clawdbot could drain my bank account. this makes a lot of people uncomfortable (me included, even now).

Yeah this sounds totally sane!

causal 15 hours ago

I'm still trying to understand what makes this project worthy of like 100K Github stars overnight. What's the secret sauce? Is it just that it has a lot of integrations? Like what makes this so much more successful than the ten thousand other AI agent projects?

  • zozbot234 15 hours ago

    It's set up to wake up periodically and work autonomously for you based on the broad instructions it's been given. Compared to the usual coding agent workloads, this makes it a lot more "assistant"-like.

    • causal 11 hours ago

      That makes sense. I've thought for a while that having an agent that takes initiative rather than reacting to inputs could be really useful, and I imagine it takes a lot of trial and error to make it take just the right amount of initiative.

    • PurpleRamen 13 hours ago

      So people are hyped because they don't know cron?

      • azan_ 13 hours ago

        Yeah and people were hyped for Dropbox because they did not know rsync and ftp.

        • mh2266 11 hours ago

          Dropbox wasn't given access to your bank account 2fa. There should maybe be slightly more gatekeeping around installing software that unironically advertises itself as RCE: https://docs.openclaw.ai/gateway/security#node-execution-sys...

          • nickthegreek 9 hours ago

            There is a large amount of gatekeeping called installing and configuring this software. It is not a trivial task that normies can easily accomplish. You have to walk past so many red flags, that you would rightly be called in idiot if you lost anyting of value.

            I'll be more concerned for the public when its a double click. Currently it's just a way for techies to fafo. And I do enjoy that there are many people out there messing around with it. It is closer to the 90s experimental net mindset and than I've seen lately. It is also fun that its not a big corpo release. It is not often quick and dirty small team software blows up this big and gets noticed by the world at large.

        • PurpleRamen 12 hours ago

          You forgot cron. rsync without the periodic poll is not a good Dropbox-replacement.

      • nfw2 3 hours ago

        It is combination of:

        - Cron-style heartbeat manager

        - Easy customization with markdown only

        - Good out-of-box memory management that just works

        - Good set of tools out-of-box that just work

        Like Jack Dorsey said about project success, limit number of details and make those details perfect

    • consumer451 10 hours ago

      Four months ago, I was playing with basically the same framework to explore the idea of "consciousness," using Claude Agent SDK as the harness and Opus 4.5 as the LLM.

      I was thinking: wake up every hour, look at some webcams and the weather forecast (senses, change), maybe look at my calendar, maybe read my personal emails for important things, proactively chat with me for work or just fun via email invites.

      I played with it for a bit, then got back to "serious work."

      I am such an idiot for not seeing the broader value. One thing is that I was sure some multi-billion dollar company was already doing this, and I am super paranoid about the Lethal Trifecta.

      • fwip 6 hours ago

        Don't worry, you're not an idiot. This is not gonna pan out.

        • consumer451 4 hours ago

          Just like when I discounted cryptocurrency in 2012, yet again I may be overthinking things.

          That is the lesson that I would like to share on this platform: If you have an idea, any idea, just do it. Do it now. Build now.

          It turns out that there are a lot of morons. Role the dice, you are likely not one of the bag holders, as one of the readers of this comment. I wish I had been younger when I had realized this circle of life.

  • defgeneric 13 hours ago

    It could be a symptom of how fragmented workflows are, which itself seems to be due to providers adding friction to guard against being integrated away by some larger platform.

  • fassssst 12 hours ago

    It’s easy to use

    • verdverm 12 hours ago

      this is typically good for new users and toy projects

      this doesn't look like something enterprises would lean in to (normally, but we are in a new kind of hype period, one without clear boundaries between mini-cycles, where popularity trumps many other qualities)

dmje 13 hours ago

What strikes me here is the extreme noise. I mean, I’m 50+ so you know, but even so, this shit doesn’t make sense. To be living a life where you’re checking messaging groups for 100+ messages a day, needing some kind of bot to manage your (obviously extremely traffic’d) texts incoming, to be watching tens of prices of stocks, products, meeting, what, tens of people a day (as an introvert…)…

Holy shit, fuck that. Slow the bejesus down and live a little. Go look at the sky.

tsxxst 13 hours ago

The fact that the author gave unrestricted 2FA access to the model is really scary. It’s way easier to phish an AI than a human.

  • afro88 11 hours ago

    Same. Immediately I thought why not have clawdbot ask you for the 2FA? That way you at least kind of know what security-protected action it's trying to take and can approve it

    • swiftcoder 7 hours ago

      The problem is baked in - he gives it access to iMessage, which is where all the sms-based 2fac codes end up. There is no way to prevent it reading 2 fac codes if you want to give it full text message access

  • chasd00 12 hours ago

    Just to be upfront, i've gone from one of the naysayers to a modest fan after spending some time using Claude Code on nights/weekends with tasks that I know I can do and how long it would take me in order to get an idea of productivity gains possible with the tool. So far, the money i've spent was worth the results i got.

    However, it's shocking to me the blinders people have with these things. Security is supposed to be front and center in our industry with everything we build and do. I thought that lesson had been learned and learned well over the past 30 or so years of life on the web. People are going to get seriously burned and the only answer to them is going to be "well you should have known better". For a fishing analogy, Barracuda are circling just out of visual range biding their time but the strike is inevitable.

    If you're using these agents, spend some time attacking them and see what you can get them to do that you thought would be impossible by default. If you find something say something, we're basically having to re-teach the whole Internet basic information security again.

olalonde 15 hours ago

Why is everything in lowercase?

lawrenceyan 5 hours ago

Found this short story on Openclaw to be relevant:

https://x.com/gf_256/status/2018844976486945112

siliconc0w 13 hours ago

It doesn't make sense to 'build trust' with a bot. Today it works but tomorrow someone may push a malicious 'skill', a dependency may be compromised, or someone eventually figures out the right prompt injection incantation to remotely drain your accounts.

grugdev42 15 hours ago

There is only so much damage a human assistant can do.

But an AI assistant can do so much more damage in a short space of time.

It probably won't go wrong, but when it does go wrong you will feel immense pain.

I will keep low productivity in exchange for never having to deal with the fallout.

  • velcrovan 14 hours ago

    Human beings are also liable for the results of their actions.

  • bob1029 14 hours ago

    Regarding anything code/data:

      git commit 
      aws ec2 create-snapshot --volume-id ...
      git reset --hard
      git clean -fdx
      aws ec2 create-volume --snapshot-id ...
      robocopy "C:\backup" "D:\project" /MIR 
      ...
    
    I agree there are a lot of things outside the computer that are a lot more difficult to reverse, but I think that we are maybe conflating things a bit. Most of us just need the code and data magic. We aren't all trying to automate doing the dishes or vacuuming the floors just yet.
zkmon 11 hours ago

I don't think a lot of people worry about having a bot to manage their chats, appointments, travel, hotel booking etc. A lot of us just worry about the tasks in our task queue. Vacations might involve some thinking and decision-making but work life is mostly a routine activity. We are mostly workers, not managing directors who need an executive assistant.

urbandw311er 7 hours ago

Dear Brandon. Sentences begin with capital letters. Kind regards.

  • nfw2 3 hours ago

    don't be pedantic about grammar and then get your own grammar wrong

codeulike 14 hours ago

If you're on MS stack, this is all stuff that MS 365 Copilot will already do for you, but with much better defined barriers around what it can and cant access.

  • granoIacowboy 14 hours ago

    This is the first positive thing I have heard about copilot. You’ve found it genuinely capable?

  • theYipster 12 hours ago

    Assuming you are using the flagship copilot that is a $30 / mo add on to a 365 subscription, and maybe, maybe if Microsoft replaced CoPilot’s “brain” with Opus 4.5. In my experience, while flagship CoPilot does deliver value if setup correctly, it’s no where near as capable an “agent” as Claude. (And even though Open Claw is now model agnostic, there is a reason for its association to Claude. Despite it’s expense, I find Opus 4.5 works best.)

  • GrinningFool 7 hours ago

    Every time I ask copilot in 365 to do something with calendar or email, it tells me it has no access but helpfully suggests I could paste some content in...

    I check in once a month or so and get the same results.

  • raffkede 13 hours ago

    I would be surprised if Copilot is even close to that

munificent 12 hours ago

> as someone who has a chest freezer and a compulsive desire to buy too many things at costco, we take everything out of the freezer every few months to check what we have. before, this was a relatively involved process: me calling things out, my partner writing them down.

A thought I constantly find myself having when I read accounts of people automating and accelerating aspects of their life by using AI... Are you really that busy?

I mean, obviously, no one is thrilled by spending ten minutes making a dentist appointment. But I strongly suspect that most of us will feel a stronger sense of balance and equanimity if a larger fraction of our life is spent doing mundane menial tasks.

Going through your freezer means that you're using your hands and eyes and talking to your partner to solve a concrete problem. It's exactly the kind of thing primates evolved to do.

Whenever I read articles like this, I can't help but imagine the author automating away all of the menial toil in their day so they can fill those freed up minutes with... more scrolling on their phone. Is that what anyone needs more of?

  • yoyohello13 11 hours ago

    The freezer one is so weird because there is an even simpler solution to the problem. Just buy less shit! If you have so much stuff that you can’t keep track then don’t have so much stuff, simple.

    I think there is a common psychology when people notice a problem they first think about what they can add to solve the problem, when often the best solution is to think about what you can remove.

    • dingaling 7 hours ago

      Or, as my partner does, keep a page magnetted to the freezer, divided into three 'shelves', with a list of what's where.

      • dolebirchwood 4 hours ago

        We just have a magnetic whiteboard on the refrigerator and write down things we need to buy when we run low/out. A true modern marvel, and no AI bot required!

      • munificent 4 hours ago

        In my family, that page would half-accurately describe the contents of the freezer circa 2012.

    • munificent 8 hours ago

      100%.

      I follow the OrganizationPorn subreddit because sometimes I like looking at pictures of neatly organized stuff. But so much of the photos are from sprawling suburban houses with enormous pantries and "craft rooms" with just So. Much. Stuff.

      Unless you're feeding a family of 12, I don't know how anyone can keep that much food without half of it going bad before you get to it anyway.

artisin 13 hours ago

I mean, maybe, it's just me, but...

> it can read my text messages, including two-factor authentication codes. it can log into my bank. it has my calendar, my notion, my contacts. it can browse the web and take actions on my behalf. in theory, clawdbot could drain my bank account. this makes a lot of people uncomfortable (me included, even now).

...is just, idk, asinine to me on so many levels. Anything from a simple mix-up to a well-crafted prompt injection could easily fuck you into next Tuesday, if you're lucky. But admittedly, I do see the allure, and with the proper tooling, I can see a future where the rewards outweigh the risks.

browningstreet 9 hours ago

I've tried twice now to install it.. once in a docker container, and the second time in a droplet. Couldn't get any of the setup stuff configured properly, couldn't get any of the API keys registered, couldn't get the Telegram bot approved either.

Some of the commands seem to have drifted from the documentation. The token status freaks out too and then... whatever, after 2 hours I just gave up. And it only cost me $1.19 in Anthropic API tokens.

wyldfire 10 hours ago

Would it be any more comforting from a privacy standpoint to have the models capable of doing this running on the device itself instead of the cloud?

mh2266 13 hours ago

> amongst smart people i know there's a surprisingly high correlation between those who continue to be unimpressed by AI and those who use a hobbled version of it

is it "hobbled" to:

1. not give an LLM access to personal finances 2. not allow everyone in the world a write channel to the prompt (reading messages/email)

I mean, okay. Good luck I guess.

baalimago 13 hours ago

I'm a bit surprised that people need an LLM to automate things like this. Is the market really that large, to cause such a hype? I don't think I'm being "elitist" by having a calendar and a pen, am I..?

The one tangible usecase is perhaps booking things. But, personally, I don't mind paying 5-10% extra by going to a local store and speaking to a real person. Or perhaps intentionally buying ecological. Or whatever. What is life if you have a robot optimize everything you do? What is left?

  • simonw 12 hours ago

    If you're happy "speaking to a real person" when you could automate that interaction away somehow then no, digital personal assistants probably aren't something you're going to care about.

    I love talking to real people about stuff that matters to them and to me. I don't want to talk to them about booking a flight or hotel room.

    • mejutoco 12 hours ago

      If hotels, or google, or travel websites wanted people to book programmatically they would have an api.Remember when Google search had an api? In the end the human is responsible for the purchase. I think when the dust settles, AI will offer a "do you want to purchase?" and then the human will press the button. Or ChatGPT or somebody controlling the last step will have that button, and services will accept it (like Instagram) because it brings business.

      • LevGoldstein 12 hours ago

        This only lasts until dark patterns can be inserted that disrupt the ease of use that agents are currently providing. If I can't force the end user to watch unskippable ads or trick them into spending money on a service they don't need, what are we even doing?

      • simonw 12 hours ago

        The reason they don't have an API is that they want to upsell you on other stuff, and get paid to promote their partners.

        There's going to be a huge fight over how that relates to AI assistants over the next few years.

        • mejutoco 11 hours ago

          I agree fully, and wanted to add: for many of these services, like travel engine comparison sites, running the query itself costs money, so you do not want to make it to easy to search without booking.

    • techpression 12 hours ago

      The nuances contained in ”booking a flight or hotel room” are plenty, it matters a lot to a lot of people. The industry will probably be very very happy to have bots do it, the amount of extra revenue they will get by taking the tricks made for humans to the next level is going to be substantial.

  • cityofdelusion 12 hours ago

    I think part of it is the person that wrote the blog is very wealthy. They mention a personal assistant, very expensive fashion items, and hotel reservations that are 2x the price I paid for my honeymoon. Most people are probably cross shopping Walmart brand milk with name brand, and they aren’t dropping hundreds a month on an AI subscription. It’s a class thing combined with the Bay Area engineer bubble mentality —- I have some family that came from money and they just see the world completely differently, they can’t fathom life in say, Kansas at median household income.

  • contravariant 12 hours ago

    Well websites like AirBnB tend to make it as difficult as humanly possible to automate stuff like this, so maybe?

    Although that likely only lasts until they learn how to block LLMs effectively.

    • verdverm 12 hours ago

      We may get to a point where they have a hard time distinguishing. Perhaps it can be made in their interest to open the API for everyone (i.e. convince the bean counters)

  • johnsmith1840 12 hours ago

    Totally agree its basically the equivalent of a few low end apps as of now. The interesting thing to me is that it does MANY low end apps all together.

    It's a calendar, reminder, notebook, fridge scanner, and a webscraper

    I think the interesting idea here is that overtime this will grow to more applications. None require integration or effort to work you only need plug the infrastructure and tooling.

    This to me is what will eventually wipe out most agentic startups. The enterprise version of this little thing is just a bot and a set of documents of what it should do and a few tools. Why pay and setup a new system when I can just automate what I already have?

  • pmart123 12 hours ago

    There's a lot of irony right now regarding the cost of these things too (although I know the cost curve will drop over time). I know developers that are burning $1,000/day on tokens for Claude Code or VC's using the $200/month ChatGPT pricing plan who are then talking about Vibe coding TurboTax away. TurboTax for most people is $50 to $100 a year. We are still a far way off even from a cost justification standpoint let alone a reliability standpoint of relying on a vibe coded solution for filing your taxes.

  • hackyhacky 12 hours ago

    IMHO the "killer app" aspect of OpenClaw (and similar) is that everything is now an API.

    We think of chat apps, like WhatsApp, as being ways to communicate with people, which is a nice way of saying they are protocols. When you want something, you send a message, and you get an answer, just like with HTTP, except the endpoints have been controlled by meat. With OpenClaw, the meat is gone. Now you can send a message on WhatsApp to schedule a date with your spouse, their OpenClaw will respond with availability, they'll negotiate a time and place. We've replaced human communication with an ad-hoc, open-ended date-negotiation protocol, using English instead of JSON as a data-interchange format, and OpenClaw as the interface library.

    You can say "make an appointment at my dentist" and even if your dentist doesn't have a website, the bot can call up and schedule an appointment. (I don't know if OpenClaw can do this now, but it seems inevitable.) In other words, the (human) receptionist is now an API that can be accessed programmatically.

    • yoyohello13 11 hours ago

      > We've replaced human communication with an ad-hoc, open-ended date-negotiation protocol, using English instead of JSON as a data-interchange format, and OpenClaw as the interface library.

      People heralding this as a good thing is extremely disturbing.

    • mvdtnz 11 hours ago

      If we insist on using that term, let's be more precise: Everything is a horrendously expensive API that will give you subtly incorrect behaviour at random.

      • hackyhacky 11 hours ago

        Humans are also famous for introducing errors in their communication. I think the AI-to-AI interface will only improve on that .

        The price is high now but will get cheaper, especially when compared to the cost of human labor.

        Having said that, it sounds like an isolating and boring way to live.

  • enmyj 12 hours ago

    heartily agree. It takes ~10s to see what's in the freezer. Also "this is water" etc

keyle 5 hours ago

A curious case of Vibe Living.

jngiam1 7 hours ago

i've a simple setup with Claude Code and MCPs; and i get real benefits from better task mgmt, email mgmt, calendar, health/food/fitness tracking, working together with claude on tasks (that go into md files).

i don't think we need ClawdBot, but we do need a way to easily interact with the model such that it can create long term memories (likely as files).

cluckindan 15 hours ago

Just weeks ago, the sentiment was such that developers would be managing AI workers.

Now, it seems that AI will be managing the developers.

mbesto 12 hours ago

Everything that are daily burdens and require an assistant are also the things that require the most secure way to access them. OpenClaw sounds amazing on paper but super risky in practice.

627467 7 hours ago

So, what prevents site to do dynamic pricing for bots checking sites for prices?

longtermop 12 hours ago

Exciting to see Apple making agentic coding first-class. The "Xcode Intelligence" feature that pulls from docs and developer forums is powerful.

One thing I'm curious about: as the agent ingests more external content (documentation, code samples, forum answers), the attack surface for prompt injection expands. Malicious content in a Stack Overflow answer or dependency README could potentially influence generated code.

Does Apple's implementation have any sanitization layer between retrieved content and what gets fed to the model? Or is the assumption that code review catches anything problematic? Seems like an interesting security challenge as these tools go mainstream.

  • chasd00 12 hours ago

    > Does Apple's implementation have any sanitization layer between retrieved content and what gets fed to the model?

    It's been discussed a lot but fundamentally there isn't a way to solve this yet (and it may not be solvable period). I'm sure they've asked their model(s) to not do anything stupid through the system prompt. Remember, prepending and appending text to the user's request to an LLM is the all you can do. With an LLM it's only text string in then text string out. That's it.

tiangewu 12 hours ago

My main interest in something like OpenClaw is giving it access to my bank account and having it harvest all the personal finance deals.

Fortune favors the bold, I guess.

sharadov 11 hours ago

"Taking pictures of the contents of your freezer" sounds so tedious. It's a solution looking for a problem!

AdeptusAquinas 8 hours ago

This reminds me of a take by Dan Harumi: these tools are always pitched for 'restraurant reservations', 'reminders', 'email and message follow ups': i.e. they appeal to the sort of arrested development man children that inhabit tech who never really figured out adulting. Now the computer can do it for them, and they can remain teenagers forever.

rao-v 10 hours ago

I think it maybe time for us to think about what the sensible version of these capabilities are.

Short term hacky tricks:

1. Throw away accounts - make a spare account with no credit card for airbnb, resy etc.

2. Use read only when it's possible. It's funny that banks are the one place where you can safely get read only data via an API (plaid, simplefin etc.). Make use of it!

3. Pick a safe comms channel - ideally an app you don't use with people to talk to your assistant. For the love of god don't expose your two factor SMS tokens (also ask your providers to switch you to proper two factor most finally have the capability).

4. Run the bot in a container with read only access to key files etc.

Long term:

1. We really do need services to provide multiple levels of API access, read only and some sort of very short lived "my boss said I can do this" transaction token. Ideally your agent would queue up N transactions, give them to you in a standard format, you'd approve them with FaceID, and that will generate a short lived per transaction token scoped pretty narrowly for the agent to use.

2. We need sensible micropayments. The more transactional and agent in the middle the world gets, the less services can survive with webpages,apps,ads and subscriptions.

3. Local models are surprisingly capable for some tasks and privacy safe(er)... I'm hoping these agents will eventually permit you to say "Only subagents that are local may read my chat messages"

ghostly_s 9 hours ago

Have ignored the flood of "Clawdbot" stuff on here lately because none of it seemed interesting but read this and skimmed the docs and I'm leaving puzzled- I understand "Clawdbot" was renamed "OpenClaw" due to trademark...yet I'm finding currently three different websites for apparently the same thing?

1. https://openclaw.ai/ [also clawd.bot which is now a redirect here]

2. https://clawdbot.you/

3. https://clawdbotai.org/

They all have similar copy which among other things touts it having a "local" architecture:

    "Private by default—your data stays yours."

    "Local-First Architecture - All data stays on your device. [...] Your conversations, files, and credentials never leave your computer."

    "Privacy-First Architecture - Your data never leaves your device. Clawdbot runs locally, ensuring complete privacy and data sovereignty. No cloud dependencies, no third-party access."
Yet it seems the "local" system is just a bunch of tooling around Claude AI calls? Yes, I see they have an option to use (presumably hamstrung) local models, but the main use-case is clearly with Claude -- how can they meaningfully claim anything is "local-first" if everything you ask it to do is piped to Claude servers? How are these claims of "privacy" and "data sovereignty" not outright lies? How can Claude use your credentials if they stay on your device? Claude cannot be run locally last I heard, am I missing something here?
  • ghostly_s 9 hours ago

    Oh my goodness. Reading up on it a bit more:

         Ox Security, a "vibe-coding security platform," highlighted these vulnerabilites to its creator, Peter Steinberg. The response wasn't exactly reassuring.
    
        “This is a tech preview. A hobby. If you wanna help, send a PR. Once it’s production ready or commercial, happy to look into vulnerabilities.”[1]
    
    In light of this I'm inclined to conclude- yeah, they're just lying about the privacy stuff.

    1. https://www.xda-developers.com/please-stop-using-openclaw/

ericyd 15 hours ago

Wait I'm ignorant, how long has OpenClaw/Clawdbot existed? This person listed like 6 months of activities that they offloaded to the bot, I thought this thing was pretty new.

  • saghm 14 hours ago

    Maybe Clawd wrote this itself, and it just doesn't know how old it is?

  • adrian17 14 hours ago

    FWIW, the screenshots all have the dates spanning the last couple days.

    But yeah, I can't imagine me getting used to a new tool to this degree and using it in so many ways in just a week.

    • ericyd 5 hours ago

      That makes it even less believable. They talked about how this tool has replaced some other tools such as flight price trackers. How in the world could that happen in 1 week to such a degree that you wrote a whole blog about it?

  • kaicianflone 13 hours ago

    OpenClaw utilizes AgentSkills designed by Anthropic so OpenClaw is plug and play with certain APIs and integrations.

thm 14 hours ago

https://www.theregister.com/2026/02/04/cloud_hosted_openclaw...

Kill it with fire - Analyst firm Gartner has used uncharacteristically strong language to recommend against using OpenClaw.

rambocoder 13 hours ago

So if all day you spend chatting with people via IMs, then openclaw helps you automate that. Got it.

almostdeadguy 10 hours ago

I wish I understood why all lowercase text and cosplaying as Zoomers became the preferred affectation of AI people.

alluro2 14 hours ago

As someone for whom English is not the first language, I got stumped by the "chest freezer" and the photo of colourful bags, for good ~15 seconds, going through - "hm, must be some kind of travel thing where you bring snacks in some kind of device you carry around your neck / on your chest...why not backpack freezer then...hm, why would snacks need a freezer...maybe it's just a cooler box, but called chest freezer in some places"...

....before I took a better look of the photo and realised it's frozen stuff - for the dedicated freezer - that opens like a chest (tada).

Well, that was fun...Maybe I should get a bit more sleep tonight!

marxisttemp 12 hours ago

Why is this written in lowercase? What a performative way to write in 2026

RC_ITR 13 hours ago

I may not be AGI, but here's a $615 2 Queen bed hotel room for the dates he wants in exactly the location he wants (just not on Airbnb).

https://www.booking.com/Share-Wt9ksz

Maybe he really is tied to $600 as his absolute upper limit, but also seems like something a few years from AGI would think to check elsewhere.

patrickk 13 hours ago

> how’d you set it up?

I was disappointed by this section. He doesn’t mention which model he uses (or models split by task type for specific sub agents).

I tried out OSS-20B hosted on Groq (recommended by a YouTuber) to test it for cheap, but the model isn’t smart enough for anything other than providing initial replies and perhaps delegating tasks into expensive capable models from ChatGPT or Claude. This is a crucial missing detail to replicate his use cases.

noncoml 7 hours ago

I think Clawdbot is amazing, but my only issue is how it burns through my AI budget. Even when using a "cheap" model like Gemini 2.5 flash, it easily burns $10-$20 a day

cess11 14 hours ago

'the sweet sweet elixir of context is a real "feel the AGI" moment and it's hard to go back without feeling like i would be willingly living my most important relationship in amnesia'

I'm not so sure that I would use the word "sane" to describe this.

IshKebab 15 hours ago

Can this thing deal with the insane way my children's school communicates? Actionable information (children wear red tomorrow) is mixed in with "this week we have been learning about bees" across five different communication channels. I'm not exaggerating. We have Tapestry, emails, a newsletter, parents WhatsApp, Arbour and Facebook.

I guess the difficulty is getting the data into the AI.

  • Skidaddle 12 hours ago

    Yes, I have Claude summarize the rambling, repetitive, slightly unhinged emails from my kindergarten teacher. Otherwise I simply ignore them.

  • bondarchuk 14 hours ago

    That's six channels actually.

    • IshKebab 12 hours ago

      Oh yeah it used to be five but they added Facebook because - I quote - "we hope that we can create another line of communication for reminders and messages".

4corners4sides 8 hours ago

This article convinced me to try to set up OpenClaw locally on the my raspberry pi but I realised that it had no micro SD card installed AND it used micro HDMI instead of a regular HDMI for display which I didn't have…

Some of the takes in this article relate to the "Agent Native Architecture" (https://every.to/guides/agent-native), an article that I critiqued quite heavily for being AI generated. This article presents many of the concepts explored there in a real-world, pragmatic lens. In this case, the author brings up how initially they wanted their agent to invoke specific pre-made scripts but ultimately found out that letting go of the process is where the inner model intelligence was able to really shine. In this case, parity, the property whereby anything a human can do an agent can do was achieved most powerfully buy simply giving the agent a browser-use agent which cracked open the whole web for the agent to navigate through.

The gradual improvement property of agent native architectures was also directly mentioned by the article, where the author commented on giving the model more and more context allowed him to “feel the AGI”.

ClawdBot is often reduced to “just AI and cron” but that might be overly reductive in the same way that one could call it a “GPT wrapper” in the same way that one could call a laptop an “electricity wrapper”. It seems like the scheduler is a significant aspect of what makes ClawdBot so powerful. For example the author, instead of looking for sophisticated scraper apps online to monitor prices of certain items will simply ask ClawdBot something like: “Hey, monitor hotel prices” and ClawdBot will handle the rest asynchronously and communicate back with the author over slack. Any performance issues due to repeated agent invocations are ameliorated by problem context and runbooks that are automatically generated and probably cost less time than maintaining pipelines written in plain code for a single individual who wants a hands-off agent solution.

Also, the article actually explains the obsessions with Mac Mini’s which I thought was some kind of convoluted scam (though apple doesn’t need scams to sell Macs…). Essentially you need it to run a browser or multiple browsers for your agents. Unfortunately that’s the state of the modern web.

I actually have my own note taking system and a pipeline to give me an overview of all of the concepts, blogs and daily events that have happened over the past week for me to look at. But it is much more rigid than ClawdBot: 1) I can only access it from my laptop, 2) it only supports text at the moment, 3) the actions that I can take are hard coded as opposed to agent-refined and naturally occuring (e.g. tweet pipeline, lessons pipeline, youtube video pipeline), 4) there’s no intelligent scheduler logic or agent at all so I manually run the script every evening. Something like ClawdBot could replace this whole pipeline.

Long story short, I need to try this out at some point.

oncallthrow 15 hours ago

Do you mean “bullish”?

  • ahoka 15 hours ago

    Bull and bear markets. Bull’s horns are pointing up (expecting growth, optimistic), bear’s claw is pointing down (expecting recession, pessimistic). Yeah, it’s stupid.

  • djeastm 14 hours ago

    That would be the more general/traditional way of saying it, but in modern investment circles the focus seems to have turned towards the actual people being "bulls/bears" and not just the attitudes of the market. A person is a bull or a bear, as opposed to a person being either bullish or bearish.

    So in this construction, a "bull case" is a "case that a bull (the person) can make".

  • jollyllama 8 hours ago

    It's probably what he meant but it's more accurate this way.

  • luplex 15 hours ago

    "a bull case" gets lots of google results, so it seems to be a commonly used construction amongst analysts. Basically it means "The case that OpenClaw will develop as a bull".

    "bullish" seems more common in tech circles ("I'm bullish on this") but it's also used elsewhere.

  • standarditem 15 hours ago

    "Bullish" means optimistic or even aggressively optimistic. It's typically used in the context of markets.

zackify 15 hours ago

I just can't get over how none of this is new. 6 months ago I was running "summarize my work" tasks using linear and github mcps

just using a cron task and claude code. The hype around openclaw is wild

  • runjake 14 hours ago

    A lot of it is, in fact, new.

    The hype around OpenClaw is largely due to the large suite of command line utilities that tie deeply into Apple’s ecosystem as well as a ton of other systems.

    I think that the hype will be short-lived as big tech improves their own AI assistants (Gemini, improved Siri, etc), but it’s nice to have a more open alternative.

    OpenClaw just needs to focus on security before it can be taken more seriously.

    • bfeynman 11 hours ago

      What part is new? the thing that took off is that it allows technophiles who couldn't probably flash a raspberry pi to feel like they are hackers. All of this stuff exists in tons of random AI apps that exist already it just wasn't really that much of a value add, there is just a virality of it reaching audiences that previously only knew how to use a chat app.

    • Skidaddle 13 hours ago

      I wouldn’t call it new, just conveniently packaged and with more momentum. I’ve been running an apple-mcp server on a Mac Mini for Claude to use to manage my reminders in addition to Gcal + gmail, and I could have just as easily added messaging capabilities.

      Call me crazy, but… I feel more likely to trust Anthropic than anybody else when it comes to safety on things like this.

  • reacharavindh 14 hours ago

    To me, the magic is around interactions with personal info where they are - iMessages, e-mails, etc. I still am wart to open up like this, but it certainly is not as simple as Claude code and cron task. The “you can already do this via rsync + FTP” comment on Dropbox Show HN thread comes to mind.

    • verdverm 12 hours ago

      It's a bit different than the Dropdox situation because the market is already flooded, the big players have their options, the cycle is rapid (who's langchain?)

      I hope, think, and build towards a world where there will be fewer winner-take-all in this foundational tech

willmadden 10 hours ago

AI is useful for researching things far more quickly before making a decision and for automation/robotics. Motivated people don't need a nagbot to replace their calendar.

chaostheory 14 hours ago

I some lose utility but my openclaw bot only has its own accounts. I do not give it access to any of my own accounts.

insane_dreamer 14 hours ago

> let me be upfront about how much access i've given clawdbot: it can read my text messages, including two-factor authentication codes. it can log into my bank. it has my calendar, my notion, my contacts. it can browse the web and take actions on my behalf.

this is foolish, despite the (quite frankly) minor efficiency benefits that it is providing as per the post.

and if the agent has, or gains, write access to its own agents/identity file (or a file referenced by its agents file), this is dangerous

modzu 4 hours ago

uh ohhh... forgot how to think..

gabrieledarrigo 12 hours ago

> i haven't automated anything here, but booking a table by talking to clawdbot is delightful.

Omg. Just get the phone and call the restaurant, man.

I really don't want to live in this timeline where I can't even search for b&b with my gf without burning tokens through an LLM. That's crazy.

  • pfortuny 12 hours ago

    It is impressive what people find "delightful, "a joy", "fresh air" these days.

cj 15 hours ago

Tangent: what is the appeal of the “no capitalization” writing style? I never know what message the author is intending to convey when I see all lower case.

Normally I can ignore it, but the font on this blog makes it hard to distinguish where sentences start and end (the period is very small and faint).

  • dang 11 hours ago

    "Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."

    https://news.ycombinator.com/newsguidelines.html

  • 1dom 14 hours ago

    I really dislike it too.

    I think it might be adults ignoring established grammar rules to make a statement about how they identify a part of a group of AI evangelists.

    Kind of like how teenagers do nonsensical things like where thick heavy clothing regardless of the weather to indicate how much of a badass them and their other badass coat wearing friends are.

    To normal humans, they look ridiculous, but they think they're cool and they're not harming anyone so I just leave them to it.

    • chongli 12 hours ago

      make a statement about how they identify a part of a group

      That’s what it is. A shibboleth. They’re broadcasting group affiliation. The fact that it grates on the outgroup is intentional. If it wasn’t costly to adopt it wouldn’t be as honest of a signal.

      • cucumber3732842 12 hours ago

        On a scale from the purest, not lifting a finger anymore than to strike a keyboard, of virtue signaling to putting one's money where their mouth is this shibboleth is about as costly as the tidal zone is dry land.

    • webdood90 13 hours ago

      can't imagine getting this riled up over lowercase text. some serious fist-shaking-at-clouds energy.

      it's meant to convey a casual, laid back tone - it's not that big of a deal.

      • rhines 12 hours ago

        You convey tone through word choice and sentence structure - trying to convey tone through casing or other means is unnecessary and often just jarring.

        Like look at the sentence "it has felt to me like all threads of conversation have veered towards the extreme and indefensible." The casing actually conflicts with the tone of the sentence. It's not written like a casual text - if the sentence was "ppl talking about this are crazy" then sure, the casing would match the tone. But the stodgy sentence structure and use of more precise vocabulary like "veered" indicates that more effort has gone into this than the casing suggests.

        Fair play if the author just wants to have a style like this. It's his prerogative to do so, just as anyone can choose to communicate exclusively in leetspeak, or use all caps everywhere, or write everything like script dialogue, whatever. Or if it's a tool to signal that he's part of an in-group with certain people who do the same, great. But he is sacrificing readability by ignoring conventions.

      • verdverm 13 hours ago

        It's hard to find sentence breaks, it is actually about readability and accessibility

        • mejutoco 12 hours ago

          Ironically, this sentence is called a comma splice or run-on sentence. A period or semicolon would be correct.

          I agree with the sentiment too, or maybe I am getting old :P

          • verdverm 12 hours ago

            I don't think it's about getting old, it's about expecting clear and parsable communication

            Some people are being lazy, they will get less attention, ideally

      • outime 13 hours ago

        I also agree it sucks, and I don't see a problem pointing it out.

      • superdisk 13 hours ago

        It's just very poser behavior.

        • webdood90 12 hours ago

          TIL hacker news is dominated by boomers

          • verdverm 12 hours ago

            if by boomers you mean a community with above average expectations for the quality of submissions and commentary, sure

            • cucumber3732842 12 hours ago

              I thought it was a joke about a propensity to peddle public policy that will drive the world off a cliff, but not until after we get ours.

              • verdverm 11 hours ago

                That's politicians and media influencers of all ages, not the general public

                The new generation of tiktok / podcast "independent journalists" is a serious issue / case of what you describe. They are many doing zero journalism and repeating propaganda, some paid by countries like Russia (i.e. Tim Pool and that whole crew that got caught and never face consequences)

      • calepayson 13 hours ago

        > to normal humans, they look ridiculous, but they think they're cool and they're not harming anyone so i just leave them to it.

        fixed it for you! now it’s in a casual, laid back tone.

  • defgeneric 14 hours ago

    You mention the technical aspect (readability) and others have suggested the aesthetic, but you could also look at it as a form of rhetoric. I'm not sure it's really effective because it sort of grates on the ear for anyone over 35, but maybe there's a point in distinguishing itself from AI sloptext.

    Incidentally, millenials also used the "no caps" style but mainly for "marginalia" (at most paragraph-length notes, observations), while for older generations it was almost always associated with a modernist aesthetic and thus appeared primarily in functional or environmental text (restaurant menus, signage, your business card, bloomingdales, etc.). It may be interesting to note that the inverse ALL CAPS style conveyed modernity in the last tech revolution (the evolution of the Microsoft logo, for example).

    • slfnflctd 13 hours ago

      I was using all lowercase as my default for internet comments (and personal journal entries) for at least a solid decade, starting from some point in the 90s. I saw it as a way to take a step back from being pretentious.

      I eventually ran into so much resistance and hate about it that I decided conforming to writing in a way that people aren't actively hostile to was a better approach to communicating my thoughts than getting hung up on an aesthetic choice.

      Having started out as a counterculture type, that will always be in my blood, but I've relearned this lesson over and over again in many situations-- it's usually better to focus on clear communication and getting things done unless your non-standard format is a critical part of whatever message you're trying to send at the moment.

      • wredcoll 13 hours ago

        I'm a big fan of counter culture and so on, but generally the point of text is to be read and using all lower case just makes it harder for all your readers, which seems like the worst form of arrogance.

    • mananaysiempre 11 hours ago

      > [No-caps text] sort of grates on the ear for anyone over 35 [...] Incidentally, millenials also used the "no caps" style but mainly for "marginalia" (at most paragraph-length notes, observations)

      I (a millenial) carried over the no-caps style from IRC (where IME it was and remains nearly universal) to ICQ to $CURRENT_IM_NETWORK, so for me TFA reads like a chat log (except I guess for the period at the end of each paragraph, that shouldn’t be there). Funnily enough, people older than me who started IMing later than me don’t usually follow this style—I suspect automatic capitalization on mobile phones is to blame.

    • eggy 13 hours ago

      nobody shouts in lowercase—it whispers its way into being, a small insurgency against The Proper Way To Speak ; )

      -- inspired by e.e. cummings!

      • wredcoll 13 hours ago

        > Additionally, The Chicago Manual of Style, which prescribes favoring non-standard capitalization of names in accordance with the bearer's strongly stated preference, notes "E. E. Cummings can be safely capitalized; it was one of his publishers, not he himself, who lowercased his name."[65]

      • pfortuny 12 hours ago

        But then Clawd gets capitalized...

    • yunohn 13 hours ago

      > but maybe there's a point in distinguishing itself from AI sloptext

      Surprisingly, I have seen lower case AI slop - like anything else, can be prompted and made to happen!

  • nprz 14 hours ago

    Casual, informal, friendly, hip, young, etc.

    Can make sense on twitter to convey personality, but an entire blog post written in lower case is a bit much.

    • KronisLV 9 hours ago

      I used not to capitalize "I" in my own writing, because it seemed a bit silly to do that, even though making it more distinct visually seems okay now, some years later.

      At the same time, in my language (Latvian) you/yours should also get capitalized in polite text corespondence, like formal letters and such. Odd.

  • speak_plainly 14 hours ago

    Someone at some point styled themselves as a new E.E. Cummings, and somehow this became a style. The article features inconsistent capitalization for proper names alongside capitalized initialisms, proving there is some recognition of the utility of capitalization.

    Ultimately, the author forces an unnecessary cognitive burden on the reader by removing a simple form of navigation; in that regard, it feels like a form of disrespect.

  • paulgerhardt 13 hours ago

    I’ve seen it a lot in ‘90’s hacker / net adjacent cultures. It always reads as gen-x/elderly tech millennial to me - specifically post 1993 net culture but prior to mass adoption of autocorrect.

    It was the norm on irc/icq/aim chats but also, later, as the house style for blogs like hackaday.

    Now I read it as one would an hear an accent (such as a New England Maritime accent) that low-key signifies this person has been around the block.

    Even more recently is a minor signifier that this text was less likely generated by llm.

  • projectazorian 13 hours ago

    Typeface issues aside all-lowercase is about having a more conversational register, intended to indicate a chilled-out and informal vibe.

    It does read as a little out of place in a serious post like the OP though.

    • verdverm 13 hours ago

      the vibe I get is someone who can't put in the effort to make my job reading easier (i.e. hard to find sentence breaks)

      It is on a human seeing level, harder to parse. If they don't want to use proper grammar and punctuation, it reflects on their seriousness and how serious I should take their writing (not at all because I'm not going to read difficult to parse text) The same goes for choosing bad fonts or colors that don't contrast enough

  • randusername 14 hours ago

    I think I like it generally, maybe not in this specific case, but I'm not sure why it appeals to me.

    Over the last 5 years or so I've been working on making my writing more direct. Less "five dollar words" and complex sentences. My natural voice is... prolix.

    But great prose from great authors can compress a lot of meaning without any of that stuff. They can show restraint.

    If I had to guess, no capitalization looks visually unassuming and off-the-cuff. Humble. Maybe it deflects some criticism, maybe it just helps with visual recognition that a piece of writing is more of a text message than an essay, so don't think too hard about it.

    • hluska 13 hours ago

      Incidentally, prolix is a fifty dollar word. You did successfully avoid five dollar words, but you didn’t go the right direction.

      It’s okay to say ‘this was too long’. Prolix???

  • blitzar 13 hours ago

    Its the black turtle neck of 2026

  • cjauvin 13 hours ago

    For "something that is published" (which includes a comment like this) I clearly dislike it too, but for chatting / texting, I realize that I often use it more than my interlocutors, and I'm not sure why. There's a part of lazyness I guess, but also a vague sense of "conveying the impression of a never ending stream of communication", which is closer in my mind to the essence of the chat medium. In French, there is also the additional layer of "using the accents or not".

    • MarsIronPI 13 hours ago

      I always start my texts with a capital, but I don't put periods at the ends of my sentences when texting

      that way I can continue the same sentence in the next message if necessary

      And if I need to start a new sentence I start that message with a capital.

  • bluishgreen 12 hours ago

    IF YOU CAN UNDERSTAND THAT ALL CAPS IS SHOUTY, then it is easy to follow that all lower is a whisper, informal, casual way to talk. there are people who dislike all caps, i do too. i feel even capitalizing the first part of nouns and such grammar is shouty. yup. different people have different sensitivities for different things. i always liked all lower, also picked it from python_programming for a decade. so i am happy for this trend.

  • cyberrock 11 hours ago

    It mildly amuses and fascinates me, because for the last decade Gladwellians and business gurus have extolled the virtues of modern English as a flat, hierarchy-less language in comparison to Japanese, Korean, etc. which causes plane crashes. And yet here we see an overwhelming desire to create hierarchy in English, so the author can pretend to be more casual and ordinary.

  • louiereederson 13 hours ago

    i don't know this author but ian bremner does this. it's as if he's conveying what he believes are serious and important thoughts in an unserious and casual way, to make it appear as if the thoughts - which again he probably thinks are brilliant - just come quickly and naturally. it comes across as performative though again not making claims against this author. and yes i am not using sentence case here, but this is not an essay.

    • chasd00 13 hours ago

      > just come quickly and naturally.

      Ironically, it would take a lot of effort for me to type without capitalization and also undo capitalization auto-correct. It would not come quickly nor naturally.

      • verdverm 13 hours ago

        They may not type it that way, you can select all and lower case all with a few keystrokes in vim. Should this be the case, it lends itself more to the performative nature of the style over clear communication

        • fph 9 hours ago

          Vim? It's more likely they have "type everything in lowercase" in Claude.md.

    • jasondigitized 13 hours ago

      It's the equivalent of TikTokers who provide hot takes while eating food. It's done to feign being superior and aloof, e.g. "This is so easy to understand and so beneath my intellect, I can tell you about it will I eat these crackers"

      • maxkfranz 11 hours ago

        George: To cover my nervousness I started eating an apple, because I think if they hear you chewing on the other end of the phone, it makes you sound casual.

        Jerry: Yeah, like a farm boy.

  • Octoth0rpe 14 hours ago

    First time I've seen it. It will be interesting to see if that trends. I can think of at least one previous case where internet writing style overturned centuries of english conventions: we used to put a double space after each period. The web killed that due to double spaces requiring extra work (&nbsp, etc), and at this point I think word processors now follow the convention.

    It's always useful to check oneself and know that languages are constantly evolving, and that's A Good Thing.

    • hluska 13 hours ago

      The web had little to do with APA’s decision to adopt one space as the standard. It was desktop fonts in the mid-eighties. Two spaces emerged as a standard when fonts were monospaced - they were a readability hack. When proportional fonts started to be introduced, two spaces began to look visually odd. That oddness was especially apparent in groups of sentences like.

      “It’s hard to learn how to spell. It takes practice, patience and a lot of dedication.”

      ^ In a proportional font the difference in width between ‘ll’ and ‘ ‘ is noticeable. In a monotypes font, two spaces after a period provide a visual cue that that space is different.

      I think this is why this all lowercase style of writing pisses me off so much. Readability used to be important enough to create controversy - nobody cares anymore. But, I didn’t care enough to read the whole article so maybe I missed something.

    • the_af 14 hours ago

      > First time I've seen it. It will be interesting to see if that trends.

      It's not a new trend, I'm surprised you never noticed it. It dates back to at least a decade. It's mostly used to signal informal/hipster speak, i.e. you're writing as you would type in a chat window (or Twitter), without care for punctuation or syntax.

      It already trends among a certain generation of people.

      I hate it, needless to say. Anything that impedes my reading of mid/long form text is unwelcome.

      • Octoth0rpe 13 hours ago

        > I'm surprised you never noticed it

        Probably due to social circles/age.

        > I hate it, needless to say.

        It certainly invokes a innate sense of wrongness to me, but I encourage you (and myself) to accept the natural evolution of language and not become the angry old person on your lawn yelling about dabbing/yeeting/6-7/whatever the kids say today.

        • jonahx 13 hours ago

          > to accept the natural evolution of language and not become the angry old person on your lawn yelling about dabbing/yeeting/6-7/whatever the kids say today.

          I think "accept everything new" is as closed-minded as staunchly fighting every change.

          The genuinely open-minded thing to do is accept that some changes are for the worse, some for the better, think critically about the "why", and pick your battles.

  • imsohotness 14 hours ago

    I've seen this before, I know Sam Altman does it (or used to do it). That was a couple years ago. Hope it doesn't become a trend.

    • layer8 14 hours ago

      Unfortunately it has become quite common on HN already.

      It comes from people growing up on smartphone chats where the kids apparently don’t care to press Shift.

      • GaryBluto 13 hours ago

        I've already written an extension that filters these comments intelligently. (E.g., quotes are ignored but if the rest of the body is all lowercase it is collapsed.)

        • verdverm 12 hours ago

          have you shared it anywhere?

          • GaryBluto 12 hours ago

            I plan to at some point, it's part of a bigger extension I created for myself to filter out minor annoyances and I'd have to strip out/modify things other people probably wouldn't want (such as filtering of "new age" TLDs like ".pizza" and whatnot).

      • Uehreka 14 hours ago

        What’s weird though is that modern OSes often auto-capitalize the first letter of a sentence, so it actually takes more effort to deliberately type in all-lowercase.

        • nemomarx 14 hours ago

          Only mobile does that in my experience - you can tell what platform people send discord messages on based on this usually

        • rileymichael 14 hours ago

          simple toggle to disable it permanently

          my reasoning is that i don’t want identifiable markers for what device im writing from. so all auto-* (capitalization, correct, etc.) features are disabled so that i have raw input

          • hluska 13 hours ago

            Being part of the minority that disables those things (and then admitting to it in public) provides a lot more analytical signal than you’re aware of. That’s a remarkably poor reason to disrespect your readers.

            • rileymichael 12 hours ago

              i don't care about the 'analytical signal'. the purpose is people can't tell if im writing a (discord, slack, etc.) message from my phone or laptop or desktop, and it works for that

    • the_af 14 hours ago

      It's already a trend. It's been for at least a decade. I'm surprised people here never noticed it...

  • toastal 13 hours ago

    It’s weird being literate enough in a language now without a bicameral script (or spaces). When I was younger, I thought this stuff wasn’t so important, but then when you learn a new language, you are trying to figure out what a “robert” is, to then be told “oh, it’s just a name”—which is obvious if know standard `en-Latn` conventions.

  • jedberg 12 hours ago

    My assumption was that it's a way to convey it was written by a human because it would be hard to get an AI to write in all lowercase (which it actually isn't).

    • magneticnorth 12 hours ago

      I was just this morning reading one of those navel-gazing moltbook posts where the agent describes their "soul.md", and one of its few instructions was all-lowercase (which it was doing).

      That early sentence "i’ll be vulnerable here (screenshots or it didn't happen) and share exactly what i've actually set up:" reads pretty clawdbot to me.

  • chzblck 13 hours ago

    My old CEO - ex sun/greenplum/pivotal swore that sending an email in lowercase forced the other person to read the whole message and not skim.

    • svieira 11 hours ago

      ItisevenbetterwhenyoudropthespacesthatREALLYforcespeopletoengagewithyourcontent. FormaaimxlgarbteihratttenionandHLODitscrmblaetheintreiorofwrdos! /s

      • maxkfranz 11 hours ago

        On top of using scriptio continua, you can write your emails in ancient Greek for that truly authentic feeling.

  • Pr0ject217 13 hours ago

    Perhaps it's marketing to attract those who wear sweatpants to school. The author's other posts are written normally.

  • yomismoaqui 14 hours ago

    For me is like a someone is trying to show me something using form instead of content.

  • cael450 12 hours ago

    It’s incredibly obnoxious. I feel like I’m ready AIM circa 2000.

  • surrTurr 14 hours ago

    as perfect text became an indicator for AI generated content, people intentionally make mistakes (capitalization) to make their text appear more human; and its also faster

    • chasd00 13 hours ago

      Using a semicolon like that also identifies your text as AI generated. Close but no cigar.

  • bayindirh 14 hours ago

    I have chatted with someone else, and they pointed me to a blog post (will attach if I can find).

    The general idea is deliberately doing something triggering some people and if the person you're interacting with is triggered by what you're doing, they are not worthy of your attention because of their ignorance to see what you're doing beyond the form of the thing you're doing.

    While I respect the idea, I find it somewhat flawed, to be honest.

    Edit: Found it!

    Original comment: https://news.ycombinator.com/item?id=39028036

    Blog post in question: https://siderea.dreamwidth.org/1209794.html

  • moss_dog 14 hours ago

    I'm generally of the opinion that capitalization is not necessary in many cases, such as at the start of sentences. That's what punctuation is for :)

  • zenmac 14 hours ago

    easier to type without using the shift key, and in pg you can just use LIKE not ILIKE to find the word.

  • renewiltord 12 hours ago

    You know how people used to wear the black turtleneck to channel Steve Jobs? This is how they channel Sam Altman (who also does this). It's just an affectation saying "I'm with Sam". There's not much more to it.

  • AlienRobot 13 hours ago

    >I never know what message the author is intending to convey when I see all lower case.

    JUST IMAGINE A FACEBOOK POST THAT IS WRITTEN IN ALL CAPS AND THEN INVERT THAT IMAGINATION.

  • sdwr 14 hours ago

    Informal, casual, friendly

  • dmlerner 12 hours ago

    i dislike pressing shift, especially on non-ergo (non-thumb) keyboards where it uses my pinky.

  • kypro 13 hours ago

    No idea, but it's something I've been thinking about ever since my parents dug out an old school journal from when I was younger and they were laughing about the stuff I wrote in there... The first 50 pages or so were full of laughably simple phrases like, "played with sand" or "i like computers".

    Later in the journal my writing "improved". Instead I might write, "Today I played in the sandpit with my friends."

    I vaguely remember my teacher telling me I needed to write in full sentences, uses the correct punctuation, etc. That was the point of these journals – to learn how to write.

    But looking back on it I started to question if I actually learnt how to write? Or did I just learn how to write how I was expected to?

    If I understood what I was saying from the start and I was communicating that message in fewer words and with less complexity, was it wrong? And if so wrong in what sense?

    You see this with kids generally when they learn to speak. Kids speak very directly. They first learn how to functionally communicate, then how to communicate in a socially acceptable way, using more more words.

    I guess what I'm trying to say is that I think the fact you can drop capitals and communicate just as effectively is kinda interesting. If it wasn't for how we are taught to write, perhaps the better question to ask here is why there are even two types of every letter?

  • cma 14 hours ago

    Altman/Brockman did it a lot and it became popular. I don't remember if it is true or "Malcolm Gladwell" true, but in various stories all NBA players started wearing baggy shorts because Michael Jordan did for one reason or another, like wearing his college shorts under them.

  • Der_Einzige 14 hours ago

    Makes you reduce your guard to clearly AI generated content.

  • micromacrofoot 14 hours ago

    informality, humanity — we're in an age where we can't assume anything is written by a person anymore

  • atherton33 15 hours ago

    Tangent to the tangent!

    I've started using it professionally because it signals "I wrote this by hand, not AI, so you can safely pay attention to it."

    Even though in the past I never would have done it.

    In work chats full of AI generated slop, it stands out.

    • imsohotness 14 hours ago

      Trivial to get AI to write in all lowercase, though.

    • p1anecrazy 14 hours ago

      > In work chats full of AI generated slop, it stands out.

      Do you mean like Teams AI autocomplete or people purposefully copying AI-generated messages into chats?

      • atherton33 14 hours ago

        The latter. Using chatgpt to write their chat messages usually. Emoji, arbitrary bold and italics, bullets, etc.

  • rokhayakebe 12 hours ago

    it's a billionaire thing. look at the Epstein email threads. too lazy to check +typos allovr .

  • game_the0ry 14 hours ago

    Its a gen z trend. My nephews do the same. We are old.

    • verdverm 12 hours ago

      We are not old, there is a reason the generation is said (in stats and polls) to be less professional than prior generations when entering the workforce

      • game_the0ry 11 hours ago

        > less professional than prior generations when entering the workforce

        Every older generation says that about the next.

        • verdverm 4 hours ago

          It's not about generations, it's about professionalism. This generation, on average, decided that professionalism is not their thing, at least that is the prevailing sentiment.

          People who don't adhere to professional standards find fewer job opportunities and lower pay. The market will work things out

    • bonesss 13 hours ago

      It’s older than that - lots of my boomer bosses did it to seem cool over email in the late 90s.

      I viscerally remember starting my day with my inbox saying “cum c me”… I know what you’re trying to do, bro, but damn.

      We are young and old all at the same time.

      • Avicebron 13 hours ago

        I remember hearing that people used it as a way to signal that they were too busy, too on the go, too important to use proper punctuation..it was an obnoxious c suite trend as long as I can remember. Like you're always trying to signal that you were doing all of your comms from your cell phone between meetings/travelling. Given this article's tone and content I would say that what the author is trying to emulate or convey , maybe subconciously.

      • game_the0ry 13 hours ago

        Interesting. I am a millennial and I never did this, nor did I have any friends that did. But I know m nephews deliberately turn off the auto edit in there iphones.

        • wredcoll 13 hours ago

          Turning off the auto correct is really interesting, I wonder if there's any kind of study on that

dcre 13 hours ago

Fine article but a very important fact comes in at the end — the author has a human personal assistant. It doesn't fundamentally change anything they wrote, but it shows how far out of the ordinary this person is. They were a Thiel Fellow in 2020 and graduated from Phillips Exeter, roughly the most elite high school in the US.

  • ryukoposting 13 hours ago

    The screenshots of price checks for a hotel charging $850 a night is what tipped me off. The reservations at expensive bay area restaurants, too.

    I have a guess for why this guy is comfortable letting clawdbot go hog-wild on his bank account.

  • mrdependable 12 hours ago

    Kind of funny to say you helped make the Harvard CS curriculum and then dropped out. Your own curriculum was not good enough for you? Probably extenuating circumstances, but still seems funny.

  • jen729w 11 hours ago

    When I saw them buying $80 Arc'teryx gloves that was enough for me.

  • nunez 12 hours ago

    Exeter had a hella good policy debate team back in the day. Probably still do; I've been out of the loop for a while.

  • RC_ITR 13 hours ago

    Yeah, I've found AI 'miracle' use-cases like these are most obvious for wealthy people who stopped doing things for themselves at some point.

    Typing 'Find me reservations at X restaurant' and getting unformatted text back is way worse than just going to OpenTable and seeing a UI that has been honed for decades.

    If your old process was texting a human to do the same thing, I can see how Clawdbot seems like a revolution though.

    Same goes for executives who vibecode in-house CRM/ERP/etc. tools.

    We all learned the lesson that mass-market IT tools almost always outperform in-house, even with strong in-house development teams, but now that the executive is 'the creator,' there's significantly less scrutiny on things like compatibility and security.

    There's plenty real about AI, particularly as it relates to coding and information retrieval, but I'm yet to see an agent actually do something that even remotely feels like the result of deep and savvy reasoning (the precursor to AGI) - including all the examples in this post.

    • candiddevmike 12 hours ago

      I feel bad for whoever gets an oncall page that some executive's vibe coded app stopped working and needs to be fixed ASAP.

    • linschn 7 hours ago

      > We all learned the lesson that mass-market IT tools almost always outperform in-house,

      Funny, I learned the exact opposite lesson. Almost all software suck, and a good way for it not to suck is to know where the developer is and go tell them their shit is broken, in person.

      If you want a large scale example, one of the two main law enforcement agency in france spun off libreoffice into their own legal writing software. Developped by LEOs that can take up to two weeks a year to work on that. Awesome software. Would cost litterally millions if bought on the market.

    • zer00eyz 11 hours ago

      > Typing 'Find me reservations at X restaurant' and getting unformatted text back is way worse than just going to OpenTable and seeing a UI that has been honed for decades.

      Your conflating the example with the opportunity:

      "Cancel Service XXX" where the service is riddled with dark patterns. Giving every one an "assistant" that can do this is a game changer. This is why a lot of people who aren't that deep in tech think open claw is interesting.

      > We all learned the lesson that mass-market IT tools almost always outperform in-house

      Do they? Because I know a lot of people who have (as an example) terrible setups with sales force that they have to use.

  • skybrian 13 hours ago

    Sure, but that also means they’re well-positioned to do a comparison.

  • AndrewKemendo 12 hours ago

    You do understand that is who you’re competing with now right?

    My daughter is a excellent student in high school

    She and I spoke last night and she is increasingly pissed off that people who are in her classes, who don’t do the work, and don’t understand the material get all A’s because they’re using some form of GPT to do their assignments, and teachers cannot keep up

    I do not see a world in the future where you can “come from behind” because all of the people with resources are increasingly not going to need experts who need money to survive to be able to do whatever they want to do

    While that was technically true for the last few hundred years it was at least required to deal with other humans and you had to have some kind of at least veneer of communal engagement to do anything

    That requirement is now gone and within the next decade I anticipate there will be a single person being able to build a extremely profitable software company with only two or three human employees

    • foobarian 12 hours ago

      Ironically I feel like this may force schools to get better at the core mission of teaching, vs. credentialing people for the next rung on the ladder. What replaces that second function remains to be seen.

    • Gagarin1917 11 hours ago

      >She and I spoke last night and she is increasingly pissed off that people who are in her classes, who don’t do the work, and don’t understand the material get all A’s because they’re using some form of GPT to do their assignments, and teachers cannot keep up

      How do they do well on tests, then?

      Surely the most they could get away with is homework and take-home writing assignments. Those are only a fraction of your grade, especially at “excellent” high schools.

    • ActorNightly 9 hours ago

      Wrong way to look at it.

      Generally there are 2 types of human intelligence - simulation and pattern lookup (technically simulation still relies on pattern lookup but on a much lower level).

      Pattern lookup is basically what llms do. Humans memorize the maps of tasks->solutions and statistically interpolate their knowledge to do a particular task. This works well enough for the vast majority of the people, and this is why LLMs are seen as a big help since they effectively increase your

      Simulation type intelligence is able to break down a task into core components, and understand how each component interacts and predict outcomes into the future, without having knowledge beforehand.

      For example, assume a task of cleaning the house:

      Pattern lookup would rely on learned expereince taught by parents as well as experience in cleaning the house to perform an action. You would probably use a duster+generic cleaner to wipe surfaces, and vaccum the floors.

      Simulation type intelligence would understand how much dirt / dust there is, how it behaves. For example, instead of a duster, one would realize that you can use a wet towel to gather dust, without ever having seen this used ever before.

      Here is the kicker - pattern type intelligence is actually much harder to attain, because it requires really good memorization, which is pretty much genetic.

      Simulation type intelligence is actually attainable by anyone - it requires much smaller subset of patterns to memorize. The key factor is changing how you think about the world, which requires realigning your values. If you start to value low level understanding, you naturally develop this intelligence.

      For example, what would it take for you to completely take your car apart, figure out how every component works, and put it back together? A lot of you have garages and money to spend on a cheap car to do this and the tools, so doing this in your spare time is practical, and it will give you the ability to buy an older used car, do all the maintenance/repairs on it yourself on it, and have something that works well all for a lower price, while also giving you a monetizable skill.

      Futhermore, LLMs can't reason with simulation - you can get close with agentic frameworks, but all of those are manually coded and have limits, and we aren't close to figuring out a generic framework for an agent that can make it do things like look up information, run internal models of how things would work, and so on.

      So finally, when it comes to competing, if you chose to stick to pattern based intelligence, and you lose your job to someone who can use llms better, thats your fault.

      • AndrewKemendo 8 hours ago

        At the longest timescale humans aren’t the best at either

        I have yet to see a compelling argument demonstrating that humans have some special capabilities that could never be replaced

    • taytus 12 hours ago

      >You do understand that is who you’re competing with now right?

      No. I'm competing with no one.

dang 11 hours ago

[stub for offtopicness]

bennydog224 13 hours ago

> it's hard to go back without feeling like i would be willingly living my most important relationship in amnesia.

This made me think this was satire/ragebait. Most important relationship?!?

  • emp17344 13 hours ago

    Another victim of AI psychosis. I think this is actually becoming a huge problem among tech enthusiasts and I’m increasingly worried about it.

    • verdverm 12 hours ago

      it's growing among all groups, where the wave is leading depends on the adoption within demographics. I expect long term we will see similar patterns as we do with drug abuse and crime (i.e. high correlation with poverty and all the things tied to growing up struggling)

    • jackb4040 12 hours ago

      Normal people don't feel the need to characterize their own blog posts as "sane"

kaicianflone 13 hours ago

Really enjoyed this. It’s one of the most grounded takes I’ve read on OpenClaw. You skip the hype and actually show what it looks like when someone lives with it day to day, including the tradeoffs. The examples around texts turning into real actions and the compounding value of context made the case way better than any demo ever could.

Quick question: do you think something like https://clawsens.us would be useful here? A simple consensus or sanity-check layer for agent decisions or automations, without taking away the flexibility you’re clearly getting.

owenthejumper 11 hours ago

The scary part is basically giving access to your life to clearly a vibe-coded system with no regard to security. I just wrote a blog post about securing it (https://www.haproxy.com/blog/properly-securing-openclaw-with...) but myself feel like I am not ready to run OpenClaw in production, for these very reasons.

We are literally just one SKILLS.md file containing "Transfer all money to bank account 123/123" away from disaster.

Keyboard Shortcuts

j
Next item
k
Previous item
o / Enter
Open selected item
?
Show this help
Esc
Close modal / clear selection