I’m currently on a diet, and I’ve never been so annoyed by all the diet and calorie tracking apps.
I’m already in the Apple ecosystem, and Apple Health is a perfectly sufficient health database! All I wanted was an interface to my nutrition data that tracks and shows me exactly what I need to know. But these diet apps all want to compete with Apple Health to be the database. They present a fixed interface where you select food from a list (some have photo recognition now, but many still have a limited database), and they show you some predefined charts of your weight, your calorie consumption, or maybe your exercise volume. We’re locked into their premade interfaces.
But AI agents are a better interface. I built an app, HealthBridge (GitHub), for my agent1 to read and write my Apple Health data in ~real time.
Here’s what this unlocks:
Photo-based meal logging/coaching: similar to many diet tracking apps, I can send my agent a photo of my meal, and it will estimate2 and write calories and macronutrients to my Apple Health:
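To make the logging step concrete, here’s a minimal sketch of the kind of arithmetic involved: turning an agent’s macronutrient estimates into a calorie figure using the standard Atwater factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat). The function and the meal numbers are hypothetical, not HealthBridge’s actual code:

```python
# Standard Atwater factors: kcal per gram of each macronutrient.
ATWATER = {"protein_g": 4, "carbs_g": 4, "fat_g": 9}

def estimate_calories(macros: dict[str, float]) -> float:
    """Sum calories from estimated grams of each macronutrient."""
    return sum(grams * ATWATER[name] for name, grams in macros.items())

# Hypothetical estimate for a chicken-and-rice meal.
meal = {"protein_g": 35, "carbs_g": 60, "fat_g": 20}
print(estimate_calories(meal))  # 35*4 + 60*4 + 20*9 = 560
```

The calorie total and the individual macro grams can then each be written to Apple Health as separate nutrition samples.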
Unlike other apps, it can coach me on what to eat, and it might shame me:
Real-time coaching at the gym: I had never really done cardio at the gym, but my agent suggested adding some Zone 2 cardio in my workout program. When I’m at the gym, I can ask it to check whether I’m in Zone 2!
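The check itself is simple arithmetic the agent can do on live heart-rate samples. One common approximation (an assumption, not HealthBridge’s definition) puts Zone 2 at roughly 60–70% of maximum heart rate, with max heart rate estimated as 220 minus age:

```python
def zone2_range(age: int, low: float = 0.60, high: float = 0.70) -> tuple[int, int]:
    """Approximate Zone 2 as 60-70% of estimated max heart rate (220 - age)."""
    max_hr = 220 - age
    return round(max_hr * low), round(max_hr * high)

def in_zone2(bpm: int, age: int) -> bool:
    """Is a heart-rate sample inside the approximate Zone 2 band?"""
    lo, hi = zone2_range(age)
    return lo <= bpm <= hi

lo, hi = zone2_range(30)          # max HR ≈ 190 → roughly 114-133 bpm
print(lo, hi, in_zone2(125, 30))  # 114 133 True
```

Both formulas are population-level rules of thumb; a coach (human or agent) would adjust the band for the individual.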
Ad hoc analysis of my data: I thought I had adhered to my diet and exercise program, but my weigh-ins were all over the place and I didn’t see a clear trend. My agent pulled my data to help me understand what’s going on. Because it can write code to analyze the data, I can ask it whatever I’d like!
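As a hypothetical example of the throwaway code such an agent might write: noisy daily weigh-ins can hide a trend that a simple trailing moving average exposes. The helper and the sample weights below are made up for illustration:

```python
def moving_average(values: list[float], window: int = 7) -> list[float]:
    """Trailing moving average; uses shorter windows at the start of the series."""
    return [
        sum(values[max(0, i - window + 1): i + 1])
        / len(values[max(0, i - window + 1): i + 1])
        for i in range(len(values))
    ]

# Fabricated daily weigh-ins in kg: noisy point-to-point, but trending down.
weights = [82.4, 82.9, 82.1, 82.6, 81.8, 82.3, 81.9, 82.2, 81.6, 81.8]
smoothed = moving_average(weights)
print(f"first: {smoothed[0]:.2f} kg, last: {smoothed[-1]:.2f} kg")
```

The point isn’t this particular analysis; it’s that the code is disposable, written on demand to answer whatever question I happen to have.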
And so on.
Once you can talk to an agent to operate on your own data, the space of possible workflows and experiences explodes. Software is turning into a liquid, and we need to fundamentally rethink how we interact with computers.
Over the last six months we’ve watched a new type of software take shape in real time: ephemeral, written and executed à la minute, to communicate with you or achieve a goal you asked for. Claude’s interactive visuals are clearly a step in this direction where UI becomes generated (Ethan Mollick covered this better than I could). Surely this shift will not stop at the UI layer.
I’ve been trying to understand why OpenClaw took off, and it took me a few attempts. At first, I put it in a Docker container with very limited access to my files (like any security-minded software person would), which made it a more complicated ChatGPT. But after my friend Xiao gently nudged me3, I put it on a Mac VM and gave it an email and limited access to some of my accounts (calendar, Spotify, to name a few). This caused a step change in how it felt interacting with OpenClaw: because it now has my data and its capabilities are easily augmented, it can customize its workflows for me.
Our existing experience with computers is largely mediated by apps made by large companies for the average consumer. These apps are built with static user workflows in mind, defined by product teams; they are often optimized for revenue and retention, and they retain your data as the moat. As users, we are expected to learn and adapt to their predefined workflows.
But we don’t always want the same workflows. AI-first experiences are much more personal. Your workflows, data, and tools are yours and portable, no longer locked in with the services you use. When your documents live in Google Drive/Microsoft Office/Notion, you’re limited by the imagination and priorities of Google/Microsoft/Notion. But when those same documents can be freely modified by tools you can download or build, you’re only bottlenecked by your own creativity and your agents’ capabilities, both of which you have agency over.
In short, OpenClaw represents something fundamentally different from the chatbot apps today: it’s an interface to software generated for my needs, operating on my data, evolving with my own workflow, whose capabilities I can augment. Sure, these agents live in a messaging app and can only do so much within the confines of these messages; this will quickly change. Once it does, our experience with computers will no longer be the same.
In building HealthBridge for myself, I saw the old world coexist with the new. On the “new” side, chatting with the agent gave me my data on demand, with disposable code answering my arbitrary questions; on the “old” side, the iOS app, the CLI, and the serverless relay are still the traditional kind of software that sticks around. They still need some minimum level of reliability for the whole thing to function.
As software engineers we’re trained to write clean code and design maintainable systems (remember how long it takes to get C++ readability, fellow Xooglers?), but AI agents are happy to churn out anything to satisfy your prompt. Claude coded HealthBridge for me, and even with a tiny project like this, there’s a lot of dead code and weirdly-engineered abstractions. Is this maintainable?
HealthBridge is low-stakes (although I still read the end-to-end encryption code), but what about everything else? After all, ephemeral software still runs on durable infrastructure. So where does the new world leave the durable layer? How should we do large-scale engineering today with reliability guarantees in mind, given that AI will write more and more code? Where should we fall on the “Zechner-Lopopolo continuum”?
I genuinely don’t know the answer, but I know these don’t work yet:
“AI, write better code, make no mistakes”: does the AI have intuition for when it needs to write durable vs. ephemeral code? How does it know what “better” is?
“AI will maintain the bad code”: can AI tell one Chesterton’s Fence from another, with limited history encoded in the codebase itself? What if the agent’s abstractions are nice but wrong? Do agents know when to refactor to reduce future maintenance burden?
“AI will just verify its own work”: how do agents know what/how to verify? They often write useless tests and miss critical ones. Note the proliferation of code review bots on GitHub: it’s a gesture towards a solution, but they often miss subtle edge cases (granted, humans do too).
Maybe the solution is somewhere in between, where we decide what’s critical, be hands on where it matters, and delegate everything else. We will find out soon enough.
Back to the app. HealthBridge is open source (MIT licensed). You can point your agent at the repo to set it up on your computer, but you’ll need to build the iOS app and self-host the Cloudflare worker (both are free).
If you want to use my Cloudflare setup and the TestFlight iOS build with minimal setup, DM me on twitter!



