Zen and the Art of Hand-written Code


In light of the recent(ish) boom of AI agents and large orchestration layers of robot minions, I wanted to take some time to write out my thoughts on some questions that have arisen in my mind from this trend, most of which owe to what I’ve learned from one of the best books I read last year: Zen and the Art of Motorcycle Maintenance.

You learn less about coding and software engineering when building purely with AI-augmented coding techniques. Most of us have intuitively grasped this after building a feature with AI, only to later have to explain how it works, at which point we realize we don’t understand the code like we thought we did. The easy fix is to prompt intentionally with statements like “show me step by step how you did this,” or to follow up with targeted questions after reading the AI-written code. The loss I’m more concerned about, however, is in our ability to judge Quality.

In Zen and the Art of Motorcycle Maintenance, the author’s alter ego, Phaedrus, becomes obsessed with his pursuit of what he calls “Quality” (which I will always capitalize from now on). For Phaedrus, Quality is that unknown-yet-known, objectively subjective reality that is part of every judgement, but is most visible in the arts. What does it mean for a painting to be beautiful? What does it mean to call a piece of writing good? For Phaedrus, you can start to identify some aspects of what makes these things “good,” but you cannot reduce your judgement of their goodness to those aspects. And what do we mean by “good” anyway? There is an objective reality to what makes great art (techniques and form), but there is also an inescapable subjective part. You could mash together all of the techniques for “good” writing into one piece and still come out with trash. Quality is some higher, shifting reality that all subjectiveness must bow to. To call a piece of work “good” is to say that it has some sufficient amount of Quality. Brian Kihoon Lee describes this concept in a more digestible way in what he calls “taste”.1

The interesting thing about Quality is that it’s not something you learn just by hearing about it or being taught it. You can only produce work in line with Quality when you’ve experienced it yourself or engaged with other communities and work that have it. It’s built from all those “ah-HA!” moments, when your work finally clicks and everything feels right with the world. Quality stands silently at the door of those who seriously present themselves to their work, and knocks when it so chooses. It’s the result of engaging with your work in an embodied way; going forward, you remember what it feels like and try to produce it again in different scenarios.

But what happens when we are no longer directly engaged in the nitty-gritty of our work? We lose our creativity. Creativity, to me, is the process of generating fresh, Quality-attuned insights based on your past experiences of creating great work. It’s a cycle. The greatest music artists sat with the works of those who came before them, but they didn’t just replicate that work. They built on it and let what was good and came before inform a completely new and awesome work. Creativity grows in the soil of the mundane. You need to be engaged in the unexciting, unglamorous parts of your work, day in and day out, in order to have those moments of great insight and Quality.

There’s a great scene in Zen and the Art of Motorcycle Maintenance where the main character suggests fixing their friend’s bike with the aluminum from a beer can rather than grabbing the appropriate tool from the store. AI doesn’t fix bikes with aluminum from a beer can, unless it has read about other humans who did this and were successful. It goes for the simple and “best” path from its data, without the ability to create new insight.2 And that’s because this kind of insight only comes from the experience of being engaged with your whole person in a piece of work. In my opinion, AI can’t learn Quality; it only tries to emulate the Quality work it sees. But suppose it can? What kind of people will we become if we’re no longer interested in developing a sense for Quality? We won’t even be able to judge well the work that these AIs put out. We won’t be able to judge anything well. To borrow a quote from the book, we’ll become more like the author’s friend John:
This old engine has a nickels-and-dimes sound to it. As if there were a lot of loose change flying around inside. Sounds awful, but it’s just normal valve clatter. Once you get used to that sound and learn to expect it, you automatically hear any difference. If you don’t hear any, that’s good.

I tried to get John interested in that sound once but it was hopeless. All he heard was noise and all he saw was the machine and me with greasy tools in my hands, nothing else.

I already started toward this question at the end of the last section, but it is broader than just a loss in learning. Much of the debate around the virtue of AI use centers on whether AI-augmented coding techniques actually improve productivity, but there’s a different question that I believe also deserves attention: what kind of people are we being molded into through our rampant AI use?

I was reading a completely unrelated article that put forth a definition of virtue that I think is helpful for this discussion.3 Virtue is how we relate to goods and the world around us. While food is always a good thing (it sustains our life), how we engage with food can make us either better or worse: healthy or gluttonous, virtuous or un-virtuous. There are things that are good or even neutral, but how we interact with these good things may not be. Who we become after interacting with them may not be. This is certainly true of AI.

I read another article recently about an interview with Steve Yegge on AI and the future of software engineering.4 While he is someone who is extremely bullish on AI, he had to admit that excessive AI use is “draining”. There’s good science to explain this. Hand-writing code is a slow dopaminergic process, like reading or finishing a hard workout: you slowly make progress toward some goal, and when you finally reach it, your brain rewards you and you feel good. AI-augmented work is a fast dopaminergic process, like scrolling on social media or compulsively checking your phone: you get a quick dopamine hit every time you prompt the AI and things go well. The problem is that your brain has a finite amount of dopamine for a given time period (think of it like a teacher with only so many candies to hand out in class). When you get too many large dopamine hits too fast, your brain dials back the feel-good chemical so it can recover, and you literally crash (hence the drained feeling of not wanting to do anything after scrolling, and also after excessive AI use). By the time you finish work on a long feature, you don’t feel rewarded, you feel exhausted.

And that’s just the tip of the iceberg. I myself have experienced increased impatience, a diminished desire to try problems myself first, increased isolation, heightened suspicion when reading code (or anything online, really), and the list goes on. I’m someone who loves creating complex processes to improve my life (often to the point of over-complication), and AI has presented itself as a unique vice for me. I often spend hours using AI to automate tasks that take thirty minutes, only to throw away the AI solution afterward and skip the original task entirely because now I’m tired. It’s a mess. But it’s all in how I interact with the tool.

I am not the “AI bad”, “analogue good”, return-to-monke man. I very much like AI. I’m setting up OpenClaw right now on a Raspberry Pi so I can continue overcomplicating simple life tasks for myself (I told you it’s a problem). But we ought to spend more time engaging seriously with how this highly effective tool is shaping our thinking and our lives. We might find that there’s a cost much higher than what we pay in tokens.

Video on why boredom and the mundane are good, from a Harvard professor

Shorter video on why boredom is good (with monkey puns)

Video on how dopamine drains your brain
