How AI Saved Me 30 Minutes

This title may seem tongue-in-cheek in a world where seldom does a day go by without hearing a friend, colleague, or viral post touting how AI saved them hours, if not days, of engineering work. In fact, just this week a friend humble-bragged to me about how he used AI to implement a self-hosted DNS server in Go, and he didn't even know Go!

Alas, the title is accurate. Ironically, I've been too busy to play around with AI. Moreover, using AI requires a measure of 'letting go' which is difficult for me to stomach. In the rare moments I've decided to test AI, I've been left unimpressed and concluded that I shouldn't have bothered with it in the first place. I blamed AI for not being ready for my needs. However, as I keep hearing others' positive testimonies, I'm changing my tune: maybe I'm the one not ready for AI.

In this post I want to share how I took a small step towards improving my relationship with AI. More important than the 30 minutes saved, I developed a little more trust in AI output and got better at prompting.

I've realized that learning how to use AI is like climbing with a new climbing partner. We can't expect to climb K2 together in the first year. That's setting the relationship up for failure. Maybe my initial tests with AI fell into this trap. Instead, we first need to build trust on easier climbs and learn how the other communicates before embarking on more challenging routes.

What I describe below is an 'easy climb'.


On Monday evening, I deployed what I thought would be an innocuous change and called it a day. The following morning, I noticed an unusual increase in 500s during my routine observability checks. It wasn't high enough to alert me, but ~200 users had been affected over 12 hours. I quickly deployed the fix. Some head-shaking may have been involved.

As is my custom whenever something like this happens, I send the affected users an email apologizing for the technical issue and ask them to retry whatever they were doing. This is where I decided to play with AI. Based on my previous experiences, I wasn't hopeful.

The climbing-K2 version of my prompt would've been: 'figure out the users who've been affected by this bug and send them an apology email'. Disappointment guaranteed. Instead, I broke things down and used AI for well-defined, constrained tasks.

First, I manually grabbed all the requests that errored out from New Relic and converted them to a JSON payload containing the individual request details, such as the URI.

Not every request URI gave me information about the affected user. The LLM helped me filter out events in the JSON that carried no user information, and it also helped parse the ids out of them. Some examples of prompts where the LLM got things right:

  • Remove events where the URI doesn't contain an integer or a 7 digit alphanumeric code
  • Remove events with URI starting with /book or /author
  • Fix JSON
  • Parse out the user id or 7 digit alphanumeric code from the URI and give me a comma separated list with the resulting identifiers
  • Parse ids from the events list where the URI is of the form /book-review/*/add
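Taken together, these prompt steps amount to a small filter-and-extract script. Here's a minimal sketch of the same logic in Python; the event shape, the sample URIs, and the exact id pattern are my assumptions for illustration, not the post's actual data:

```python
import re

# Hypothetical event shape: each error event carries the request URI.
events = [
    {"uri": "/book/123"},                 # starts with /book: no user info
    {"uri": "/user/4821/profile"},        # integer user id
    {"uri": "/book-review/a1b2c3d/add"},  # 7-digit alphanumeric code
    {"uri": "/about"},                    # no identifier at all
]

# Matches a path segment that is either an integer or a 7-digit alphanumeric code.
ID_PATTERN = re.compile(r"/(\d+|[a-z0-9]{7})(?:/|$)")

def extract_id(uri):
    """Return the integer id or 7-digit code found in the URI, or None."""
    match = ID_PATTERN.search(uri)
    return match.group(1) if match else None

# Drop events with no user information, mirroring the prompts above.
# (Trailing slash on "/book/" keeps "/book-review/..." events, which do carry ids.)
filtered = [
    e for e in events
    if not e["uri"].startswith(("/book/", "/author/")) and extract_id(e["uri"])
]

# Comma-separated list of identifiers, as requested in the fourth prompt.
ids = ",".join(extract_id(e["uri"]) for e in filtered)
print(ids)  # → "4821,a1b2c3d"
```

The point isn't that this code is hard to write by hand; it's that each prompt mapped cleanly onto one small, verifiable transformation, which made the LLM's output easy to check.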

The last prompt didn't work as expected at first. The LLM response missed some ids. I told it so. To my pleasant surprise, the LLM responded that it didn't have a full view of the file content. Indeed, my IDE was not sharing the full JSON file; once it did, the LLM performed as expected. Moments like this won my confidence. I especially liked how the LLM shared its thought process with me. This made troubleshooting easier.

The LLM also generated the code that queried my database for the target users and enqueued the emails to them. What impressed me was that it was familiar with my ORM syntax, models, relationships between the models, and model attributes. It also added print statements and comments for easier readability. Save for one minor issue, the code worked on the first attempt.
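The post doesn't share the ORM, models, or mailer involved, so the following is only a rough stand-in for what such generated code might look like, using plain sqlite3 and made-up table and column names in place of the real ORM calls, and a plain list in place of a real email job queue:

```python
import sqlite3

# Hypothetical schema -- the real app's models and relationships are unknown.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [
    (4821, "reader@example.com"),
    (9001, "other@example.com"),
])

# Ids parsed from the error events in the previous step.
affected_ids = [4821]

# Stand-in for a real background job queue.
email_queue = []

# Fetch only the affected users.
placeholders = ",".join("?" for _ in affected_ids)
rows = conn.execute(
    f"SELECT id, email FROM users WHERE id IN ({placeholders})", affected_ids
).fetchall()

for user_id, email in rows:
    # Progress output, in the spirit of the print statements the LLM added.
    print(f"Enqueueing apology email to user {user_id}")
    email_queue.append({"to": email, "template": "apology"})

print(f"Enqueued {len(email_queue)} email(s)")
```

The shape is the same as the generated code the post describes: look up the target users, then enqueue one email per user, with prints for visibility while it runs.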

Seasoned AI users may read the above and react like Mark Hanna in Wolf of Wall Street: 'You gotta pump those numbers up. Those are rookie numbers in this racket'. But this was my first taste of success with AI. I cannot wait to use it again for something a bit more challenging next time.

Meanwhile, I hope this post convinces more people to give AI another chance for the 'easy climbs'.