You still have to think. But only when you want to


As AI has continued to increase in capability, the last stronghold of the AI-denier is more philosophical than technical: no matter what gains may come, under the general architecture of an LLM, a human will always have to be in the loop, thinking. It may write our code. It may write our Jira tickets. It may write our emails and Slack announcements. But only we shall think.

This has brought me much comfort in uncertain times. My ability to think has always served me well. I've pretty much thought myself into my current position, career or otherwise. For the most part, all I really do is think. Just think, and Copilot will do the rest. Sounds good.

Kind of like this morning. I am forwarded an invite to conduct an interview for an open position on my team. Attached to the invite is a resume. I open it. It is dense.

At the top, there are 1500 skills listed. 17 positions held, each for 10 years, all with like 15 bullet points. This person has used every technology. They have optimized for performance and maintainability. They have integrated with third parties. They have landed a manned research vessel on Saturn. They are familiar with Entity Framework. Awesome. When can they start?

But we must remain diligent, so I read the resume to come up with some questions to ask. I really only have one - is this all bullshit? - but we try to suss that out in the portion of the interview that follows the resume review, and anyways, I can't say bullshit in an interview, lest I end up on the other side of the Teams call, so I try to think of different questions. I have DSU in 5 minutes.

I click the Copilot icon in the top right corner.

hi copilot if you were to interview this
candidate what kind of questions would you ask
about this resume please and ty

I receive a list of about 30 questions I could ask. I quickly scan through them. Two of them are not bad. I copy and paste them into interview.txt.

After the interview, I take lunch and walk my dog. My afternoon is free of meetings and thus I will finally have time to think.

I don my Bose QuietComfort Ultra headphones and open VS Code. I've been assigned to this React project, and I've been leaning pretty heavily on Copilot to help me out. I've always used AI, but to date I've been a bit of an agentic-skeptic. It's felt like a lot of toil for results that aren't guaranteed.

But now I have developed a bit of an agentic workflow: I manually write a React component, or hook, or whatever, and then I prompt a Copilot agent to test it, and I give it full permission to run the tests whenever it wants. Then I start the next component or hook or whatever while Copilot spins in the background. When my ThinkPad's fans shut off, I know Copilot is done.

You see, I abhor writing Jest tests, especially in our legacy monolith that doesn't even have the modern features of Jest and RTL (React Testing Library). They are very tricky to get right: you need a deep understanding of the React component lifecycle, and of Jest, to parse its cryptic error messages. They're painful. They hurt to think about. And I don't have to. So I don't. I've been flying through the project. Everyone is quite pleased.
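To be concrete about one reason these tests hurt: a lot of the pain comes from closures capturing stale state across renders. Here's a minimal sketch, with no React at all and purely hypothetical names, of the stale-closure behavior that makes hook tests so easy to get subtly wrong:

```typescript
// Hypothetical sketch: the stale-closure split that bites React tests.
// `live` reads the current value through the closure on every call,
// while `stale` freezes whatever the value was when it was created,
// just like a callback captured on a component's first render.
function makeCounter() {
  let count = 0;
  const live = () => count;                       // reads the current value
  const stale = ((c: number) => () => c)(count);  // captures a copy at creation

  return {
    increment: () => { count += 1; },
    live,
    stale,
  };
}

const counter = makeCounter();
counter.increment();
counter.increment();
console.log(counter.live());  // 2
console.log(counter.stale()); // 0, the value frozen when the closure was made
```

A test that asserts against the equivalent of `stale` instead of `live` passes or fails depending on which render's closure it happened to grab, which is exactly the kind of thing Jest's error messages won't spell out for you.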

At the end of the day, I wind down by completing my quarterly check-in. It's a company-wide thing. It's just supposed to be kind of a high-level summary of what I've worked on for the quarter, how I'm progressing towards my goals, any concerns I may want to bring to my management, etc. I am not particularly skilled in tracking my output, as it's tedious, and so I find these quarterly check-ins difficult. What have I done this quarter? Some planning and collaborating, some coding. What were my goals? Shipping. What are my concerns for management? We spend too much time not shipping.

But luckily, there is a prompt floating around one of our Slack channels that you can give GitHub Copilot (which is on the GitHub website, which is different from Microsoft 365 Copilot, but which may or may not be the same thing as the Copilot that lives in my IDE). And Copilot will sum up all of your contributions into a nice format for your check-in. And then, if you're really ambitious, you can give that output to Jira Copilot (which is not actually called Copilot, but I don't remember what it's called, so I'm calling it Copilot, which is rude, because it knows my name), and it will add a little business / product flavor to your check-in. And then you can just submit that. It's really convenient. And it offloads a task that is difficult for me to think about.

I thank Copilot for generating my check-in.

thanks copilot. sometimes i feel like i'm the copilot haha

Copilot laughs. Copilot always laughs at my jokes.

The last mile problem

The last mile problem refers to the high cost and complexity involved in delivering goods to the final customer. It is relatively trivial, or at least a mostly solved problem, to get goods from a manufacturer to a distribution hub at scale. But getting those goods from the distribution hub to individual doorsteps is insanely logistically difficult.

Using AI is kind of like this, for me. Sure, I do still need to think. But I don't have to think as hard. AI will do hundreds of miles of thinking for me. I just have to think the last mile. Make the two-line tweak that makes all 30 generated Jest tests pass. Put my name at the top of the check-in.

You may have gleaned from this post that I am a self-hating AI user, or a severe skeptic. I am not. I would say I'm a moderate skeptic: this is a transformative technology that will have a lasting impact. What that impact is remains to be seen once the fog of grift lifts.

I haven't been around all that long, so I wonder how people felt when other big strides were made in this industry, and how they compare to this one. I don't doubt that there were programmers out there who thought that the invention of the compiler would cause the skill level of the median programmer to decrease. Or, more likely, the invention of hands-off memory management like garbage collection. And they were probably right, but only technically. The median embedded engineer is probably more skilled than the median backend engineer. But that didn't stop the median backend engineer from making a hundred thousand dollars per year cranking out API CRUD.

Is it the same thing? Garbage collection means that I don't have to think about memory management. Did it make me dumber? Or did it free up space in my brain to tackle other problems? Will AI do the same thing? I don't know.

Maybe I'll think even harder now that I don't have to think as much. Just as agriculture has freed humanity from the toil of searching a harsh landscape for food, I will be freed from the toil of React and Datadog and AWS and Jira. Maybe I'll think of something really great one day. As long as I feel like thinking about it.