"AI is just a tool," but is it really?


AI is a game-changer, but it has limitations. At the end of the day, AI is just a tool!

For years, I found myself repeating this rhetoric in a handwavey attempt to find a safe, middle-of-the-road stance on generative AI. But it never sat right with me. I said the words, but I didn’t believe them—it always felt incorrect. And I think I’ve finally figured out why.

Look, I’m not going to copy/paste the Webster definition of “tool” here. You know what a tool is; the question is: does AI qualify as one?

To me, one of the baseline requirements for a tool is determinism. A tool should operate the same way every time I use it: for a given set of inputs, it always produces the same output. When I use a drill with a 1/4 inch bit, I expect it to rotate clockwise and create a 1/4 inch hole.

We completely take for granted the immutable quality of everything from a tape measure to a dishwasher, but it is precisely that quality that allows us to formulate complicated, multi-step plans in our heads with a reasonable degree of confidence that they can be executed successfully.

It is also that quality that allows us to make plans involving multiple tools, and to predict how they will interact before we even use them. If I have a saw with a blade that is 1/8 inch wide, I know I can add 1/8 inch onto my measuring-tape reading and get an accurate cut, even if I’ve never touched that particular saw before.

Imagine if every time that measuring tape was used, the markings were completely different, or the saw blade varied with every cut. But that’s exactly the experience of using gen AI. There is no way to build intuition, no way to predict and plan. You often wind up spending more time reverse-engineering it (i.e., hunting for the right prompt) than actually creating with it.

That’s not you using a tool. That’s the tool using you.

If AI is not a tool…what is it? To me, the answer is obvious: it’s a service. Next time you type in a prompt, imagine you’re emailing someone and that the reply takes a couple of hours to come back. The interaction stops feeling quite so magical, and, for me, this framing helped crystallize my feelings about this technology.

This isn’t to say that hired services aren’t valuable and useful; they obviously are. But it does call into question the agency and ownership of gen AI results.

If I hire an author to write a book based on an idea I describe to them, that’s totally fine. But if I went around telling people I wrote that book, that would be strange, because it’s obviously incorrect.

Similarly, if I describe some software functionality to an LLM so it can write the code, that is also fine. But is that my code? Do I own the rights? To me, the situations are analogous: I essentially “hired” a service that took my ideas and realized them. If I vibe-code a piece of software, it would be similarly strange to say that I made that software.

So, can AI companies fix this nondeterminism? I…don’t think they can. Or rather, I don’t think they will.

If you remove this element of randomness from AI, you essentially wind up with a search engine whose search algorithm is unknowable and uncontrollable. Plus, that random noise is what makes AI addictive, what makes people believe they can get the desired result if they can just find the right prompt.
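The randomness isn’t incidental, either; it’s built into how these models pick each word. A minimal sketch of temperature sampling (toy scores and function names are my own, not any vendor’s API) shows why a temperature of zero is repeatable while anything above it is not:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from raw model scores (logits).

    temperature == 0 -> deterministic argmax ("greedy" decoding):
        the same logits always yield the same token.
    temperature > 0  -> random sampling from a softmax distribution,
        so repeated calls can yield different tokens.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature (subtract the max for numerical stability).
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.5, 0.5]          # toy scores for three candidate tokens
rng = random.Random(42)

greedy = {sample_token(logits, 0, rng) for _ in range(5)}
sampled = {sample_token(logits, 1.0, rng) for _ in range(50)}

print(greedy)   # one index only: deterministic
print(sampled)  # several indices: the "same prompt" gives varied results
```

Even this toy version captures the trade: turn the temperature down to zero and you get repeatability, but you also get the same answer every time, which is exactly the unknowable-search-engine scenario above.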

“Fixing” this one aspect would fundamentally break the whole system.
