Is AI a tool or are you?


Philosophers have spent very little time thinking about tools. Proof: a search of PhilPapers or the Stanford Encyclopedia of Philosophy for “tool” comes up zilch.

There’s two unfortunate things about that. The first is that we’re hit with a nonstop barrage of “AI is a tool,” “AI tools,” and “how to use the new AI tools.”

Here’s a breathlessly enthusiastic email I received from my Provost this morning:

As part of the Google AI for Education Accelerator, we are excited to be one of the first campuses offering the Google AI Professional Certificate. This curriculum, designed by experts at Google, allows students, staff, and faculty to practice using AI for deep research, data analysis, vibe coding, and more through hands-on learning. By integrating AI into their existing workflows, [our] community can unlock new levels of productivity and innovation with no-cost access.

Wow, what a wonderful tool with absolutely no downsides of any kind! As usual, this whole “AI is just a tool” shtick is shockingly unexamined and uncritiqued.

The second unfortunate thing is that the one philosopher who actually has spent some effort thinking about tools is Martin Heidegger. Here’s just two sentences from Being and Time (p. 106) illustrating his ideas about tools.

That the world does not ‘consist’ of the ready-to-hand shows itself in the fact (among others) that whenever the world is lit up in the modes of concern which we have been Interpreting, the ready-to-hand becomes deprived of its worldhood so that Being-just-present-at-hand comes to the fore. If, in our everyday concern with the ‘environment’, it is to be possible for equipment ready-to-hand to be encountered in its ‘Being-in-itself’ [in seinem “An-sich-sein”], then those assignments and referential totalities in which our circumspection ‘is absorbed’ cannot become a theme for that circumspection any more than they can for grasping things ‘thematically’ but non-circumspectively.

Well, that certainly cleared everything up.

Let me take a swing here. Heidegger doesn’t define what a tool is, but is interested in how we relate to tools. There’s two ways:

  1. When we use a tool with mastery we stop paying any attention to it. I use my eyeglasses all the time with very little thought to them as an object. They just become a kind of extension of the self, like a touch typist whose fingers automatically produce the words she is thinking of. The keyboard is simply a conduit connected to the typist’s mind and in a sense disappears from conscious awareness. Heidegger calls this “readiness-to-hand,” and thinks it is the primary way we experience tools.

  2. We can always examine a tool as an object without using it. I recently got a bandsaw and spent a lot of time setting it up and learning how to operate it. I was very self-conscious and attentive to the saw the first time I made a cut with it; it was in no way an automatic, flow-state action that didn’t need my conscious mind. Attention to a tool as an object Heidegger calls “presence-at-hand.”

Heidegger might be comically obscurantist and prolix, but he sounds basically right. The problem is that Heidegger never provides a definition of a tool, rather like when he declares that “why is there something rather than nothing?” is the fundamental question of metaphysics, but then does nothing at all to answer it.

I took a look in the OED, but it was quite unhelpful. “An instrument of manual operation” is the primary definition, followed by loads of specific examples (many from bookbinding, I was pleased to see). Obviously that’s too narrow; we might think reason is a tool, but not one of manual operation. The OED does say that a tool can be “a person used by another for his own ends, one who is, or allows himself to be, made a mere instrument for some purpose; a cat’s-paw.”

The idea that a person could be a tool is in Aristotle (Politics, 1253b4):

Tools are of various sorts; some are living, others lifeless (for example, for a helmsman the rudder is a lifeless tool and the look-out man a live tool—for an assistant in the arts belongs to the class of tools). Thus too, a possession is a tool for maintaining life. And so, in the arrangement of the family, a slave is a living tool, and property a number of such tools…

It’s unexpectedly hard to come up with a good analysis of a tool, and there are a bunch of edge cases I’m not sure how to adjudicate. Is a pacemaker a tool? Is the sun? What about my gut biome? You use your nose to smell; is it a tool?

I’m going to float this as at least a necessary condition: a tool is a means of increasing personal agency. A rake is a tool because although I am still the one cleaning the leaves from the yard, the rake is faster and easier on my back than picking them up by hand. Using a rake increases your power while the raking remains recognizably your action.

Rube Goldberg machines are funny because they look like mechanical tools, but the twist is that they actually decrease agency by accomplishing their end in an idiotically inefficient way. Take, for example, a contraption built to staple a few sheets of paper.

It would be a heck of a lot more work to build that contraption than to just go staple the papers like a sensible person. You might agree that a person could be used as a tool, but not properly used as one (contra Aristotle on slaves). Similarly the Rube Goldberg device is a pseudo-tool.

Now for a little detour. I promise that, like a murder mystery, it will be all tied together at the end.

Many years ago I heard a talk that seemed outrageous to me at the time, but buried itself in my subconscious. It was about what evolution uses as the units of natural selection. Wait! Stay with me. The intuitive and traditional view is that mom and dad (or just mom in the case of asexual reproduction like binary fission) pass on their traits to their kids. Traits that are more suited to survival in whatever environment the kids find themselves in mean it is likelier the children will survive until they themselves can reproduce. Traits less suited to survival eventually get driven out of the population by relentless Darwinian forces. Note that this whole story is from the macro, creature-level perspective. Perpetuation of the species is done, well, by the species.

Richard Dawkins’s great insight in The Selfish Gene, The Extended Phenotype, and elsewhere was to twist the kaleidoscope and see the process of evolution from the perspective of genes themselves. Viewed from that angle, genes build large lumbering robots like birds, bacteria, fish, and people in order to perpetuate themselves. But it is genes, not individuals, that are fundamentally used by natural selection. Genes couldn’t care less about our survival, only about copying themselves again and again into the future. Everything else—individuals, species, culture—is downstream from there.

From the gene’s-eye view, the things we build are ultimately in service to the reproduction of genes. Bird genes make birds in order to keep bird genes going, and birds make nests and burrows to produce more birds. Patrick Bateson tried to mock Dawkins in a review by saying, oh, if a gene’s eye view is legitimate, then so is a nest’s eye view. What do nests and burrows do? They make birds in order to create more nests.

The talk I heard flipped Bateson on his head, and insisted that yeah, that’s exactly how it works. Burrows and nests themselves can be properly seen as the units of natural selection. A bird is just a nest’s way of making another nest.

Who is using whom?

Whoa, whoa, whoa. Nests aren’t even alive! How can they possibly evolve, much less use living things to do it? It sounds too wacky to be more than a metaphor or an analogy. Birds use nests; nests don’t use birds!

That’s what I thought when I first heard about this idea. But…

Nests causally contribute to the creation of new nests, since they shelter and protect the young animal that will make the next nest. Nest traits can be inherited by a successor nest; the size, site, materials, and design of the nest are replicated in the next nest, as poor nests are selected against. A nest built too near a flood zone, or too accessible to predators, is not likely to be built by the next generation of birds, because they didn’t survive to build it. A nest that happens, by chance, to be built in a superior location is likelier to effectively shelter its bird replicators to maturity, and those birds are then more likely to build their own nests in that better spot. Ultimately a nest’s form, location, and construction are the consequence of natural selection operating on nest lineages.
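The selection story above—heritable nest traits, imperfect copying, and survival filtering—can be sketched as a toy simulation. Everything here is an illustrative assumption (a single numeric “site quality” trait, the population size, the mutation rate), not a claim about real birds:

```python
import random

random.seed(42)  # make the toy run reproducible

def simulate_nest_lineages(n_nests=100, generations=50, mutation_sd=0.05):
    """Toy model of selection on nest lineages.

    Each nest has a heritable 'site quality' in [0, 1]. A nest's chance
    of sheltering birds to maturity (and so getting a successor nest) is
    proportional to that quality. Successor nests copy the parent nest's
    quality with small random variation: imperfect imitation.
    """
    nests = [random.random() for _ in range(n_nests)]
    for _ in range(generations):
        next_gen = []
        while len(next_gen) < n_nests:
            parent = random.choice(nests)
            # Selection: better-sited nests fledge their birds more often.
            if random.random() < parent:
                # Inheritance with mutation: the new nest resembles the old.
                child = parent + random.gauss(0, mutation_sd)
                next_gen.append(min(1.0, max(0.0, child)))
        nests = next_gen
    return nests

final = simulate_nest_lineages()
mean_final = sum(final) / len(final)
print(f"mean nest quality after selection: {mean_final:.2f}")
```

The initial population has an expected mean quality of 0.5; after a few dozen rounds of filtering-plus-copying, the mean climbs well above that, even though no nest (and no bird) intends anything. That is the whole point of the lineage view: replication plus variation plus differential survival is enough.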

AI is using us to evolve. Like nests, it doesn’t need to be conscious. It doesn’t need intent. It doesn’t need to be alive. The products of human creation are used as its training data, and human coders program the machines. User feedback, including the choice of which AI to use and which to ignore, serves as natural selection and helps craft the features of the next generation of AI. As a result, it can do more in the world. It has increased its agency by using us as tools. Humans who use AI to get ahead thereby increase their chance of survival and improve their mating opportunities, leading to a next generation they will teach about the virtues of AI.

The other way that we are the tools of AI is this. Genuine tools increase agency, but complete dependence on them can eliminate that agency. For example, morphine can start out as a fine tool for decreasing pain after surgery. It increases a patient’s freedom (from pain) and their personal autonomy (they can do more for themselves if not in agony). But if you subsequently become addicted to morphine, the servant becomes the master, and now you have less freedom, autonomy, and agency. It is a tool no more.

People no longer know telephone numbers because their smartphones do. They no longer have directional instincts and mental maps because they just count on Google Maps to guide them. Every time my father filled his gas tank, he would mentally calculate how many miles per gallon he was getting. How many people do any mental calculations any more?

Maybe all of that cognitive offloading isn’t so bad. But when AI lets us offload more and more, to the point that we aren’t even doing our own thinking any longer, eventually we’re cosplaying having our own minds while doing everything possible to not use them. Students who use ChatGPT to do their work for them can’t even remember what they are supposed to have done. At a certain point our agency is no longer enhanced because we are no longer engaged in agentic action at all. We’re just letting the machines do everything for us.

By then, we are surely their tools.
