A dilettante’s philosophy of mind - Matthew T. Mason

For 50 years I have tried to hide from the philosophy of mind. I failed. It’s time for a brain dump.

Way back in 1974, as an undergrad majoring in computer science, I was lucky enough to get a research position at the MIT AI Lab. I was excited about AI (Artificial Intelligence) and especially excited to join the AI Lab.

Within a few days, as I was walking to the AI Lab, I ran into an old friend, somebody I knew from my freshman dorm. I have forgotten his name; let’s call him Joe. When Joe heard I was in the AI Lab, he was interested. He was studying philosophy, and he believed AI was destined for failure. He threw a couple of questions at me that I could neither answer nor even understand.

How strange. Philosophers have opinions about my new job? That didn’t happen when I worked at McDonald’s.

I tried to put this incident out of my mind. I did not want to spend my time reading philosophy. But it is hard to ignore. As a researcher, as an academic, am I not obligated to engage with others and discuss the ideas on which my career is built? Actually, the answer is no, I am not obligated, but I just couldn’t stay away.

So, over the years I tried to ignore the philosophers and failed. I cannot help thinking about it, and occasionally I even have to read a paper or a book, or attend a lecture. I sought ignorance, but I might have risen to the level of dilettante. I know a little. I have ideas. Perhaps they are misguided or obvious. I invite you to disengage — stop reading. But I am now ready to engage with Joe, if I can find him. Joe, if you are reading this, if you recognize yourself, send me a note. Here is my answer to your questions — a dilettante’s philosophy of mind.

The mind-body problem

Philosophers have wondered and speculated about the mind for, well, if I were an expert I would know how long. But I do know that one of the interesting puzzles is the “mind-body problem” first posed by Descartes. Descartes proposed two kinds of stuff, which we can call mind-stuff and body-stuff. The philosophers have a cool name for this: Cartesian dualism. Body-stuff is all of the material things around you. A mug, a pencil, a desk, and so on. Body-stuff has mass. Body-stuff obeys the laws of physics.

But how much does an idea weigh? How much does a word weigh? Words and ideas are mind-stuff. No weight. The laws of physics do not apply.

Now, suppose I ask you to pick up a pencil. My words travel to your ear and then (if you are in an agreeable mood) lead to muscular action to lift that pencil. Somehow, my idea turned into your physical action. Mind-stuff affected body-stuff. If the mind-stuff has no weight and no force, how on earth can it interact with body-stuff? The laws of physics offer no role for the mind-stuff.

The answer: computation. At least, that’s what I think. Which is perhaps irrelevant because as a philosopher I am a dilettante. But some real philosophers agree, calling themselves computationalists.

Computationalism

I don’t know if I should explain further, but I will try. Does a computer live in the mind-stuff world, or the body-stuff world? Both. You can send it words or symbols, and it will send you words and symbols back, so it operates in the mind-stuff world. But it is built of physical stuff: transistors, wires, capacitors, and so on. The computer is a device that obeys the laws of physics, brilliantly engineered to make a good show of processing symbols.

Is it really processing symbols? That is a matter of perspective. When you speak aloud, are your words really words? Or are they just vibrations in the atmosphere? When you write a number on a piece of paper, is it really a number, or is it just a pattern of graphite? Either view is defensible; it is your choice. You can choose to interpret these phenomena as just physical body-stuff, but life will be tough with such a grumpy attitude. Life is better if you interpret those vibrations as words, and those graphite smears as numbers, and if you interpret a computer (meaning a box of electronics) as a computer (meaning a symbol processor).
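
Here is that choice of perspective in miniature. The same two bytes in a computer’s memory can be read as a bare physical pattern or as symbols. A little Python sketch (my own toy illustration, not about any particular machine):

```python
# The same two bytes, viewed two ways.
raw = bytes([72, 105])

# Body-stuff view: just a pattern, rendered here as bare numbers.
print(list(raw))            # [72, 105]

# Mind-stuff view: the very same pattern, interpreted as symbols.
print(raw.decode("ascii"))  # Hi
```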

(Actually, you have to switch perspectives from time to time: if your computer is glitching and you give it a whack, you have switched to the body-stuff perspective for a moment.)

(And isn’t it unfortunate that we have only one word for a computer, whether we view it as a box of electronics or as a symbol processor? That seems to be true of all devices that straddle the boundary between mind and body: the abacus, the calculator, … are there others? One device is an exception: the brain / mind, if by “brain” you mean the gooey stuff in your head, and by “mind” you mean the putatively computational process implemented by that gooey stuff. We had to have two words, since for millennia we did not know that brain and mind are actually just two different views of the same machine.)

Okay, where was I? Ah, yes, as a computationalist, my view is that the brain is a computer, and the mind is the symbol processing function implemented by the brain.

Here is an issue to address. Perhaps we should say the brain does more than process symbols. There is the matter of physical intelligence, which in an earlier post (here’s a link) I referred to as the inner robot. Specifically, there is signal processing, for example taking the signals from the retina and detecting edges or other visual features. You can program a digital computer to do that as accurately as you like, so the symbol processing view works, but maybe you would prefer to do it with signal processing electronics. I wish I knew how to put that issue away and forget it, but I don’t. I raised the issue once with Allen Newell, a Carnegie Mellon colleague often credited as co-creator of the field of AI. Allen’s answer? A shrug of the shoulders. I think I agree with him.
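
To make that edge-detection example concrete, here is a minimal sketch of it done as ordinary digital computation, using the standard Sobel operator (my choice of illustration; the toy image and the function are mine):

```python
import numpy as np

# Sobel kernels: small filters that respond to intensity changes.
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)   # horizontal change
KY = KX.T                                  # vertical change

def sobel_edges(image):
    """Return the gradient magnitude (edge strength) of a 2-D grayscale array."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx = np.sum(KX * patch)        # response in the x direction
            gy = np.sum(KY * patch)        # response in the y direction
            out[i, j] = np.hypot(gx, gy)   # magnitude of the gradient
    return out

# A toy "retina": dark on the left, bright on the right.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(np.round(sobel_edges(img), 1))       # large values mark the vertical edge
```

The same operation could just as well be done by dedicated signal-processing electronics, which is exactly the issue I cannot put away.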

Does all of this imply that the human brain is a computer? No. As far as I know, there is no compelling argument for or against. All I can say is that computationalism offers a concrete, specific, plausible answer to the problem, and no other proposed answer does.

The question might be settled if AI were a complete success, but we aren’t there yet. There is the issue of what philosophers sometimes call qualia: a sense of self, or the perception of pain, for example. Our current AIs, as smart as they are, have not reached that stage. That came as a surprise to me. When I first got into AI, I figured that if we could build an AI smart enough to pass the Turing test, i.e., smart enough to carry on a conversation like a human, then we could also implement things like pain and self-awareness. If you are a non-computationalist, you should take heart from that. In any case, the philosophy of mind remains an interesting and unsettled affair — hard to ignore. Nonetheless, I am now resuming my attempts to ignore it.

Notes

How prominent is the computationalist view among philosophers? I don’t know. The term might be too blunt for some. But here is an interesting survey of philosophers that touches on some surrounding issues if not computationalism per se. Bourget, David, and David J. Chalmers. “Philosophers on Philosophy: The 2020 PhilPapers Survey.” Philosophers’ Imprint 23 (2023).

If you want to read more about the philosophy of mind, particularly the computationalist view, maybe Dennett is the best place to start. I particularly enjoyed this essay: Dennett, Daniel C. “Where Am I?” In Brainstorms: Philosophical Essays on Mind and Psychology, 310–323. Cambridge, MA: Bradford Books / MIT Press, 1978.

If you want to read about qualia, here is a good place to start: Chalmers, David J. “Facing Up to the Problem of Consciousness.” Journal of Consciousness Studies 2, no. 3 (1995): 200–219.