A dilettante’s philosophy of mind - Matthew T. Mason

For 50 years I have tried to hide from the philosophy of mind. I failed. It’s time for a brain dump.

Way back in 1974, as an undergrad majoring in computer science, I was lucky enough to get a research position at the MIT AI Lab. I was excited about AI (Artificial Intelligence) and especially excited to join the AI Lab.

Within a few days, as I was walking to the AI Lab, I ran into an old friend, somebody I knew from my freshman dorm. I have forgotten his name; let’s call him Joe. When Joe heard I was in the AI Lab, he was interested. He was studying philosophy, and he believed AI was destined for failure. He threw a couple of questions at me that I could not answer or even understand.

How strange. Philosophers have opinions about my new job? That didn’t happen when I worked at McDonald’s.

I tried to put this incident out of my mind. I did not want to spend my time reading philosophy. But, it is hard to ignore. As a researcher, as an academic, am I not obligated to engage with others and discuss the ideas on which my career is built? Actually the answer is no, I am not obligated, but I just couldn’t stay away.

So, over the years I tried to ignore the philosophers and failed. I cannot help thinking about it, and occasionally I even have to read a paper or a book, or attend a lecture. I sought ignorance, but I might have risen to the level of dilettante. I know a little. I have ideas. Perhaps they are misguided or obvious. I invite you to disengage — stop reading. But, I am now ready to engage with Joe, if I can find him. Joe, if you are reading this, if you recognize yourself, send me a note. Here is my answer to your questions — a dilettante’s philosophy of mind.

The mind-body problem

Philosophers have wondered and speculated about the mind for, well, if I were an expert I would know how long. But I do know that one of the interesting puzzles is the “mind-body problem” first posed by Descartes. Descartes proposed two kinds of stuff, which we can call mind-stuff and body-stuff. The philosophers have a cool name for this: Cartesian dualism. Body-stuff is all of the material things around you. A mug, a pencil, a desk, and so on. Body-stuff has mass. Body-stuff obeys the laws of physics.

But how much does an idea weigh? How much does a word weigh? Words and ideas are mind-stuff. No weight. The laws of physics do not apply.

Now, suppose I ask you to pick up a pencil. My words travel to your ear and then (if you are in an agreeable mood) lead to muscular action to lift that pencil. Somehow, my idea turned into your physical action. Mind-stuff affected body-stuff. If the mind-stuff has no weight and no force, how on earth can it interact with body-stuff? The laws of physics offer no role for the mind-stuff.

The answer: computation. At least, that’s what I think. Which is perhaps irrelevant because as a philosopher I am a dilettante. But some real philosophers agree, calling themselves computationalists.

Computationalism

I don’t know if I should explain further, but I will try. Does a computer live in the mind-stuff world, or the body-stuff world? Both. You can send it words or symbols, and it will send you words and symbols back, so it operates in the mind-stuff world. But it is built of physical stuff: transistors, wires, capacitors, and so on. The computer is a device that obeys the laws of physics, brilliantly engineered to make a good show of processing symbols.

Is it really processing symbols? That is a matter of perspective. When you speak aloud, are your words really words? Or are they just vibrations in the atmosphere? When you write a number on a piece of paper, is it really a number, or is it just a pattern of graphite? Either view is defensible; it is your choice. You can choose to interpret these phenomena as just physical body-stuff, but life will be tough with such a grumpy attitude. Life is better if you interpret those vibrations as words, and those graphite smears as numbers, and if you interpret a computer (meaning a box of electronics) as a computer (meaning a symbol processor).
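If a toy example helps, here is a minimal sketch of that choice of perspective (Python, one made-up bit pattern): the same physical state read as raw bits, as a number, or as a letter.

```python
# The same physical pattern, read three ways. Which reading is "real"
# is a choice of perspective, not a fact about the bits.
pattern = 0b01000001

print(bin(pattern))   # body-stuff view: just a pattern of bits
print(pattern)        # one mind-stuff view: the number 65
print(chr(pattern))   # another mind-stuff view: the letter 'A'
```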

(Actually, you have to switch perspectives from time to time, like if your computer is glitching and you give it a whack, you have switched to the body-stuff perspective for a moment.)

(And isn’t it unfortunate that we have only one word for a computer, whether we view it as a box of electronics or as a symbol processor? That seems to be true of all devices that straddle the boundary between mind and body: the abacus, the calculator, … are there others? One device is an exception: the brain / mind, if by “brain” you mean the gooey stuff in your head, and by “mind” you mean the putatively computational process implemented by that gooey stuff. We had to have two words, since for millennia we did not know that brain and mind are actually just two different views of the same machine.)

Okay, where was I? Ah, yes, as a computationalist, my view is that the brain is a computer, and the mind is the symbol processing function implemented by the brain.

Here is an issue to address. Perhaps we should say the brain does more than process symbols. There is the matter of physical intelligence, which in an earlier post (here’s a link) I referred to as the inner robot. Specifically, there is signal processing, for example taking the signals from the retina and detecting edges or other visual features. You can program a digital computer to do that as accurately as you like, so the symbol processing view works, but maybe you would prefer to do it with signal processing electronics. I wish I knew how to put that issue away and forget it, but I don’t. I raised the issue once with Allen Newell, a Carnegie Mellon colleague often credited as co-creator of the field of AI. Allen’s answer? A shrug of the shoulders. I think I agree with him.
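For a taste of what “program a digital computer to do that” means, here is a minimal sketch (toy brightness numbers of my own invention, nothing to do with a real retina): edge detection reduced to differencing neighboring samples.

```python
# Signal processing done symbolically: mark an edge wherever neighboring
# brightness samples differ sharply. The numbers are made up.
samples = [10, 10, 11, 10, 80, 82, 81, 80]   # dark region, then bright

edges = [abs(b - a) for a, b in zip(samples, samples[1:])]
print(edges)   # [0, 1, 1, 70, 2, 1, 1] -- the big jump marks the edge
```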

Does all of this imply that the human brain is a computer? No. As far as I know, there is no compelling argument for or against. All I can say is that computationalism offers a concrete, specific, plausible answer to the problem, and no other proposed answer does.

The question might be settled if AI were a complete success, but we aren’t there yet. There is the issue of what philosophers sometimes call qualia: a sense of self, or the perception of pain, for example. Our current AIs, as smart as they are, have not reached that stage. That came as a surprise to me. When I first got into AI, I figured that if we could build an AI smart enough to pass the Turing test, i.e., smart enough to carry on a conversation like a human, then we could also implement things like pain and self awareness. If you are a non-computationalist, you should take heart from that. In any case, the philosophy of mind remains an interesting and unsettled affair — hard to ignore. Nonetheless, I am now resuming my attempts to ignore it.

Notes

How prominent is the computationalist view among philosophers? I don’t know. The term might be too blunt for some. But here is an interesting survey of philosophers that touches on some surrounding issues if not computationalism per se. Bourget, David, and David J. Chalmers. “Philosophers on Philosophy: The 2020 PhilPapers Survey.” Philosophers’ Imprint 23 (2023).

If you want to read more about the philosophy of mind, particularly the computationalist view, maybe Dennett is the best place to start. I particularly enjoyed this essay: Dennett, Daniel C. (1978). “Where Am I?” in Brainstorms: Philosophical Essays on Mind and Psychology (pp. 310–323). Cambridge, MA: Bradford Books / MIT Press.

If you want to read about qualia, here is a good place to start: Chalmers, D.J. “Facing up to the problem of consciousness.” Journal of Consciousness Studies 2, no. 3 (1995).

Comments

I agree. I got my first insight into this when programming in assembly language on the PDP-9, before coming to grad school at MIT. The computational view of programming is pretty straightforward, but what about typing a character on the teletype? From the computational side, I put a number (“101” in octal for “A” in ASCII) into a particular memory location, and then set a certain bit to “1” rather than “0”. From the physical side, when a certain switch is closed (set to “1”), a circuit looks at the pattern of bits in a certain register and selects which action the teletype should take. When that is done, it sets that switch back to “0”. Two different, perfectly reasonable descriptions of the same event. Each one, separately, is easy enough to understand and implement, at least in principle. The amazing thing is the correspondence between the computational and physical representations of the register and the bit. You can make something happen on one side, and its effect happens on the other, too!
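A minimal sketch of that handshake, with Python variables standing in for the memory-mapped register and the ready bit (the names are invented; the real thing was PDP-9 assembly):

```python
# Two views of one event: the program stores a code and raises a flag;
# the "circuit" watches the flag, acts on the bits, and lowers it again.
output_register = 0   # stands in for the teletype's data location
ready_flag = 0        # stands in for the go/ready bit

def program_side():
    """Computational view: store octal 101 ('A' in ASCII), raise the flag."""
    global output_register, ready_flag
    output_register = 0o101
    ready_flag = 1

def teletype_side():
    """Physical view: if the flag is up, act on the register, drop the flag."""
    global ready_flag
    if ready_flag:
        print(chr(output_register))   # the real circuit would strike a key
        ready_flag = 0

program_side()
teletype_side()   # prints: A
```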

Once I discovered the MIT AI Lab, I was sure we would solve this AI problem in five years, ten years max. Obviously I was wrong. The problem of the mind is a lot harder than I thought, but like you, Matt, I’m convinced that we are on a valuable track. Not the only track, but a useful one (perhaps dangerous too).

Great post, Matt. The mind-stuff vs. body-stuff framing reminds me of what Hofstadter explores in I Am a Strange Loop and (with Dennett) in The Mind’s I: the question of whether intelligence is immortal or mortal. Can a mind be fully extracted from its substrate, or is it fundamentally shaped by the body that hosts it?
Hofstadter’s answer leans toward the latter. In Strange Loop, he argues that the “I” emerges from self-referential patterns in the brain, patterns that are deeply entangled with the physical system running them. The mind is a process whose character depends on the particular substrate giving rise to it.
This matters for the computationalist view. Even if computation offers the most plausible bridge between mind-stuff and body-stuff, the body itself encodes knowledge. Proprioception, reflexes, and our sensorimotor loops shape how we perceive and act. A mind running on a different substrate would, in some meaningful sense, be a different mind.
So maybe the real question behind the mind-body problem is: do we believe intelligence is mortal, bound to its embodiment, or immortal, transferable to any sufficient computational medium? Hofstadter suggests the answer is somewhere in between, and I think that’s where the interesting work lives.
References: Hofstadter, I Am a Strange Loop (2007); Hofstadter & Dennett, The Mind’s I (1981).

Reply to David Watkins

OK I can see my plan has backfired. Publishing the post does not make it easier for me to ignore philosophical issues.

I haven’t read those works. My instant reaction is to say that the beauty of computation is that it abstracts away from the hardware, so the physical substrate doesn’t matter. The body, information stored outside, sure — a robot can deal with those same things; it doesn’t mean the mind is not computational. But qualia, I’m not sure — I don’t know how to draw boundaries around the computation when we are talking about things like self-awareness. Suppose we succeed in producing an AI that passes a super-duper-Turing test, including matching humans in apparently experiencing pain, a sense of self, the color red, or whatever. Computationalism wins! But then what if we dig into its guts and discover it is just pretending?

Here’s a different thing to worry about: time. Church-Turing says my computer can emulate your brain, but it doesn’t say it can match the speed of your brain. So, is it the same mind if it runs one million times slower? Certainly not the physical intelligence. It isn’t going to win any tennis matches. Cognitive intelligence? Not sure.

I consulted my research assistant, ChatGPT 5.2. ChatGPT says “Ah, the clock-speed problem. You’ve wandered straight into one of the fault lines of computationalism.”

Argh! That bot thinks it is so smart.

Reply to Matt

Matt, your time point is what grabs me. A mind running a million times slower would have a fundamentally different relationship to its environment, and therefore to itself. So much of thinking is temporally structured: the rhythm of attention, the decay of working memory, the speed at which associations fire. Slow all of that by six orders of magnitude, and you’ve created a different mind. The computation may be equivalent, but the experience isn’t.

On “just pretending”: Hofstadter and Dennett wrestle with this in The Mind’s I, and Dennett’s answer is that “just pretending” may be incoherent at sufficient complexity. If the system’s behavior is indistinguishable from experiencing pain across every test we can devise, what additional fact would make it real? The boundary dissolves.

Tell ChatGPT it should try running a million times slower before it gets cocky.

About the retina, I was recently reading von Neumann’s last writings, and he used the example of how three layers of synapses in the retina do a significant amount of processing (including finding edges, etc.). The connections between the layers (pulses) are very low-bandwidth and low-precision, so it might be deduced that each of those neurons is doing a significant amount of analog computing / signal processing work. So, put together, it might be that there is a digital connection of analog computers, or an analog connection of analog computers.

A contrasting fully digital example might be a modern vision encoder, which has hundreds of layers and billions of parameters. We probably can’t say if one works better than the other, but what we can say is that the entirety of the brain draws 10W, and I think just that vision encoder may be hard-pressed to consume less than that.

And not one mention of John Searle?
Shit.

Reply to Anthony

Well I did actually write a bit that was critical of Searle, but it seemed like a bad idea. A dilettante criticizing an accomplished expert? Probably I would just expose my own ignorance. So I left it out. But, here goes …

For those who don’t know, Searle invented the “Chinese Room.” It is a room, with a guy inside. The only access to the room is a window through which messages can be passed in or out. The messages in are in Chinese, and the guy’s responses are in Chinese. So it gives the appearance of understanding the Chinese language.

But, the guy does not understand Chinese. He has a huge trove of data and/or instructions. When he follows the instructions to process the input messages, the responses give the impression of understanding Chinese.

So, that is supposed to be a problem. Does the Chinese Room understand Chinese, or not?

And here is this dilettante’s response. The instructions are software, the data is data, and the guy is a processor. So, does the processor alone understand Chinese? No. Does the processor plus the software and data understand Chinese? Could be. I just don’t see the problem.
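If it helps, here is the whole setup shrunk to a toy sketch in Python (a made-up two-entry rule book, nowhere near the real thought experiment): the processor matches symbols it does not understand, and the room answers anyway.

```python
# The guy is a processor; the rule book is software plus data. He matches
# symbols without understanding them. (Tiny made-up rule book, obviously.)
rule_book = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你是谁": "我只是一个房间。",  # "who are you" -> "I am just a room."
}

def the_guy(message):
    """Follows the rules; understands nothing."""
    return rule_book.get(message, "请再说一遍。")   # default: "say it again"

print(the_guy("你好"))
```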

So, maybe somebody can explain to me why this is a big deal.

Reply to Matt

Philosophers have spent a lot of time musing about consciousness and qualia, and whether my perception of red is the same as yours, but for a roboticist maybe a more interesting thing to consider is whether a machine can be intelligent if it doesn’t have a physical body that perceives and acts in the world. If an artificial system is able to make perceptual discriminations that are coextensive with ours, and is able to make observable changes to the world which its perceptual systems can assess, then it sounds like it should be in with a chance of interacting with humans in what we humans might interpret as an intelligent way. It doesn’t matter whether the machine experiences red the way I do, as long as it can judge the same things to be red that I do, and consequently can predicate behaviours on this judgement in a way I would expect. If “man is the measure of all things” (Protagoras) then intelligence is behaviour that is consistent with human expectations.

A non-embodied system wouldn’t be able to test hypotheses about what might be true of the world (assuming it could generate the hypotheses in the first place) because it has no way to act in the world or assess outcomes. It would be consigned to Searle’s Chinese Room, processing strings of symbols which may or may not have any grounding in reality, and it wouldn’t know either way (just like an AI chat-bot). This is known as the Symbol Grounding Problem. On this basis, it could be argued that being an embodied system is a necessary condition for intelligence, but is it sufficient?

Hi Matt,
Another thoughtful post. However, I don’t understand why brain/mind is not like computer/software. A related question is: should Computer Science be renamed Software Science for everyone not explicitly working on Hardware?

Whether self-awareness is running on a brain, a traditional computer, or some virtual machine pretending it is one piece of hardware running on something else, what difference does it make? As a minor digression, in Finnish, there is a single word for virtual computer: Virtuaalikone (no, I don’t speak Finnish, I asked ChatGPT).

–Dave Miller (or a computational process pretending to be me (does it really matter to anyone but me?))

Reply to David Miller

Well, I used to think the brain was a processor, and the mind was a computational process. Because it seems the mind could migrate from one processor to another, or be spread among multiple processors. I talked myself out of that one, but I cannot remember why. Hardware versus software? Maybe I should ponder that one, but it seems problematic, since you could compile the software into hardware.

If you are interested in theories of mind, St. Augustine is another source who provides a fairly compelling theory of mind. A gross oversimplification of it would be that he breaks the mind down into memory (data), knowledge/understanding of itself (e.g. data structures and algorithms), and love of itself (reprogramming/updating of the brain to a truer/newer perspective). St. Augustine would probably reject my analogies as simplistic, but that’s okay.