Ask HN: Is Freewill a Requirement of Consciousness?
Debate. Hard to debate this with proven facts, from either side.

Opinion? I have no idea. It depends on how you (or I, or someone else) define or interpret "free will", "requirement", or "consciousness", and whether that definition is binary, discrete, or continuous in spectrum. It is hard to parameterize a relationship when we can't confidently define either member of it.

Personally, I think it's utterly silly to think that free will fits into a material model. If it's real, I'd sooner believe in a panpsychist universe than a magical meat computer that shifts reality at will. The Quantum Indeterminacy argument is God of the Gaps.

> I think it's utterly silly to think that free will fits into a material model.

It isn't silly, but it is hard to discuss with people, because logic breaks down under self-reference, and logical debate has been our most popular mode of debate since Aristotle. Even empiricism doesn't save us from this weakness, because it implies we ought to update based on the observed evidence.

The Halting Problem and the Incompleteness Theorems are very important to recognize. They lead to a recognition of the infinite self-reference that occurs quite naturally as physics reaches its limit and is employed to model an agent which uses that model to model another agent that is modeling them. This produces computationally irreducible phenomena. From there we start to reach into game-theoretic concerns, wherein evidence denial on the basis of equilibrium considerations becomes normal. When extended to imperfect-information settings, we end up discovering that non-deterministic policies are optimal. This optimality proof, and the guarantee of the ability to confound through undecidability, give us a grip on what selection and variation ought to select for, through an appeal to the central limit theorem.

The funniest thing to me is that the decision to deny this corresponds to choosing an unfactored and unsimplified representation of reality as the more correct one. Yet this backfires in the most beautiful way: it is slower to compute than the factored and simpler representation. Which means it is computationally reducible. Which means other agents can know your output before you do. Which makes you a victim of Halting Problem attacks - or rather, it makes you determined but your world indeterminable for you.

The even more amusing irony is that the entire reason we use logic and empiricism at all is that we recognize the advantage of the proxy relationship precisely because the proxy is not the actual thing. So it is a self-refuting position, because it tries to reject the underlying motive for both the use of logic and the use of evidence. Which, well, when you see it - now that is rather silly!

What sits on the free will side isn't silly, but non-sense. As in, literally non-sensory. When you really realize that is what is happening, though, it is a mistake to laugh at it. After all, how many fingers am I holding up right now? You aren't sensing it. Non-sense as a belief about your sensory data is actually quite valid, because "you aren't sensing it" genuinely corresponds to your actual states. It is congruent, not incongruent, with the relevant states. In formal reasoning about this topic we therefore differentiate between three things: world state, observation state, and information state.
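To make the self-reference point concrete, here is a minimal Python sketch (the names "oracle" and "contrarian" are placeholders invented for illustration, not something from the links below). It is the same construction that drives the Halting Problem: any claimed perfect predictor can be handed to an agent that consults the prediction about itself and then does the opposite.

    # A hypothetical perfect predictor ("oracle") of what a zero-argument
    # function will return, and a "contrarian" that defeats any such oracle
    # by returning the opposite of whatever is predicted for itself.
    def make_contrarian(oracle):
        def contrarian():
            return not oracle(contrarian)  # do the opposite of the prediction
        return contrarian

    # Plug in any concrete oracle you like; it is refuted on this one input.
    def naive_oracle(fn):
        return True                        # claims every function returns True

    c = make_contrarian(naive_oracle)
    print(naive_oracle(c))                 # the prediction: True
    print(c())                             # the actual behaviour: False

Whatever oracle you substitute, the contrarian built from it falsifies it on at least one input. The agents-modeling-agents regress above is this construction run in both directions at once.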
This is starting to get into the game theory aspects of the problem, which can end up being very motivating once you notice the proof of optimality of non-deterministic policy functions under imperfect information, but if you really want to understand the undecidability you are probably better off checking out something like https://www.wolframscience.com/nks/p750--the-phenomenon-of-f...

Care to dumb this down? I have no clue what you're trying to say here.

Another attempt at dumbing it down: We sometimes don't know what will happen until it does. The best we can do is make a guess. Often this guess will be good, but it is still a guess, and we need to be open to the idea that it was wrong in some respects.

Suppose we could have a perfect guess, though. We could use it to figure out what would happen, and it would always work. Okay, let's use it to determine what happens in the future in which we use it to determine what happens. Do you see the issue here? The guess references itself. So it can't predict the future in advance of it. There comes a point where it is looping and giving you only the leading edge of what is knowable given the amount of time that has passed so far. So we don't have the perfect guess, even when we have the perfect guess. We don't know what will happen before it happens, even if we know the rules that tell us what happens before they happen.

There are a bunch of weird things implied by the math that follows from this, especially when you get into transpositional structures, but it isn't silly so much as it is non-sensory. I mean, it does sound laughable. When Feynman said we guess when we do science and then validate the guesses with experiments, people did laugh. Yet that is not because it is silly. It is because the non-sensory cases can only be differentiated from the sensory cases via the experiment. What often happens, though, is that directly after the moment, people pretend it was all known beforehand. It is a tricky moment, the present.

Some calculations you can make simpler and do faster, but others you can't. When two things that try to predict by simplifying apply themselves to predicting each other... well. In order to think about this sensibly you have to think through the consequences of the infinite self-reference. Logic fails here. So does evidence. For example, if you observe that your opponent is playing badly, modeling them as a bad player doesn't work, because if you choose to do this, they could exploit the bad model by playing better than that model would predict. So even though the evidence suggests they are a bad player, the self-reference considerations force you away from acting as if they actually are.

So let's say you play rock? They play paper. Let's say you play scissors? They play rock. Let's say you play paper. They play scissors. Let's say you fully determine everything they are going to do using your complete knowledge of physics, and you play rock because your model told you they would play scissors... They will play paper, obviously. Let's say you anticipate that and play scissors. Then they play rock. As long as you claim that you can figure everything out, it means they could figure everything out. The harder you push against this, the harder it pushes back. You have to move with the force, diverting it, not fight against it.
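That rock-paper-scissors escalation can be written out directly. This is a small sketch of my own, not something from the linked material: repeatedly playing the best response to whatever you predict the other side will do never settles on a pure choice, it just cycles.

    # "I predict you, so you predict my prediction": iterating best responses
    # in rock-paper-scissors cycles forever instead of converging.
    BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    def best_response(predicted_move):
        """The move that beats what you think the other player will play."""
        return BEATS[predicted_move]

    move = "rock"                      # my model says you will play rock...
    for _ in range(6):
        counter = best_response(move)  # ...so I play paper, so you play scissors, ...
        print(f"predicted {move:8s} -> counter {counter}")
        move = counter

No pure move is a fixed point of that loop, which is what eventually pushes the discussion further down toward probability distributions over moves rather than moves.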
You've got to take advantage of the inevitability of being unable to overpower, not try to overpower. You have to act as if you can do everything at once, because the undecidability allows you to. Not try to determine everything, because if you try to, then you can't, because others try to do the same thing.

The question is equivalent to: "Is ABC a requirement of XYZ?" There is no unanimous definition of either ABC or XYZ, which ultimately leads to profitless discussion. If you can precisely define both terms, then there will be no need for debate. And if you can't define the terms, then, also, there is no need for debate :)

Do your own psychology homework.

You might want to create the debate yourself. Someone just released this tool last week: https://news.ycombinator.com/item?id=35016444

> Debate.

About things we don't even know exist, or what they are if they do exist? There are better ways to spend a Friday.

I think both of those words are nebulous and therefore it is hard to talk about them in a meaningful way. I feel like free will is a made-up idea entirely, and while we might have the experience of free will, the reality is different.

I think drugs are a great thing to think about in reference to free will. There are many drugs that will force your mind into specific modalities. There are drugs that can be injected that will force you to sleep, force you unconscious, or force you to experience reality in a way other than what you are accustomed to. Not eating can result in people becoming hangry without even understanding they are angry because they haven't eaten. Interview results and court judgments come out differently depending on whether lunch has been eaten. Even if you believe in the idea of free will, it seems pretty clear that chemical forces can dominate it. This means that at best free will is a spectrum.

The sights you see are little waves reacting with chemicals in your eyeballs, conducting electricity through various nerves. The various stimuli we can experience all have a physical and chemical basis. These stimuli are then processed by the machinery of our brain, which can alter the structure and machinery of our brain. At least that's my understanding. If everything can be explained by physical processes, then it seems like if you had perfect knowledge about the current state, you could predict the next, and if you can predict one state with perfect knowledge of a previous state, then it seems clear that free will is an illusion and therefore a product of consciousness and not a requirement of it.

If I smashed your hand with a hammer, do you think I could reliably predict you will feel pain? Of course you will. Do you think that even if you feel pain, your response is free will? You can choose to say ow, or try to fight me, or run away so I don't smash your hand again. But what if you had a history of losing fights? What if your testosterone is high or low? What if you have genetic markers that result in underproduction or overproduction of adrenaline? Is the adrenaline causing your heart to race free will? Will the subjective experience of rushing adrenaline influence or control your "decision"? At what point does influence become control? Are counter-influences just other predictable physical systems? Why can't depressed people choose to be happy? Why can't people with ADHD choose to concentrate? I think dementia and mental illness are another interesting avenue for exploring free will and consciousness.
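The "perfect knowledge of the current state lets you predict the next state" picture is easy to make concrete for a toy deterministic system. Here is a sketch using Rule 30, the elementary cellular automaton Wolfram uses as a standard example of computational irreducibility; the choice of Rule 30 is mine, not something from the comment. The rule plus the exact state does determine every later state. The dispute in the replies below is whether, for systems like this, there is any general shortcut to step N other than computing all N steps.

    # A toy deterministic "physics": elementary cellular automaton Rule 30.
    # Knowing the rule and the exact current state fully determines the
    # next state, and hence every later state.
    RULE = 30

    def step(cells):
        n = len(cells)
        nxt = []
        for i in range(n):
            left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            pattern = (left << 2) | (center << 1) | right   # neighborhood as 0..7
            nxt.append((RULE >> pattern) & 1)               # look up the rule bit
        return nxt

    state = [0] * 15 + [1] + [0] * 15    # a single live cell in the middle
    for _ in range(10):
        print("".join("#" if c else "." for c in state))
        state = step(state)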
If I were to try to describe free will somewhat rigorously, I would say free will is the subjective experience of the thinking mind overriding the feeling mind. Consciousness is much harder. The idea clearly exists, because we all have some notion of "I." Being able to say "I feel this way" means that consciousness is an idea that exists. It seems like the idea of consciousness must involve the idea of a closed system, because "I" indicates a separation of one grouping of atoms from another. I'm not sure what the second property is that creates subjective experience. Memory?

> If I smashed your hand with a hammer, do you think I could reliably predict you will feel pain? Of course you will.

In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer. The halting problem is an example: it can be proven that there is no algorithm that correctly determines whether arbitrary programs eventually halt when run. [1]

You are predicting my state in advance of it having been achieved. I am fully capable of intentionally disrupting your prediction, for example by drugging the nerves in my arms such that they cannot send signals to my brain. Your claim of knowledge is a false claim and I deny it.

This is not a pedantic point. It relates directly to the concept of computationally irreducible systems [2][3][4]. These processes create the conditions for non-deterministic outcomes as a consequence of deterministic systems. We then have to ask: since it is possible, should it actually be that way? Which leads to research on optimally solving games under imperfect information. Nash's work shows that optimal strategies are in general mixed rather than pure [5][6][7][8].

> If everything can be explained by physical processes, then it seems like if you had perfect knowledge about the current state, you could predict the next, and if you can predict one state...

This is false. Even if you know the deterministic rules of a system, it is not the case that you can predict the state of that system [1][2][3][4][5][6][7][8].

> I feel like free will is a made-up idea entirely, and while we might have the experience of free will, the reality is different.

Humans seem to struggle with thinking about this for several reasons, but two important ones are that logic breaks down under self-reference [9] and that humans are cooperative with each other. The first is a problem because most of our argumentative tradition descends from Aristotle, which is a logical tradition of debate [10]. The second is a problem because cooperative agents tend to make themselves predictable. They make themselves stand out from "the world" rather than appearing as if "of the world". This is not generally the case in nature, but because we have exceptionally capable senses we don't always realize what the actual decision problems really look like, or how nice we have it due to our cooperative tendencies. For a more representative example, try to spot the predators in the two cited images, which pursue the "of the world" competitive equilibrium [11][12].

[1]: https://en.wikipedia.org/wiki/Undecidable_problem
[2]: https://mathworld.wolfram.com/ComputationalIrreducibility.ht...
[3]: https://www.reddit.com/r/philosophy/comments/4lbck4/computat...
[4]: https://www.wolframscience.com/nks/p750--the-phenomenon-of-f...
[5]: https://en.wikipedia.org/wiki/Strategy_(game_theory)
[6]: https://en.wikipedia.org/wiki/John_Forbes_Nash_Jr.
[7]: https://en.wikipedia.org/wiki/Nash_equilibrium
[8]: https://en.wikipedia.org/wiki/Strategy_(game_theory)#Mixed_s...
[9]: https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
[10]: https://plato.stanford.edu/entries/aristotle-logic/
[11]: https://www.boredpanda.com/blog/wp-content/uploads/2017/04/s...
[12]: https://www.researchgate.net/profile/Michael-Kuba-2/publicat...

This is a well thought out post and I appreciate it because, if nothing else, it gave me things to think about that I hadn't considered.

The halting problem is interesting to think about in relation to this. We can't determine whether a program will halt, but given a Turing machine, we can know the next state of the program given the current state. So the machine is deterministic: knowing the current state tells you the next state. It sounds like your assertion is more that a system cannot be deterministic because the complete state is not knowable, and without the complete state of a system there is no way for it to be deterministic, because state from outside of it will influence it? A Turing machine is decidable from outside itself if its state is finite and known. The halting problem is solvable given finite state.

> You are predicting my state in advance of it having been achieved. I'm fully capable of intentionally disrupting your prediction, for example, by drugging the nerves in my arms such that they cannot send signals to my brain. Your claim of knowledge is a false claim and I deny it.

My claim is that with perfect knowledge of the current state of existence, the next state will be predictable; the hand-smash statement was not rigorous. It is my belief that reality is governed by physical processes. Drugged nerves are still a physical process. The outcome is a function of the inputs. My claim requires the assumption of perfect knowledge of state. I am open to the idea that this is a poor choice of assumption.

> https://www.reddit.com/r/philosophy/comments/4lbck4/computat...

This was a great read; it had satisfying premises and an interesting conclusion. I will have to think about it.

> [5][6][7][8]

I don't think I understand why game theory is relevant, particularly in light of the assumption of perfect knowledge (which very well could lead to a contradiction and therefore be definitely wrong). I definitely think there is probably a contradiction between perfect knowledge and self-reference, thus it is impossible to have perfect knowledge of a system from within the system.

> Even if you know the deterministic rules of a system it is not the case that you can predict the state of that system

This is an interesting and strong statement. The word "predict" seems to be the key to it. If you can model a system and run it to the next state, I would call that a predictable system, while it seems like the statement you are making is that if the only way to find the result of a system is to re-create it and get the next state, then it is not predictable?

> restated: logic breaks down under self-reference, therefore humans struggle with thinking about free will; and humans cooperate with each other, therefore humans struggle with thinking about free will.

I think I will have to read the links to form coherent thoughts.

---

The idea of computational reducibility is quite interesting. The extension of that idea is equally interesting: "computational expand-ability."
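The "halting problem is solvable given finite state" aside can be pinned down with a short sketch (the "toy_step" machine and the configuration encoding are invented for illustration). If a machine can only ever occupy finitely many configurations, an outside observer decides halting by running it and watching for a repeated configuration, since a repeat means it will loop forever. The classical undecidability results need an unbounded configuration space; the thread's other point is about what happens when the predictor is itself part of the system being predicted.

    # Deciding halting from outside a machine with finitely many configurations:
    # run it and watch for a repeated configuration (pigeonhole argument).
    def halts(initial_config, step):
        """step(config) -> next config, or None once the machine halts."""
        seen = set()
        config = initial_config
        while config is not None:
            if config in seen:
                return False        # revisited a configuration: it loops forever
            seen.add(config)
            config = step(config)
        return True                 # step returned None: the machine halted

    # A toy machine over configurations 0..6.
    def toy_step(c):
        if c == 0:
            return None             # halt
        if c <= 3:
            return c - 1            # counts down to a halt
        return 4 + (c - 3) % 3      # 4 -> 5 -> 6 -> 4 -> ... loops forever

    print(halts(3, toy_step))       # True  (3 -> 2 -> 1 -> 0 -> halt)
    print(halts(5, toy_step))       # False (5 -> 6 -> 4 -> 5 -> ...)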
I would submit for consideration that a particular human brain may have a finite number of physical configurations, and therefore, for any given snapshot in time, there is a fundamental limit on the axiomatic and consistent statements that can be captured from it. Finite configurations mean that "there will always be statements about natural numbers that are true, but that are unprovable within the system" is true, but unintuitive. If time has a start point, and the present is another point, all patterns of state (and therefore a finite set of true statements about natural numbers) can at some point be proven via "non-reducible" calculations, i.e. via expansion. Likewise, "shows that the system cannot demonstrate its own consistency" is also true, but via expansion all previously known truths can be shown to be consistent. This is a very intuitive explanation of math to me. With time (computational expansion) true statements can be proven. With time (computational expansion) true statements will be discovered. Because a proof requires the use of true statements, the number of true statements will always be larger than the set of proofs.

> It sounds like your assertion is more that a system cannot be deterministic because the complete state is not knowable, and without the complete state of a system there is no way for it to be deterministic, because state from outside of it will influence it?

Not quite. The choice of perfect knowledge of state is irrelevant. Even with perfect knowledge you can't, for example, predict the thought you think next. If you did, you didn't predict it, you experienced it, because the very act of predicting it was the act of thinking it. There are leading edges to the computation. It is hindsight bias, upon thinking them, to think they were predictable.

> I definitely think there is probably a contradiction between perfect knowledge and self-reference, thus it is impossible to have perfect knowledge of a system from within the system.

This is a lot closer to what I'm actually trying to say. Critically, that leading edge of information is a barrier that is very important, and agents can use it to cause other agents to be unable to predict what will happen before they experience it.

> I don't think I understand why game theory is relevant

The reason it becomes relevant is the structure of optimal decision making. From the math of game theory we learn that the optimal structure of an agent's decision making isn't to play rock, or to play paper, or to play scissors. If you choose any one of these, then an argument from equilibrium considerations - which is basically an argument from the consequences of self-reference - shows the choice is exploitable. So in game theory we don't select rock, and we don't select paper, and we don't select scissors. Instead we select a probability vector over those choices. Basically, it turns out that when things become unknowable, the appropriate response is to change the level of abstraction over which actions are taken.
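To ground the "probability vector over those choices" point, here is a small sketch of my own (not from the cited papers) that measures how much a best-responding opponent can win per round against a given rock-paper-scissors policy.

    # Exploitability of a rock-paper-scissors policy: the best expected
    # payoff an opponent can achieve against it. Pure and biased policies
    # are exploitable; the uniform mixed strategy is not.
    MOVES = ["rock", "paper", "scissors"]
    DEFEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

    def payoff(mine, theirs):
        if mine == theirs:
            return 0
        return 1 if DEFEATS[mine] == theirs else -1

    def exploitability(policy):
        return max(
            sum(policy[m] * payoff(opp, m) for m in MOVES)
            for opp in MOVES
        )

    pure_rock = {"rock": 1.0, "paper": 0.0, "scissors": 0.0}
    biased    = {"rock": 0.5, "paper": 0.3, "scissors": 0.2}
    uniform   = {"rock": 1/3, "paper": 1/3, "scissors": 1/3}

    for name, p in [("pure rock", pure_rock), ("biased", biased), ("uniform", uniform)]:
        print(f"{name:9s} exploitable for {exploitability(p):+.3f} per round")

Only the uniform distribution has zero exploitability, which is the rock-paper-scissors instance of the mixed-strategy equilibrium cited above [7][8].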
Your consideration about the limits is somewhat aligned with how I think about things, but I think you need to take one further step before things will snap into clarity and the entire discussion dissolves. You have a world state in your mental model of the Turing Machine, right? That is why you talk about a specific Turing Machine as if you could look at it from outside it.

What do you think you get when you pause, not from within the Turing Machine, but in the space of Turing Machines implied by your observation so far? How many programs do you think I could write which would output "Hello" midway through the computation? One? Two? Could I suggest to you that there is an infinite number of such potential programs? The observation space corresponds to a sort of superposition of potential world states.

It is worse than that, though. Your observation space? It contains more statements as more computation takes place. Eventually it has to be abstracted or it isn't computationally reducible. So there is actually more than one observation per information state. When you think about things from within the universe, even though a chair is atoms, you are better off letting the superposition that "chair" stands for be highly variable. It corresponds to many different things. So when you enter decision problems you are in actuality in multiple potential universes, in multiple potential observation spaces, and you can't even tell which, one way or another, except by expanding the computation.

> Eventually it has to be abstracted or it isn't computationally reducible.

This should be: "Eventually it should be abstracted, or else you are at risk of being computationally reducible." I was trying to say it isn't computationally reduced in the unabstracted form, but I didn't use the words precisely enough.
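To make the "space of programs consistent with an observation" point concrete, here is a closing sketch (the "candidate_program" family is invented for illustration): a single observed output like "Hello" is compatible with an unbounded family of distinct programs that go on to behave differently, so the observation picks out a set of possible machines rather than one.

    # Infinitely many distinct programs all print "Hello" midway through,
    # then diverge in behaviour; observing "Hello" does not pin down which
    # program produced it.
    def candidate_program(n):
        """Source of the n-th program in an unbounded family."""
        return (
            f"x = {n}\n"
            'print("Hello")\n'
            "print(x * x)\n"
        )

    # Run a few members of the family; each prints "Hello" and then differs.
    for n in range(3):
        print(f"--- program {n} ---")
        exec(candidate_program(n))

Observing "Hello" narrows the space but never to a single program, which is the sense in which an observation corresponds to a superposition of potential world states.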