The Forgotten Solution: Superdeterminism
backreaction.blogspot.com"The Facts" basically say that among the statements, "Your experiment design isn't predestined by the universe to make it accidentally seem like quantum mechanics is true," "the state of the universe today is all you need to know to predict the state of the universe tomorrow," and "an experiment only has one outcome," there is at least one lie. If the first one is a lie that's superdeterminism, the second one the Copenhagen interpretation and the last one Many Worlds. What bothers me about getting philosophical is, philosophers will attempt to choose one or more based on intellectual aesthetic criteria that we developed from the womb onwards in the macroscopic world, while in reality the only legitimate answer is "we don't know." I think that is broadly speaking a problem that hampers the effectiveness of philosophy, there is not enough willingness to say "the present information does not permit a conclusion."
This comment started out good but then stumbled into some odd critique of philosophy as being incapable of dealing with unknowable things, when indeed that seems to be perfectly within the purview of philosophy.
Hume's Problem of Induction, for instance, is exactly an example of philosophical practice grappling with these unanswerables.
So, Hume didn't stop ethical philosophers from trying to derive morals; he only stopped some of them. Wittgenstein and Borges didn't stop philosophers from trying to "beat" language games using only language: they only stopped some philosophers. I'm not saying that there aren't visionary heroes who realize that some discussions aren't going to go anywhere for fundamental reasons; instead, I'm highlighting the fact that even when they do, "all of philosophy" almost never reaches a consensus about quitting the debate. In math, when the Axiom of Choice was shown to be independent of the other axioms, so it was always going to remain an axiom, everybody quit looking for ways to confirm or refute it. I think it's a weakness of philosophy that similar things can't happen.
Hume's problem of induction is not about deriving moral principles (I believe you got it confused with the "is-ought" dichotomy.) Most philosophers have largely given up on giving a rational deductive basis for why we should believe in induction, so if anything that seems to be a perfect example of what you're describing.
I don't think the nature of quantum reality is anywhere near as settled. For decades, we thought that it was impossible to test local hidden-variable theories. Thank god some people were still working on the problem!
I might have read Hume the wrong way, but for me he was one of the few philosophers who basically said “there’s no way for us to know for sure” which can be approximated to “we don’t know”.
My favorite philosopher however remains Heraclitus; had we chosen to go his way we might have had fewer stupid questions, like “is the cat in the box dead or alive?”, and instead we might have straight up come up with the answer “the cat is dead, alive, and all the states between dead and alive, and we’re fine with that”. Unfortunately Aristotle was not fine with accepting the many “states” of the world “happening” all at the same time and went for the binary True-False way, bad-mouthing Heraclitus in the process. We certainly did manage to build a more efficient society by following Aristotle’s way, but I think we have reached a local maximum, or it certainly looks that way. Maybe reverting to the pre-Socratics will help us get over this local maximum.
It's terms like "local maxima" that make me realize how useful math knowledge is in conveying easily understandable concepts quickly. I perfectly understood what you meant, but don't think I could convey the concept in less than 10 words without a reference to "local maxima" or being "over-optimized"
> Hume's Problem of Induction, for instance, is exactly an example of philosophical practice grappling with these unanswerables.
Hume's problem of induction is arguably the last substantial thought on the subject, right up through Popper's bridge problem.
> This comment started out good but then stumbled into some odd critique of philosophy as being incapable of dealing with unknowable things when indeed that seems to be perfectly within the purview of philosophy.
What does it mean to "deal with" unknowable things in this case? If philosophy is claiming this as their purview, what are they going to do with it? I'd posit philosophers have two options:
1. Say, "I don't know." This is the better option, in my opinion, because it's honest, but scientists already said that, so why do we need philosophers to say the same thing? You can speculate beyond this and posit it from the beginning as "if this thing we don't know is true is true, then it would have this effect". But in other fields this would generally be a very low-value sort of discussion--respectable institutions would not, for example, give a lot of funding to scientific experiments which presuppose unstudied phenomena. You'd study the unstudied phenomenon first and come to conclusions there before moving on to further experiments which presuppose it. Philosophy isn't hurting anything by taking this approach, but it's not adding anything to what science has already done.
2. The second option is, you do what philosophers do all too often: simply present your speculation as fact, perhaps hiding an "I don't actually know" in a footnote somewhere so you can point to it when criticized. A common variant of this is teaching ridiculous ideas as equally valid, and then saying you're just teaching the history of philosophy when criticized. This is how, for example, you get the categorical imperative taught in schools: it's trivial to come up with counterexamples where everyone behaving a certain way would be horrible, but if you point this out, philosophers will often simply say that they're just teaching Kant because he's historically important. Yet Kantian ethics are taught right next to much more realistic ethical ideas, and students often can't differentiate which ones make any sense and which ones don't. This would be like teaching flat-earthism in science class, and then saying "it's history of science" when criticized. It's a motte-and-bailey argument[1] and it's dishonest and harmful to rational thought.
It seems to me that science has taken us as far as it's useful to go with regard to determinism, and philosophy has nothing of value to add on the subject.
Hume's Problem of Induction isn't comparable here. In that case, Hume is asking a question which science hasn't/can't ask, which is somewhat useful. I don't think, however, that Hume really answers the question, and I don't think it would be useful to pretend that we know the answer. In the case of Hume's Problem of Induction, philosophy adds the question but not the answer: with superdeterminism, science has already asked the question, and philosophy can't answer it any better, so philosophy has nothing to contribute.
There is also a fourth possibility: that physical space is not similar to Euclidean space at short distances, and locality in Euclidean space is not physical. In this case the entangled particles are actually linked with one another and can transmit information about the filter they are interacting with, but because particles prefer to be linked to nearby particles, the long-range links easily break, not allowing much information to pass.
My issue with the second statement, about knowing the exact state of the universe at any given reference time, is that by definition the information within that state would require the entire space of the universe to store with sufficient detail to make an accurate prediction of future states. (One might also assume it would require a real universe's worth of processing power to compute a new state as well.)
I believe it's impossible to completely isolate any segment of that universe (e.g., to make it smaller and thus predictable within the capability bounds of a larger universe) without literally removing it from that universe. No matter what, every part of an existing universe interacts with every other part, even if very, very indirectly.
As for the question of free will: I believe the biology is largely deterministic. For me, that leaves the main set of questions in the direction of all of the elements that might happen between, outside, or otherwise beyond our current understanding of how the universe works. I feel that if there is any actual freedom in free will, that is where it comes from; otherwise it's just an RNG too complex to understand completely, masking the lack of actual choice.
The fact that nothing can propagate faster than light allows us to isolate segments of the universe from one another. Of course you cannot simulate the whole universe exactly, but simulating a part of it, like a completely closed room, or a different, smaller universe, is still interesting and useful.
Wolfram proposes an interesting solution to the question of free will that does not require any randomness: computational irreducibility. It is the hypothesis that for some computations there is no shortcut: the only way to learn the outcome is to carry the computation out. That is, if you try to predict what an AI will choose, your only option is to create an exact copy and let that copy make the choice.
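For intuition, here is a minimal Python sketch of the idea (my own toy illustration, not anything from Wolfram's writings), using the Rule 30 cellular automaton as the stand-in "computation". As far as anyone knows there is no closed-form shortcut for its center column, so the only way to "predict" the center cell at step n is to run all n steps:

    # Rule 30: new cell = left XOR (center OR right).
    def rule30_step(cells):
        padded = [0] + cells + [0]
        return [padded[i - 1] ^ (padded[i] | padded[i + 1])
                for i in range(1, len(padded) - 1)]

    def center_after(n_steps):
        # "Predicting" the center cell after n_steps -- by simulating every step.
        cells = [0] * n_steps + [1] + [0] * n_steps   # single live cell
        for _ in range(n_steps):
            cells = rule30_step(cells)
        return cells[len(cells) // 2]

    print([center_after(n) for n in range(1, 20)])

An exact copy run on faster hardware would still have to execute the same steps; that is the sense in which the outcome is determined but not predictable ahead of time.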
I think you forgot one: "The principle of locality always applies" / "no spooky action at a distance" -- while local hidden variable interpretations have been ruled out, nonlocal ones are still very much on the table.
Where does Pilot Wave theory fall?
My understanding is that it is possible to conduct an experiment which would invalidate pilot wave theory or confirm it, to the extent that theories are invalidated or confirmed--we just haven't figured out how to conduct the experiment.
The article mentions that Bell's inequality was in a similar position in the past.
Why does pilot wave theory violate locality -- does it assume that the pilot waves travel faster than light? And if so, is it necessary for the pilot waves to travel faster than light or can they be limited to the speed of light and preserve locality?
Pilot wave theory is based on the assumption that every particle is interacting with the entire Universe all the time.
Does it have to be the simultaneous version of the current universe or can it be the universe as it was distance/c ago?
Actually with relativity and all I'm not exactly sure there is just a single correct definition of the instantaneous state of the universe.
Experiments have shown that pilot waves would have to travel faster than the speed of light.
My understanding of the experiment is as follows:
Take two entangled photons, beam them up to satellites far away from each other. The satellites have detectors that measure the polarization angle from 0 to 360 degrees. Since entangled photons have opposite polarization you'd expect an inverted V (red line): https://en.wikipedia.org/wiki/Bell%27s_theorem#/media/File:B...
Instead, you get the blue line. Which is weird, because it is basically a cosine curve, and implies that the photons are able to determine the relative angle of the detectors. The crazy part is that this curve still holds even if those detectors are very far apart and you complete the experiment before any information about the relative angles of the detectors would have time to pass from one detector to the other at the speed of light. This is what implies that a pilot wave would have to move faster than the speed of light.
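To make the gap concrete, here is a toy Monte Carlo (my own sketch, not the actual satellite experiment; the particular hidden-variable rule and the angles are arbitrary illustrative choices). A shared-hidden-polarization model gives the straight-line correlation, while the quantum prediction is the cosine curve:

    import math, random

    def lhv_outcome(hidden, angle):
        # +1 if the shared hidden polarization is closer to this polarizer's
        # axis than to its perpendicular, else -1 (a local deterministic model).
        d = (hidden - angle) % math.pi
        d = min(d, math.pi - d)
        return 1 if d < math.pi / 4 else -1

    def lhv_correlation(delta, n=200_000):
        total = 0
        for _ in range(n):
            hidden = random.uniform(0, math.pi)   # set at the source, carried by both photons
            total += lhv_outcome(hidden, 0.0) * lhv_outcome(hidden, delta)
        return total / n

    for deg in (0, 15, 22.5, 30, 45):
        d = math.radians(deg)
        print(f"{deg:5} deg   local model ~ {lhv_correlation(d):+.2f}"
              f"   quantum = {math.cos(2 * d):+.2f}")

No assignment of pre-existing local outcomes of this kind can reach the cosine curve at the intermediate angles; that gap is the content of Bell's inequality.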
> Why does pilot wave theory violate locality -- does it assume that the pilot waves travel faster than light?
Yes. The pilot wave at any given point can be affected instantaneously by changes anywhere else in the universe.
But in this regard the pilot wave is not different from the standard Schroedinger’s wavefunction, is it?
No, it isn't. Both are interpretations of non-relativistic quantum mechanics, and in non-relativistic QM there is no speed of light limit.
Schroedinger’s wavefunction isn't sufficient in a relativistic context
True. To be fair, the relativistic extensions of Bohmian mechanics are not as advanced as those of the "standard" theory. But they are not necessarily impossible, and the requirement of a preferred foliation may not be so unacceptable if the history of the universe goes back to a singularity (so there is a "local time since the singularity" that gives some sense to the idea of simultaneity).
I am not very familiar with Bohmian mechanics/pilot wave theory; I just wanted to make sure it was clear to everyone what the Schrödinger equation is and isn't.
I don't think the problem with superdeterminism is the lack of free will, but with the way it doesn't really give you anything mentally to work with. It posits some early state from which everything could be deterministically extrapolated... except that state is both very complicated and completely hidden. It takes all of the probabilities and shoves them in a black box and says, "The answers exist, and they're in there. But you can't actually look in the box for the answers. You have to go do the experiment and wait for the speed of light to propagate the answer to you."
Like all interpretations, it's mathematically equivalent to any other. It's just a question of what helps you think about the problem, and I don't think many people find it very edifying. You can replace the box with a random number generator, which is at least small enough to fit in your pocket. The superdeterminism box appears to have been crammed full of untold centillions of answers... none of which are accessible beforehand.
If there were reason to think that the superdeterminism box were somehow smaller -- if it all really came down to just one random bit, say, that had been magnified by chaotic interactions to appear like more -- that would attract some attention. And I suppose it would be conceptually testable, by running Laplace's demon in reverse, except that that's not possible either from inside the universe.
So it doesn't really come as a surprise that superdeterminism falls behind MWI or Copenhagen or even pilot wave, because each of those hands you something that you can use to mentally organize the world. Superdeterminism just seems to hand you a catchphrase: "As it was foretold in the Long Ago -- but which I just found out about".
What's wrong with a pseudo-random number generator? You start the universe with 1 million random bits and then just iterate your function on them. How would we detect repetition at the 2^1,000,000 level? Maybe the universe would repeat itself after a while, but how would we know?
Superdeterminism also plays nicely with the simulation hypothesis. You seed the virtual machine with some randomness and the physical laws and then you run the simulation.
I don't believe you'd need even a million random bits. It's conceivable that only a few random bits are actually required, with iteration taking care of the rest.
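As a toy illustration of "a few bits plus iteration" (my own sketch; the xorshift update is just a stand-in for whatever the real rule would be): the seed is the only randomness, re-running from the same seed replays the exact same history, and flipping one seed bit gives a completely different-looking one.

    def xorshift64(state):
        # One deterministic update of a 64-bit xorshift state.
        state ^= (state << 13) & 0xFFFFFFFFFFFFFFFF
        state ^= state >> 7
        state ^= (state << 17) & 0xFFFFFFFFFFFFFFFF
        return state

    def history(seed, steps=6):
        state, out = seed, []
        for _ in range(steps):
            state = xorshift64(state)
            out.append(state & 0xFF)      # the "observable" part of each state
        return out

    print(history(0b1011))   # same seed, same universe
    print(history(0b1011))
    print(history(0b1010))   # one seed bit flipped: unrecognizably different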
There's nothing wrong with that. I just don't think people find it very useful as an organizing principle, so it doesn't attract a lot of attention.
The thing about superdeterminism is that it's only interesting if you want to argue philosophy. If you're dealing with hidden variables (or even measurement errors) the only practical tool in your box for handling them is probability distributions.
So either way, you've got a probability distribution. And at this point people just apply Occam's Razor and get on with their lives. You can theorize an infinite number of systems that work exactly like the real world. The question is whether they're useful.
> The thing about superdeterminism is that it's only interesting if you want to argue philosophy.
Like the Many Worlds Interpretation!
Could an untestable underlying theory inspire new models and implications that are testable?
In a general case, yes. This happened recently with gravitational waves. The construction of multiple gravitational wave detectors across the planet allowed us to test a previously untestable hypothesis about the number of dimensions that gravity can act in.
> allowed us to test a previously untestable hypothesis
So it is testable now? Is everyone here using the word "untestable" in the same way, i.e. "untestable today with the current state of the art, but it might be tomorrow" vs. "untestable ever, as a principle, even with perfect tech"?
Untestable is a layman's word; everyone here should be saying "falsifiable".
Many Worlds isn't falsifiable and hence untestable; but untestable can also mean not possible to test with current tech. Falsifiable is more accurate here.
"Falsifiable" is a fantastic term but the philosophy of science did not stop with Popper, and it's not the only term we have available to discuss competing theories.
What other terms would you use? "Falsifiable" is more accurate than "untestable"?
"More accurate" is a good way to phrase it. "More accurate" also describes GR when you compare it to Newtonian mechanics. Strictly speaking, Newtonian mechanics is false and has been falsified if you subscribe to the Popperian view. However, if Newtonion mechanics is false, is it useless? No.
As it turns out, there are a few problems with the epistemology of falsifiability. The main ones I can think of are:
1. Duhem-Quine problem - in any experiment there is a large number of auxiliary hypotheses under test in addition to the primary hypothesis. These are very numerous, and we need a framework for deciding how to apply the results of our experiment to the many hypotheses.
2. Statistical claims may be unfalsifiable. Consider a theory that claims that a coin flip has a 50% probability of being heads... how does one falsify this? One can't strictly falsify it, but you can show that the evidence is unlikely given the hypothesis. So we need some framework that connects statistical and probabilistic evidence to our knowledge of the world.
Or in summary, the problems with falsifiability are that falsifying a theory doesn't give us the information that we want, and it's impossible to falsify many theories. To abuse analogies, falsifiability is kind of like trying to cross a river with your car, when there's no bridge and the car won't start.
One approach other than Popperian falsifiability is a Bayesian system of belief and likelihood. This is only one direction that the philosophy of science is exploring, but it is probably the one most familiar to HN readers.
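As a toy example of how the Bayesian framing handles the coin case from point 2 above (the prior and the rival bias are arbitrary choices of mine): you never falsify "the coin is fair", you just update your degree of belief in it relative to a rival hypothesis as the flips come in.

    def posterior_fair(heads, tails, prior_fair=0.5, rival_bias=0.7):
        # P(fair | data) when the only alternative considered is a coin
        # biased 70/30 toward heads; prior and rival bias are arbitrary.
        like_fair = 0.5 ** (heads + tails)
        like_rival = rival_bias ** heads * (1 - rival_bias) ** tails
        evidence = prior_fair * like_fair + (1 - prior_fair) * like_rival
        return prior_fair * like_fair / evidence

    print(posterior_fair(heads=5, tails=5))     # ~0.70: mild support for fairness
    print(posterior_fair(heads=70, tails=30))   # ~0.0003: belief shifts to the rival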
Amazing, I appreciate the depth you went in and will be reading more on these topics. Thank you.
Any resource you could recommend for Bayesian system of belief? I understand Bayesian probability/math but I haven't explored it as a philosophy.
I don't have resources for Bayesian systems of belief. I do like Error and the Growth of Experimental Knowledge by Deborah Mayo, but the book is a bit intimidating. I admit I haven’t finished reading it, either.
In superdeterminism, each time a particle has to collapse, instead of rolling a die it looks into a secret table of hidden variables that was calculated at the beginning of the universe. The table was calculated carefully so that the apparently random choices follow all the laws of quantum mechanics, and the results are equivalent to what you would expect if any of the other interpretations were correct.
To calculate this secret table you must simulate all the interactions and paths in the universe until it ends, because you must know which particles will be entangled, which results the "random" generators in the experiments will produce, and so on.
So the universe is only a movie that follows the random choices made at the beginning of the universe. But the choices are not arbitrary: they have the correct values so that when the events really happen they follow the laws of physics. For example, the random choices at the beginning of the universe make it look like you can't transmit information faster than light.
Physics studies the laws of the real universe, but we can redefine physics as the study of the laws that govern the random number generator. Both real-physics and initial-RNG-physics follow special relativity. Both agree about QM. Both agree about the Bell inequality.
So with superdeterminism we solve the problem of QM in the real world, because everything is already determined. Now the problem is how the RNG at the beginning of the universe works to simulate QM and all the other effects. Let's call the study of that RNG "physics". Now the problem is as hard as it was before superdeterminism.
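A toy caricature of that "secret table" picture (my own sketch, with arbitrary angles): filling the table already requires sampling the joint quantum statistics, with knowledge of which filter settings the experimenters will later pick, and the "run of the universe" just replays it, which is why the hard part has only been moved, not removed.

    import math, random

    def fill_table(angle_a, angle_b, n_trials):
        # Precompute outcome pairs with the quantum joint statistics
        # P(same outcome) = cos^2(angle_a - angle_b) for entangled photons.
        p_same = math.cos(angle_a - angle_b) ** 2
        table = []
        for _ in range(n_trials):
            a = random.choice([+1, -1])
            b = a if random.random() < p_same else -a
            table.append((a, b))
        return table

    # "Beginning of the universe": the table is written down, using knowledge
    # of the filter angles the experimenters will eventually choose.
    TABLE = fill_table(math.radians(0), math.radians(22.5), n_trials=100_000)

    # "Running the universe": each measurement just reads its predestined entry.
    correlation = sum(a * b for a, b in TABLE) / len(TABLE)
    print(correlation, "vs QM:", math.cos(2 * math.radians(22.5)))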
What you describe is not superdeterminism, but a replay of a non-local theory. The important part happens in the first run, when you calculate the table.
What superdeterminism says is that there exists a local and deterministic evaluation rule that will compute consecutive states of the universe, but simply because of the way the rule works, experimenters far away always end up choosing the experiments that yield the correct results.
Superdeterminism is unpopular because the existence of such an evaluation rule seems very unlikely.
From the article:
> Where do these correlations ultimately come from? Well, they come from where everything ultimately comes from, that is from the initial state of the universe. And that’s where most people walk off: They think that you need to precisely choose the initial conditions of the universe to arrange quanta in Anton Zeilinger’s brain just so that he’ll end up turning a knob left rather than right. Besides sounding entirely nuts, it’s also a useless idea, because how the hell would you ever calculate anything with it? And if it’s unfalsifiable but useless, then indeed it isn’t science. So, frowning at superdeterminism is not entirely unjustified.
Yes, despite the title, the article has better arguments against superdeterminism than for it.
> there exists a local and deterministic evaluation rule that will compute consecutive states of the universe
If this is the correct meaning of superdeterminism, then it doesn't make sense. Saying that there are some unknown rules that explain something is not a scientific theory.
You can solve the quantum gravity problem saying that there are some unknown rules that explain that. You can solve the renormalization problem saying that there are some unknown rules that explain that. You can solve everything saying that there are some unknown rules that explain that.
It is not that simple. Bell inequalities show that there cannot exist a local evaluation rule that could explain experiments with entangled particles where two people pick filters independently.
Superdeterminism is saying that strictly speaking we do not have a proof that two people (whose past light cones intersect) can pick the filters independently, so Bell inequalities still would allow local rules, that in addition to describing particles, somehow also restrict the choices that experimentators can make.
So superdeterminism is not a scientific theory, but a hypothesis that there exists a scientific theory that would fit in the small crack left open by Bell inequalities.
No one knows how to construct such a theory, and most people think it cannot be constructed.
I'm not fond of superdeterminism since it's not that useful for making predictions. Any purely deterministic model has implications for free will, so that doesn't seem to be a legitimate criticism.
Actually I would like to know more about provable violations of Bell inequalities, as I am somewhat attached to local determinism and haven't seen an experiment that I would consider convincing. I mean the theories behind the experiments are sound, but I'm not sure they're actually measuring what they think they are measuring due to limitations in the experiment setup -- in order to prove a violation of locality your system cannot be in a cyclostationary equilibrium.
In such an equilibrium the system state effectively becomes a standing wave so you risk measuring an effect that was actually a result of a previous cycle and mistakenly interpret it as being a result of the current cycle -- implying a violation in locality because the "cause" was outside of the light cone of the effect. Note that this is analogous to confusing the group and phase velocities of a radio wave (https://www.quora.com/What-is-the-difference-between-phase-v...).
> In such an equilibrium the system state effectively becomes a standing wave so you risk measuring an effect that was actually a result of a previous cycle and mistakenly interpret it as being a result of the current cycle
I don't know where you're getting this from, but it doesn't describe quantum systems on which Bell inequality violations have been experimentally confirmed (such as photon pairs from parametric down conversion).
The only "loophole" that has not been completely closed at this point is that we don't have 100% efficient detectors, but we have detectors that are well over 90% efficient so the claim that somehow all the stuff that will "fix" the Bell inequality violations is hiding in the small percentage of photons not being detected isn't very compelling.
In short the problem I'm addressing is that the interpretation of the experiment assumes that the system is memoryless, so that the only thing being measured is the interaction with the particles being measured.
In the experiments generating the photon pairs from parametric downconversion, for example, does the entire system start up, send 1 photon which gets split into the entangled photon pairs which then go to the detectors -- with no other photons generated?
If there is a warm-up period for the equipment or other photons are emitted or absorbed then there is the potential for memory effects that could interfere with the measurements.
For instance if we treat light as a wave then the cosine correlation with angle we see in the basic "two entangled photons with polarizing lenses experiment" is exactly what we would expect to see. The difficulty is simply resolving this with the particle nature of photons. If the experimental system has memory then it could easily have the phase of the effective wave or some other function of the history of photons encoded in the state of the system.
There are probably some ways to compensate for these memory effects and demonstrate their (non)existence, but I am not a physicist.
> the problem I'm addressing is that the interpretation of the experiment assumes that the system is memoryless
That's easy to verify by testing the various components--parametric down conversion, prisms, beam splitters, etc.--and showing that if you shine repeated photons on them from the same source, prepared in the same state, they all come out in the same state, or more generally give the same results. All of the optical components involved in these experiments have been tested in this way: if they had failed such tests, they wouldn't be used in experiments because we wouldn't be able to be confident in their behavior.
> In the experiments generating the photon pairs from parametric downconversion, for example, does the entire system start up, send 1 photon which gets split into the entangled photon pairs which then go to the detectors -- with no other photons generated?
For current photon sources, it's impossible to control exactly when they emit a photon. The sources are so inefficient (in terms of converting input energy into photons that are useful for the experiment) that they end up emitting photons slowly enough that only one at a time is inside the apparatus. However, a typical experiment does not use just one photon. It has to take data from many photons because the results are statistical, so you need enough runs to do statistics.
> If the experimental system has memory then it could easily have the phase of the effective wave or some other function of the history of photons encoded in the state of the system.
We know how to design systems that do this: they're called "detectors" and "computers that store data". But such systems have to be carefully designed to do those jobs. Optical components like prisms and beam splitters are not designed to do that: they're designed to do exactly the opposite, to act the same way on every photon that comes into them in the same input state. As I noted above, those components have been extensively tested to make sure they do in fact do that; if they didn't, they wouldn't be used in experiments.
> if you shine repeated photons on them from the same source, prepared in the same state, they all come out in the same state, or more generally give the same results.
Those kinds of measurements would violate the uncertainty principle. You can't know the complete state going into the system or the complete state going out. You can run some tests and justify other assumptions based on accepted theories. We generally have a good idea what happens when lots of photons pass through these components. We have some ideas of what happens to single photons (in a statistical sense), but the fundamental question we are investigating is whether there even is a local deterministic description of what happens to single photons passing through the component.
> The sources are so inefficient (in terms of converting input energy into photons that are useful for the experiment)... a typical experiment does not use just one photon. It has to take data from many photons
I was aware of that and it's part of my criticism. If the emitter were only to emit usable photons when it's "in the right state", what stops the "right state" for emitting photons from becoming correlated with the polarizers?
There are a bunch of "unusable" photons bouncing around interacting with everything and transporting global state. Then there are the "usable" photons that get reflected, absorbed and re-emitted by components of the test bed. Any time they interact with anything they modify the state of whatever they touch. What happens to the photons that reflect off of the polarizers and travel back into the emitter?
If a photon bounces off of a mirror it had to have 1. transferred momentum to whatever it hit, and 2. induced a sufficiently strong opposing electromagnetic field to cause the photon to be reflected or re-emitted. While these are tiny effects, they are roughly the same order as the effects that caused the photon to be reflected in the first place, and they all require a change in state of the mirror so that momentum is conserved and Maxwell's laws are not violated (my guess would be that this could cause shifts in electron orbital or proton spin orientation, but that's a bit beyond me).
> Those kinds of measurements would violate the uncertainty principle.
No, they don't. The uncertainty principle places limits on measurements of non-commuting observables on the same system. We are not talking about that here. See below.
> You can't know the complete state going in to the system
Sure you can: just prepare the system in a known state. For example, pass your photon through a vertically oriented polarizing filter: if it comes through, it must be vertically polarized, so you have complete knowledge of its polarization state. (You might have to try multiple photons to get one that passes through: that's why photon sources in these experiments are often inefficient.)
> or the complete state going out
Sure you can: you measure it. For example, you pass the vertically polarized photon that just came through your vertical polarization filter through a beam splitter, and you have detectors at each output of the beam splitter. Exactly one detector will fire for each photon.
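A toy version of that prepare-then-measure sequence (my own sketch, with an arbitrary 30-degree beam splitter setting): the vertical filter post-selects photons into a known state, and at the beam splitter exactly one of the two detectors fires per photon, with Malus-law statistics.

    import math, random

    def prepare_vertical():
        # A randomly polarized source photon either passes the vertical
        # filter (and exits vertically polarized, angle 0) or is absorbed (None).
        pol = random.uniform(0, math.pi)
        return 0.0 if random.random() < math.cos(pol) ** 2 else None

    def measure(pol, splitter_angle):
        # Polarizing beam splitter: the "transmitted" detector fires with
        # probability cos^2(pol - splitter_angle), otherwise the "reflected" one.
        return "T" if random.random() < math.cos(pol - splitter_angle) ** 2 else "R"

    counts = {"T": 0, "R": 0}
    while sum(counts.values()) < 10_000:
        pol = prepare_vertical()
        if pol is None:                    # filter absorbed it; try another photon
            continue
        counts[measure(pol, math.radians(30))] += 1

    print(counts, "expected T fraction:", round(math.cos(math.radians(30)) ** 2, 3))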
> If the emitter were to only emit useable photons when it's "in the right state", what stops the "right state" for emitting photons to become correlated with the polarizers?
> There are a bunch of "unusable" photons bouncing around interacting with everything and transporting global state.
It looks like you don't have a good understanding of how the "emitter" works. What you are calling the "emitter" is really a filter, like the vertical polarizer described above: it throws away the photons coming from a source (like a laser) that don't meet a particular requirement (like vertical polarization). The thrown away photons are either absorbed (as in the case of the polarizer) or they just pass through the apparatus altogether and fly away (as in the case of parametric down conversion, for example: only a small percentage of the laser photons will be down converted, the rest just fly away and are gone).
In no case are the photons not used kept "bouncing around". They're gone. And the photons in the "right" state are just the ones that make it through the filter and are therefore in a known state when they come out, because that's how the filter works: the filter is uncorrelated with what's inside the experiment because, again, that's how the filter works (and it is tested to make sure it works that way).
> What happens to the photons that reflect off of the polarizers and travel back into the emitter?
There aren't any. See above.
> If a photon bounces off of a mirror it had to have 1. transfered momentum to whatever it hit, and 2. induced a sufficiently strong opposing electromagnetic field to cause the photon to be reflected or re-emitted.
1. Yes, but in these experiments the mirror is fixed to the Earth, so the momentum is transferred to the Earth, which means it's effectively gone. The entire Earth is not going to have a "memory" that can become correlated with the rest of the experiment.
2. No. You are thinking of it classically, but we are not talking about a classical process.
There is something very Taoistic about superdeterminism.
And there is something very Taoistic about homotopy type theory too.
Also, I feel that both superdeterminism and homotopy type theory have traces of the holographic principle in them in a somewhat conceptual or abstract way.
Perhaps there exists a nice correspondence between superdeterminism and homotopy type theory that can be used to extend (in a purely functional and categorical way) the simulation hypothesis into a full-fledged theory (and perhaps with its own nice little axiomatic system) to make sense of reality.
The problem is that superdeterminism contains a principle of explosion:
http://en.wikipedia.org/wiki/Principle_of_explosion
If superdeterminism explains quantum mechanics, why not cosmic inflation? Why not matter asymmetry? Why not abiogenesis? Why not Brexit? Superdeterminism, by construction, can explain everything — and all that's left to do is pray to God.
I will have to disagree with that.
Firstly, I don't think anyone has actually formalised superdeterminism in a way that the principle of explosion can be logically introduced to formally undermine it. What you are doing here is akin to stretching the conceptual relevance of Gödel's incompleteness theorems and trying to use them to prove or disprove the existence of God.
Basically I don't see how it makes sense to say that superdeterminism contains a principle of explosion. Perhaps my interpretation of superdeterminism is very different from yours. Or maybe I simply don't see the picture as you do. If that is the case please enlighten me.
Secondly I think you are missing the point of superdeterminism here.
There is something very computational (and perhaps Taoistic) about superdeterminism. Apparently under this framework the whole notion of "explaining things" is nullified and becomes meaningless. It occurs to me that our everyday notion of "explaining things" exists at a lower abstraction level and thus loses relevance in the face of superdeterminism. I believe if you really want to undermine superdeterminism as a theory (or as a philosophy), the more relevant question here to ask is: is there anything useful/meaningful about reality (or the universe) that can be inferred assuming superdeterminism? And then of course if you are a scientist you would then ask: are they experimentally verifiable?
> I believe if you really want to undermine superdeterminism as a theory (or as a philosophy), the more relevant question here to ask is: is there anything useful/meaningful about reality (or the universe) that can be inferred assuming superdeterminism? And then of course if you are a scientist you would then ask: are they experimentally verifiable?
To even ask the question you must deny your own premise. If indeed superdeterminism is true, then any experimental verification is nullified by definition: the results of any and all experimentation are themselves superdetermined, regardless of any scientific framework.
The last part was supposed to be taken with a grain of irony in the spirit of Anton Zeilinger:
"We always implicitly assume the freedom of the experimentalist... This fundamental assumption is essential to doing science. If this were not true, then, I suggest, it would make no sense at all to ask nature questions in an experiment, since then nature could determine what our questions are, and that could guide our questions such that we arrive at a false picture of nature."
I guess that is the problem most people have with superdeterminism. Intellectuals in this day and age are too scientifically trained to have any romanticisation of reality under frameworks like synchronicity (even though it was popularised by Pauli before it lost mainstream appeal after actual funding went into unfruitful statistical research in the 80s) or (in Newtonian times) alchemy and the love of God. This is why superdeterminism is so unpopular.
I just really like superdeterminism because I think it is cute and I believe in the Tao.
What does superdeterminism have to do with the Tao?
There was something formless and perfect
before the universe was born.
It is serene. Empty.
Solitary. Unchanging.
Infinite. Eternally present.
It is the mother of the universe.
For lack of a better name,
I call it the Tao.

It flows through all things,
inside and outside, and returns
to the origin of all things.

The Tao is great.
The universe is great.
Earth is great.
Man is great.
These are the four great powers.

Man follows the earth.
Earth follows the universe.
The universe follows the Tao.
The Tao follows only itself.
That's beautiful.
Can someone explain more clearly how being in a deterministic universe resolves the “problem” of Bell inequalities? It seems like even if the universe were deterministic it would not cause the classic polarizing-filters Bell inequality to seem “reasonable”. In fact it makes it seem less reasonable to me!
The determinism doesn't solve the problem; it makes the non-locality more visible, because some people think that the state of two particles far away influencing one another is somehow worse than the wave function in the whole universe changing its value all at once.
The superdeterminism "solves" the problem by claiming that there is no problem to begin with, and the results look non-local only because the experimenters always pick experiments that look non-local.
How a local deterministic theory can create behavior as complex as thinking people, and at the same time constrain it in such a way that the time taken to play a Mario level is correlated with a photon experiment a year later, is left as an exercise for the reader.
The point is not that it's "reasonable"; it's that it just is what it is. It doesn't resolve the problem, it makes it meaningless.
If Superdeterminism means that the initial state of the universe is such that the universe appears to follow quantum mechanics... why? Why would every single decision have its resolution set in a manner that appears to follow QM?
Superdeterminism is a self-defeating philosophy. In essence, it cedes everything to random chance, and makes all scientific inquiry meaningless. There is no longer any "why" or "how." There is merely, "That's just the way it is." Any apparent order or structure which might be observed is exactly that: merely apparent. Therefore any attempt to understand the universe is vain.
It is little better than the presumption that planets move because a prime mover moves them. It is, in essence, to give as the final answer: "Planets move as they do because they cannot do anything else."
>In essence, it cedes everything to random chance, and makes all scientific inquiry meaningless. There is no longer any "why" or "how." There is merely, "That's just the way it is."
Which doesn't make it wrong...
And indeed, is based on the same math as the more mainstream interpretations.