Has Reductionism Run its Course?


For more than 2000 years, ever since Democritus’ first musings about atoms, reductionism has driven scientific inquiry. The idea is simple enough: Things are made of smaller things, and if you know what the small things do, you learn what the large things do. Simple – and stunningly successful.

After 2000 years of taking things apart into smaller things, we have learned that all matter is made of molecules, and that molecules are made of atoms. Democritus originally coined the word “atom” to refer to indivisible, elementary units of matter. But what we have come to call “atoms”, we now know, are made of even smaller particles. And those smaller particles are yet again made of even smaller particles.


The smallest constituents of matter, for all we currently know, are the 25 particles which physicists collect in the standard model of particle physics. Are these particles made up of yet another set of smaller particles, strings, or other things?

It is certainly possible that the particles of the standard model are not the ultimate constituents of matter. But we presently have no particular reason to think they have a substructure. And this raises the question whether attempting to look even closer into the structure of matter is a promising research direction – right here, right now.

It is a question that every researcher in the foundations of physics will be asking themselves, now that the Large Hadron Collider has confirmed the standard model, but found nothing beyond that.

20 years ago, it seemed clear to me that probing physical processes at ever shorter distances is the most reliable way to better understand how the universe works. And since it takes high energies to resolve short distances, this means that slamming particles together at high energies is the route forward. In other words, if you want to know more, you build bigger particle colliders.
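
Roughly speaking, the connection comes from the uncertainty principle: the smallest distance a probe of energy E can resolve is of order

$$ \Delta x \sim \frac{\hbar c}{E} \approx \frac{200~\text{MeV}\cdot\text{fm}}{E}\,, $$

so resolving, say, 10^-19 meters takes energies in the TeV range. Short distances and big colliders go hand in hand.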

This is also, unsurprisingly, what most particle physicists are convinced of. Going to higher energies, so their story goes, is the most reliable way to search for something fundamentally new. This is, in a nutshell, particle physicists’ major argument in favor of building a new particle collider, one even larger than the presently operating Large Hadron Collider.

But this simple story is too simple.

The idea that reductionism means things are made of smaller things is what philosophers more specifically call “methodological reductionism”. It’s a statement about the properties of stuff. But there is another type of reductionism, “theory reductionism”, which instead refers to the relation between theories. One theory can be “reduced” to another if the former can be derived from the latter.

Now, the examples of reductionism that particle physicists like to put forward are the cases where both types of reductionism coincide: Atomic physics explains chemistry. Statistical mechanics explains the laws of thermodynamics. The quark model explains regularities in proton collisions. And so on.
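
The statistical-mechanics case is the textbook example: Boltzmann’s relation

$$ S = k_B \ln \Omega $$

expresses the macroscopic entropy S in terms of the number Ω of microstates of the constituent particles. The theory is derived from, and the stuff is made of, smaller things.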

But not all cases of successful theory reduction have also been cases of methodological reduction. Take Maxwell’s unification of the electric and magnetic force. From Maxwell’s theory you can derive a whole bunch of equations, such as the Coulomb law and Faraday’s law, that people used before Maxwell explained where they come from. Electromagnetism is therefore clearly a case of theory reduction, but it did not come with a methodological reduction.
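
To see the theory reduction at work, take Gauss’s law, one of Maxwell’s equations, and apply it to a static point charge q. Spherical symmetry immediately gives back Coulomb’s law:

$$ \nabla\cdot\vec{E} = \frac{\rho}{\varepsilon_0} \quad\Rightarrow\quad E(r) = \frac{q}{4\pi\varepsilon_0 r^2}\,. $$

Nothing in this derivation takes the electromagnetic field apart into smaller constituents; the older law simply reappears as a special case of the more fundamental theory.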

Another well-known exception is Einstein’s theory of General Relativity. General Relativity can be used in more situations than Newton’s theory of gravity. But it is not the physics on short distances that reveals the differences between the two theories. Instead, it is the behavior of bodies at high relative speed and strong gravitational fields that Newtonian gravity cannot cope with.
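
Schematically, Newtonian gravity comes back out of General Relativity in the limit of weak fields and slow motion, where the time-time component of the metric reduces to the familiar potential,

$$ g_{00} \approx -\Bigl(1 + \frac{2\Phi}{c^2}\Bigr)\,, \qquad \nabla^2 \Phi = 4\pi G \rho\,, $$

and the corrections become relevant when Φ/c² or v/c are no longer small, not when distances get short.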

Another example that belongs on this list is quantum mechanics. Quantum mechanics reproduces classical mechanics in suitable approximations. It is not, however, a theory about small constituents of larger things. Yes, quantum mechanics is often portrayed as a theory for microscopic scales, but, no, this is not correct. Quantum mechanics is really a theory for all scales, large to small. We have observed quantum effects over distances exceeding 100 km and for objects weighing as “much” as a nanogram, composed of more than 10^13 atoms. It’s just that quantum effects on large scales are difficult to create and observe.
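
The “suitable approximation” can be stated in one line. By Ehrenfest’s theorem, quantum expectation values obey equations of Newtonian form,

$$ \frac{d}{dt}\langle \hat{x}\rangle = \frac{\langle \hat{p}\rangle}{m}\,, \qquad \frac{d}{dt}\langle \hat{p}\rangle = -\Bigl\langle \frac{\partial V}{\partial x}\Bigr\rangle\,, $$

which turn into classical trajectories whenever the wave packet is narrow compared to the scale on which the potential varies. Note that nothing in this limit refers to the size of the system.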

Finally, I would like to mention Noether’s theorem, according to which symmetries give rise to conservation laws. This example is different from the previous ones in that Noether’s theorem was not applied to any theory in particular. But it has resulted in a more fundamental understanding of natural law, and therefore I think it deserves a place on the list.
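
The simplest case already shows the pattern: if the Lagrangian L(q, q̇) does not depend explicitly on time – that is, the theory is symmetric under time translations – then the quantity

$$ E = \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L $$

is conserved. The insight is about the structure of natural law itself, not about what anything is made of.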

In summary, history does not support particle physicists’ belief that a deeper understanding of natural law will most likely come from studying shorter distances. On the contrary, I have begun to worry that physicists’ confidence in methodological reductionism stands in the way of progress. That’s because it suggests we ask certain questions instead of others. And those may just be the wrong questions to ask.

If you believe in methodological reductionism, for example, you may ask what dark energy is made of. But maybe dark energy is not made of anything. Instead, dark energy may be an artifact of our difficulty averaging non-linear equations.
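
The underlying issue is easy to state, if hard to resolve: averaging does not commute with non-linear equations. Schematically,

$$ \bigl\langle G_{\mu\nu}[g] \bigr\rangle \neq G_{\mu\nu}\bigl[\langle g\rangle\bigr]\,, $$

so a smooth, averaged metric plugged into Einstein’s equations does not exactly reproduce the average of the true dynamics, and the mismatch can masquerade as an additional energy component.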

It’s similar with dark matter. The methodological reductionist will ask for a microscopic theory and look for a particle that dark matter is made of. Yet, maybe dark matter is really a phenomenon associated with our misunderstanding of space-time at long distances.

Maybe the biggest problem that methodological reductionism causes lies in the area of quantum gravity, that is, our attempt to resolve the inconsistency between quantum theory and general relativity. Pretty much all existing approaches – string theory, loop quantum gravity, causal dynamical triangulation (check out my video for more) – assume that methodological reductionism is the answer. Therefore, they rely on new hypotheses for short-distance physics. But maybe that’s the wrong way to tackle the problem. The root of our problem may instead be that quantum theory itself must be replaced by a more fundamental theory, one that explains how quantization works in the first place.

Approaches based on methodological reductionism – like grand unified forces, supersymmetry, string theory, preon models, or technicolor – have failed for the past 30 years. This does not mean that there is nothing more to find at short distances. But it does strongly suggest that the next step forward will be a case of theory reduction that does not rely on taking things apart into smaller things.