I know a few regular readers of this blog have views about the many-worlds interpretation of quantum mechanics. I want to ask a question about what the response is supposed to be to a basic worry about a whole family of approaches. (David Wallace's recent book would be an obvious case, but so would Sean Carroll's recent contributions.)

The many-worlds interpretation basically says that whenever you make a "measurement" in QM (say you have a particle that is spin up in the y direction and you measure spin in the x direction), the world continues to evolve according to the Schrödinger equation, and the only thing that makes it look like the measurement has a determinate outcome is that the world splits into two emergent worlds, with an emergent observer in each one. The trick in all of this, of course, is to somehow explain why there is probability when all of the outcomes occur. One problem I have with all of these attempts to get probability out of it is that they all go like this.
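The spin example can be put numerically (a toy sketch of my own in NumPy, not anything from the Everettian literature): a spin prepared as +1 along y, re-expressed in the x basis, decomposes into the two would-be branches, each carrying Born weight 1/2.

```python
import numpy as np

# Eigenstates of sigma_x, written in the standard z basis.
plus_x  = np.array([1, 1]) / np.sqrt(2)
minus_x = np.array([1, -1]) / np.sqrt(2)

# Eigenstate of sigma_y with eigenvalue +1, in the z basis.
plus_y = np.array([1, 1j]) / np.sqrt(2)

# Amplitudes of the two "branches" an x measurement would produce.
amp_up   = np.vdot(plus_x, plus_y)
amp_down = np.vdot(minus_x, plus_y)

# Both outcomes survive in the post-measurement state; the Born rule
# assigns each branch weight |amplitude|^2 = 1/2.
print(round(abs(amp_up) ** 2, 10), round(abs(amp_down) ** 2, 10))  # 0.5 0.5
```

Nothing here interprets those squared amplitudes as probabilities; that is exactly the step in dispute below.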

1. Assume decoherence gets you branches in some preferred basis.

2. Give an argument that the Born rule applied to the amplitudes of these branches yields something worthy of the name ‘probability.’

The problem is that these steps happen in the reverse of the order in which one would like them to happen.

Look at step one. Decoherence arguments involve two steps:

1.a) showing that as the system+detector gets entangled with the environment, the reduced density matrix of this entangled pair evolves such that all the off-diagonal elements get very close to zero,

and

1.b) reasoning that, therefore, each diagonal element corresponds to an emergent, causally inert "branch."

But step 1.b is fishy insofar as it happens before step 2. Who cares whether the little numbers on the off-diagonals are very close to zero until I know what their physical interpretation is? Not every very small number in physics can be interpreted as standing in front of something unimportant. Now, if we could accomplish step 2, then we could discard the off-diagonal elements, because we know that very small _probabilities_ are unimportant. But the cart has been put before the horse. We can't conclude that the "branches" are real and causally inert and have independent "observers" in them _until_ we have a physical interpretation of the off-diagonal elements being small. Yet all of these Everettian moves do 1.b first, and only afterwards do 2.

Now it's true that the fact that the off-diagonal elements are small tells us that the different branches don't interfere with each other *very much* in terms of their future evolution. I.e., I could evolve a branch forward in time, and the result is *almost* completely independent of the existence of the other branches. But the notions of "not very much" and "almost" here are still cashed out in terms of small, but physically uninterpreted, numbers.

I think what often drives the intuition that it is OK to interpret the small off-diagonal terms as telling you that the branches are independent is that we understand the off-diagonal terms as the "interference terms." But I think this still smuggles in a probabilistic notion. "Interference" is a probabilistic notion, one we get from, e.g., thinking about "how often" we expect interference to show up in the statistics.

OK. So, this worry is out there in the literature. What's the response?