Myth 3: Quantum Computers Try All Solutions At Once

Chris Ferrie
12 min read · May 27, 2024

“There are two things you should remember when dealing with parallel universes. One, they’re not really parallel, and two, they’re not really universes.” — Douglas Adams

The most fantastical, and hence the most popular, explanation of quantum computing is that it happens simultaneously in parallel worlds. While this is based on the speculations of Very Serious Scientists, it’s not realistic and leads to misconceptions such as the idea that a quantum computer tries all the solutions to a problem at the same time so it can instantly produce the answer.

The idea of parallel universes has long captivated human imagination. From science fiction novels to blockbuster movies, the thought of coexisting realities where alternative versions of ourselves live out different destinies is undeniably intriguing. So, when quantum computing — a field riddled with complexities and counterintuitive principles — emerged, it’s unsurprising that the allure of the parallel universe concept became a go-to explanation for many.

Within this captivating narrative, quantum computers were hailed as miraculous machines that could tap into these alternate realities. It was suggested that, perhaps, these computers operated simultaneously across numerous universes, hence their unparalleled speed and power. Such an idea is not only a testament to our propensity for wonder but also an indicator of how complex quantum physics is to the uninitiated.

Many worlds, one origin

The “quantum tries all solutions at once” myth derives from the Many-Worlds Interpretation (MWI) of quantum physics, which can be traced back to Hugh Everett III’s 1957 Ph.D. thesis, On the Foundations of Quantum Mechanics. However, it was mostly ignored until 1970, when Bryce DeWitt resurrected it in an article, Quantum Mechanics and Reality, appearing in Physics Today. Since then, a growing number of physicists have subscribed to the idea, many referring to themselves as Everettians.

Most popularizations of the MWI focus on the metaphors of a universe that “branches” into “parallel” worlds. This leads to all sorts of confusion. Not only can you waste your money on a Universe Splitter iPhone app (which definitely doesn’t split anything), but physicists even argue amongst themselves at the level of these metaphors. Let’s call this kind of stuff Metaphorical Many-Worlds and not discuss it further. Is there a better way to think about the Many-Worlds Interpretation than this? Yes — and the first thing we are going to do is stop calling it that. Everett’s core idea was the universal wave function. So what is that?

I briefly introduced the wave function as the symbol |𝜓⟩ in the previous myth. Quick recap: a wave function is a mathematical variable (like “x” from algebra class, but dealing with complex numbers) used to calculate what will be observed in experiments. In the context of quantum computers, it’s the quantum information (the qubits) encoded into some physical system. It’s all the information needed to predict what will happen, summarized in the most succinct way possible.

The Schrödinger equation dictates how the qubits, or the wave function, change in time, except, of course, when we attempt to read that information — recalling that reading qubits destroys the quantum data. In other words, there are two rules in quantum physics for how qubits change, and when to apply them is arbitrary and at the discretion of the user of the theory. This bothers all physicists to some extent but bothered Everett the most.

And one wave function to rule them all

The executive summary of Everett is this: quantum theory is consistent without any rule about reading qubits if we consider the quantum information that describes the entire universe — the universal wave function. This wave function evolves according to Schrödinger’s equation always and forever.

This is where all Everettians start. Popular science writer and fervent MWI supporter Sean Carroll calls it “Austere Quantum Mechanics” for its apparent beauty and simplicity. One state, one equation — all’s right with the world. There’s also just one problem — it doesn’t fit at all with our experience of reality. We don’t experience the world as quantum things — being in superposition and whatnot — we experience a definite classical world. We really do experience the effects of reading quantum data. So where does my experience of a qubit in superposition only revealing a single bit fit into the universal wave function?

Let’s go back to superposition. Remember that |0⟩ is a qubit of information in a definite state — reading it deterministically produces the bit 0. We might as well call it a “classical” state. The same goes for |1⟩. On the other hand, the state |0⟩ + |1⟩ is not a classical state as it cannot be encoded into something that only holds bits. If we expand on these descriptions to include more possibilities, the amount of information grows. Luckily, our notation remains succinct. Let’s say that |world 0⟩ is a definite classical state of the entire universe, as is |world 1⟩, and they differ only by one bit (the outcome of reading a single qubit).

In classical physics, “adding worlds” has no meaning, but in quantum physics, |world 0⟩ + |world 1⟩ is a perfectly valid state. If we imagine creating a definite classical state for each of the mutually exclusive events possible and then adding them all up, we will end up with one big superposition state — the universal wave function. Now, we don’t need to write that all down to interpret it — the simple two-state model suffices, so we’ll stick with that. And, it seems to say not that there is a single world that suddenly and randomly jumps upon reading a qubit to either |world 0⟩ or |world 1⟩, but two worlds that exist simultaneously.

Quantum Dad

David Deutsch is often referred to as the "father of quantum computing." As noted in the brief history presented in the introductory chapter, Deutsch conceived of a model of computation called a universal quantum computer in 1985. Deutsch's motivation was to find "proof" that the MWI is correct. He is clearly a proponent of the MWI, and he has speculated exactly what we are referring to here as a myth. In his view, when a quantum computer is in a superposition of states, each component state corresponds to a "world" in which that particular computational path is being explored. He dubbed this "quantum parallelism" and suggested that the physical realization of a quantum computer would be definitive experimental evidence of the MWI.

Here’s the basic idea in language we have already introduced: if something acts on the superposition state |world 0⟩ + |world 1⟩ as a whole, it seems to act simultaneously in both worlds. In his seminal paper, Deutsch detailed a small example (now called Deutsch’s algorithm) that makes this logic more concrete.

The first thing to note is that algorithms of any type are recipes that solve all instances of a more generic problem. Recall long division — it was not a sequence of steps that worked for only one problem, but any division problem. To describe Deutsch’s algorithm then requires that we understand the problem it is meant to solve.

Consider a simple one-bit computer that accepts a single bit as input and produces a single bit as output. There are not many programs we can run on such a computer. The program could do nothing and return the same bit it received. Or, it could switch the value of the bit (0 becomes 1 and 1 becomes 0). It might also ignore the input bit entirely. It could produce 0 no matter what the input was, or it could produce 1 no matter what the input was. And that’s it. Those four options are the only possible ways to manipulate a single bit. We can split these four programs into two categories: the pair whose outputs added together are odd, and the pair whose outputs added together are even. Given a program, Deutsch’s algorithm tells you which category it belongs to.
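The four programs and the two categories are easy to enumerate. Here is a minimal Python sketch (the names are just illustrative labels); note that deciding the category classically requires both queries, f(0) and f(1):

```python
# The only four programs a one-bit computer can run.
programs = {
    "identity": lambda b: b,      # return the input bit unchanged
    "negation": lambda b: 1 - b,  # flip the input bit
    "always 0": lambda b: 0,      # ignore the input, output 0
    "always 1": lambda b: 1,      # ignore the input, output 1
}

# Classically, deciding the category takes TWO runs: one per input bit.
def category(f):
    return "odd" if (f(0) + f(1)) % 2 == 1 else "even"

for name, f in programs.items():
    print(f"{name}: outputs sum to an {category(f)} number")
```

Running this prints "odd" for the identity and negation programs and "even" for the two constant ones — the two categories Deutsch's algorithm distinguishes.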

If you were given a digital computer running an unknown program, you would expect to need to use it twice — once for each possible input bit — and add the outputs together. However, Deutsch showed that if you could input a qubit into the computer, and that qubit was in a superposition state, you only need to run the program once. He later showed, with Richard Jozsa in 1992, that the same is true no matter how many input bits there are. In other words, this is a problem that requires exponentially many uses of a digital computer to solve, but only a single use of a quantum computer. It seems the quantum computer has run the program on all inputs simultaneously.
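To make this concrete, here is a minimal simulation of Deutsch's algorithm in plain Python — a sketch, not production code. The two-qubit state is just a list of four amplitudes, and the gate sequence follows the standard textbook circuit:

```python
from math import sqrt

def hadamard(amps, mask):
    """Apply a Hadamard gate to one qubit of a two-qubit state.
    The state is a list of 4 amplitudes indexed by the bits (x, y) as 2*x + y;
    `mask` selects the qubit (2 = input qubit x, 1 = output qubit y)."""
    out = amps[:]
    for i in range(4):
        if i & mask == 0:
            j = i | mask
            out[i] = (amps[i] + amps[j]) / sqrt(2)
            out[j] = (amps[i] - amps[j]) / sqrt(2)
    return out

def oracle(amps, f):
    """One run of the program f: |x, y> -> |x, y XOR f(x)>."""
    out = [0.0] * 4
    for i in range(4):
        x, y = i >> 1, i & 1
        out[(x << 1) | (y ^ f(x))] = amps[i]
    return out

def deutsch(f):
    amps = [0.0, 1.0, 0.0, 0.0]        # start in the definite state |0>|1>
    amps = hadamard(amps, 2)           # put the input qubit in superposition
    amps = hadamard(amps, 1)           # and the output qubit too
    amps = oracle(amps, f)             # run the program ONCE
    amps = hadamard(amps, 2)           # interfere the input qubit
    prob_one = amps[2] ** 2 + amps[3] ** 2  # chance of reading 1 on the input qubit
    return "odd" if prob_one > 0.5 else "even"

print(deutsch(lambda x: 0))   # a constant program: prints "even"
print(deutsch(lambda x: x))   # the identity program: prints "odd"
```

The single call to `oracle` suffices because the relative sign between the two branches of the superposition records whether f(0) and f(1) agree, and the final Hadamard converts that sign into a bit you can read.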

Deutsch then asked if all of that computation is not done in parallel universes, where could it possibly happen?

It happens here

The key to Deutsch’s claim is a mismatch in resources. It doesn’t take a very big problem before the number of possible solutions outstrips the total number of things in the entire (single) universe we could use to encode bits. Therefore, the argument goes, the quantum computer must be using resources in other universes.

The problem with this logic is that it discounts quantum information altogether. Sure, it takes exponentially many bits to encode the same number of qubits, but we also have the qubits here, in this single world. Of all people, those who envision a cosmos of countless classical universes seem to lack the imagination to picture a single universe made of quantum stuff instead of classical stuff.

Any argument that quantum computers access parallel worlds or more than three spatial dimensions relies on circular logic that presupposes the objective reality of each component of a superposition state. In other words, they use the MWI to prove the MWI.

Naive quantum parallelism

Now, even if you still want to believe the MWI to be the one true interpretation of quantum physics, its implications for quantum computing are just not useful. In fact, they appear to be dangerous. Computer scientist Scott Aaronson, probably the most famous popularizer (and tied for the most curmudgeonly), bangs on this drum (out of necessity) in every forum he’s invited to, and he appears to be as sympathetic to MWI as you can get without officially endorsing it.

The most obvious logical step from quantum parallelism is that quantum computers try every solution to a problem simultaneously. This runs into two major problems, as Aaronson points out. First, it implies that quantum computers can efficiently solve some of the hardest problems we know of (a famous example being the Traveling Salesman Problem). However, Grover’s algorithm (which we will see later) is the only known quantum algorithm that can be applied generically to such problems, and it provably has only a modest advantage. Unless some new quantum algorithm appears that would upend our understanding of computer science, physics, and philosophy, quantum computers will not be able to solve such problems efficiently.

The second issue with naive quantum parallelism is that supposing a quantum computer could access alternative universes, it seems to do so in the most useless way possible. Rather than performing exponentially many computations in parallel and combining the results, it simply returns a random answer from one of the universes. Of course, actual quantum algorithms don’t work that way either. Instead, algorithms manipulate the coefficients of superpositions (whether there is a “plus” or “minus” between |0⟩ and |1⟩) so that “correct” answers are returned when the quantum data is read. Crucially, quantum computers can only do this for very specific problems, suggesting that the power of quantum data is not access to parallel worlds but simply a matching of a problem’s structure to the mathematics of quantum physics.

Einstein to the rescue

Contrary to popular belief, Einsteinian relativity does not render Newtonian gravity obsolete. In fact, we probably use and refer to Newton’s idea more now than we did before general relativity came along, though that is mostly just because there are more scientists and engineers trying to launch things around than there were a century ago. Occasionally, we even appeal to Newtonian gravity to explain or interpret general relativity. Consider the common technique of placing a bowling ball on stretched fabric to simulate the warping of spacetime. As the ball pulls the fabric down, we can appeal to our intuition from Newtonian gravity to predict what will happen to smaller balls placed on the fabric. This is analogous to when we try to interpret quantum computers through the lens of classical computers — we are explaining the new idea in the context of the older ones. However, we can also explain older ideas through the lens of newer ones, which often have much more explanatory power.

Suppose you’ve just had a riveting lecture on general relativity — which ought to have blown your mind and upended your conception of reality — and are now wondering how the old ideas of Newton fit into the new picture. Einstein asked us to imagine being in a rocket ship in the dead of space, with no planets or stars nearby. Of course, you would be floating inside your rocket ship, feeling weightless. Now, imagine the rocket ship started accelerating forward at a constant rate. The ship would move forward, but you would remain still until the floor of the ship reached you. At that point, the floor would provide a constant force pushing on you. You could “stand up” and walk around on the floor, which now gives you the sensation of weight. In fact, you have no way of knowing whether the rocket ship is accelerating in empty space or is simply standing, completely still, on Earth. The “feeling” of gravity is just that, a feeling. In other words, gravity is a “fictitious” force like the “centrifugal force” keeping water in a bucket swung around fast enough. Once you take examples like this on board, you tend to understand both Einsteinian and Newtonian gravity better. Can we do the same for quantum computers?

A quick recap. A qubit is two complex numbers — one associated with 0 and the other with 1. Two qubits are represented as four complex numbers associated with 00, 01, 10, and 11, and so on it goes. A large number of qubits is an exponentially large list of complex numbers, each one associated with a possible ordering of bit values. Ten qubits have 2¹⁰, or 1024, complex numbers, one of which is associated with 0000000000 and another with 1001011001, and so on. A quantum computation manipulates these numbers by multiplying and adding them with other numbers. One classical interpretation is that the quantum computer is doing classical computation on each of these bit values simultaneously. That’s simple and neat but, as noted above, quickly leads to misconceptions. Let’s instead take this quantum description for granted and ask how classical computers fit in.
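The bookkeeping is easy to see in code. Here is a sketch of a ten-qubit state as a plain Python list, where each index plays the role of one bit string:

```python
n = 10
amps = [0j] * (2 ** n)       # 2**10 = 1024 complex amplitudes
amps[0] = 1 + 0j             # all the weight on the bit string 0000000000

# The amplitude paired with the bit string 1001011001 lives at one index:
idx = int("1001011001", 2)   # read the bit string as a binary number: 601
print(idx, amps[idx])        # prints: 601 0j -- that outcome has no weight yet
```

A quantum computation, in this picture, is just a rule for mixing all 1024 of these numbers into new ones.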

An interesting generalization of digital computers includes randomness, which you can imagine comes about by flipping coins to decide the input of the program. How these “probabilistic bits” are described is actually remarkably similar to qubits. One probabilistic bit is two numbers — the probability of 0 and the probability of 1. Two probabilistic bits are four numbers representing the probability of 00, 01, 10, and 11. Just like with qubits, ten probabilistic bits have 2¹⁰, or 1024, probabilities, one of which is associated with 0000000000 and another with 1001011001, and so on. The situation is nearly identical, except for the fact that instead of complex numbers, the probabilities are always positive. Now, suppose I have a probabilistic computer that simulates the flipping of ten coins. It manipulates numbers for each of the 1024 possible sequences of heads and tails just like a quantum computer would. So, does it calculate those probabilities in parallel universes? No, obviously not. But clearly, there must be a difference between the two computers.
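The probabilistic bookkeeping looks almost identical, just with positive numbers that sum to one. A sketch for ten fair coin flips:

```python
n = 10
# Ten fair coins: 2**10 = 1024 probabilities, one per heads/tails sequence.
probs = [1 / 2 ** n] * (2 ** n)

idx = int("1001011001", 2)        # the sequence 1001011001 (reading 1 as heads)
print(probs[idx])                 # 1/1024 = 0.0009765625, like every sequence

assert abs(sum(probs) - 1) < 1e-9   # a valid probability distribution
assert all(p >= 0 for p in probs)   # and, unlike amplitudes, never negative
```

Nobody is tempted to say this list of 1024 numbers lives in 1024 parallel universes.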

When you have a list of probabilities representing bits of information, and you change those bits of information — by processing them in a computer, say — then the list of probabilities obviously changes. In general, the new list is a mixed-up version of the old list obtained by multiplying and adding the original numbers together. But here’s the thing: if all those numbers are positive, they can never cancel each other out. Multiplying and adding positive numbers always results in positive numbers. Meanwhile, with qubits, the list can change in drastically different ways because adding negative numbers to positive numbers can lead to cancellation. In other words, a classical computation is just a quantum computation restricted to positive numbers that add up to one.
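A tiny experiment makes the cancellation point vivid. Applying a Hadamard gate twice returns a qubit to its starting state, because the negative amplitude cancels a positive one; the closest classical analogue, a 50/50 randomizing step, can never undo itself because its entries stay positive. (A sketch; the “fair mix” step is just an illustrative stochastic update.)

```python
from math import sqrt

# Quantum: a qubit as two amplitudes [amp_0, amp_1].
def hadamard(a):
    return [(a[0] + a[1]) / sqrt(2), (a[0] - a[1]) / sqrt(2)]

q = [1.0, 0.0]        # the definite state |0>
q = hadamard(q)       # superposition: [0.707..., 0.707...]
q = hadamard(q)       # the amplitudes CANCEL: back to [1.0, 0.0]

# Classical: a probabilistic bit as two probabilities [p_0, p_1].
def fair_mix(p):      # a 50/50 randomizing step; entries stay positive
    return [(p[0] + p[1]) / 2, (p[0] + p[1]) / 2]

c = [1.0, 0.0]        # the definite bit 0
c = fair_mix(c)       # [0.5, 0.5]
c = fair_mix(c)       # still [0.5, 0.5] -- positive numbers never cancel
```

The qubit ends up exactly where it started; the probabilistic bit is stuck at 50/50 forever. That ability to cancel is the whole difference.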

Rather than picturing quantum computers as some exotic new addition to our classical world, let’s turn things around. Imagine a world fundamentally quantum, where the large objects we’re familiar with are strangely constrained. They can only perform a specific kind of computation without the ability to cancel out possibilities the way full-fledged quantum systems can.

Deflating the multiverse

Quantum computations happen in this universe, not the multiverse. But the media, always on the hunt for tantalizing stories, grabbed onto this narrative, creating a feedback loop. The more the idea was mentioned, the more ingrained it became in public consciousness. Over time, the concept of many worlds became intertwined with quantum computing in popular discourse, leading to the prevalent yet misconstrued belief that quantum computers work by operating simultaneously across parallel universes.

The portrayal of quantum computing as a magical tool that can solve all problems by computing in multiple universes can lead to misunderstandings and inflated expectations. It’s crucial to separate the fascinating yet speculative ideas about the nature of reality from the actual, proven capabilities of quantum computers. While Deutsch was inspired by the MWI and sees quantum computers as evidence for it, the actual operation and utility of quantum computers don’t require MWI to be true. In other words, quantum computers work based on the principles of quantum mechanics, and their functionality is independent of the philosophical interpretation of those principles.



Chris Ferrie

Quantum theorist by day, father by night. Occasionally moonlighting as an author.