Myth 2: Qubits Are Zero And One At The Same Time
“Well, some go this way, and some go that way. But as for me, myself, personally, I prefer the shortcut.” — Lewis Carroll
You probably think that a qubit can be 0 and a 1 at the same time. Or that quantum computing takes advantage of the strange ability of subatomic particles to exist in more than one state at any time. I can hardly fault you for that. After all, we expect places like the New York Times, Nature, Science, New Scientist, Time Magazine, and Scientific American, among many others, to be fairly reputable sources, right? Apparently not. Nearly every popular account of quantum computing has this “0 and 1 at the same time” metaphor.
I say metaphor because it is certainly not literally true that the things involved in quantum computing are 0 and 1 at the same time. Remember that computers don’t actually hold 0s and 1s in their memory. Those labels are just for our convenience. Each bit of a digital computer is a physical thing that exists in two easily distinguishable states. Since we use them to perform logic, we could also have labeled them “true” and “false.” Now it should be obvious — ”true and false at the same time” is just nonsense. (In formal logic, it is technically a false statement.)
The problem is even worse, though, because not only is “0 and 1 at the same time” a gross oversimplification, but it is also a very misleading analogy.
If 1 = 0, I’m the Pope
The Pope and I are clearly two people. But assume 1 = 0. Then 1 + 1 = 0 + 1, so 2 = 1. And since 2 = 1, the Pope and I are also one person. Therefore, I am the Pope.
You see where the problem is, right? From a false premise, any conclusion can be proven true. This little example was humorously pointed out by the famous logician Bertrand Russell, though he wasn’t talking about qubits. However, we can clearly see that starting with a blatantly false statement is going to get us nowhere.
Consider the following logic. First, if a qubit can be 0 and 1 at the same time, then two qubits can be 00 and 01 and 10 and 11 at the same time. Three qubits can be 000 and 001 and 010 and 011 and 100 and 101 and 110 and 111 at the same time. And… well, you get the picture. Like mold on that organic bread you bought, exponential growth!
The number of possible ways to arrange n bits is 2 to the power of n, or 2ⁿ, a potentially big number. If n is 300, then 2ⁿ is 2³⁰⁰, which is more than the number of atoms in the universe! Think about that. Flip a coin just 300 times, and the number of possible ways the coins could land is unfathomable. And 300 qubits could be all of them at the same time. If you believe that, then it is easy to believe that a quantum computer has exponential storage capacity and power. That would be magic. Alas, this is not how qubits work.
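That claim is easy to check for yourself. Here is a quick sketch in Python; the figure of roughly 10⁸⁰ atoms in the observable universe is a commonly cited estimate, not something the code derives.

```python
# Number of distinct configurations of n bits is 2**n.
n = 300
configs = 2 ** n

# A commonly cited estimate for atoms in the observable universe is ~10**80.
atoms_estimate = 10 ** 80

print(len(str(configs)))          # 91 -- a 91-digit number
print(configs > atoms_estimate)   # True
```

Python's integers have arbitrary precision, so 2**300 is computed exactly rather than overflowing.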
Clearly, we just need to get “0 and 1 at the same time” out of our heads.
Escalating quickly
Let’s skip ahead to the end just for a brief moment. I’m going to tell you what qubits actually are, if only so that I can say I never held anything back. If you take a formal university subject in quantum computing, you will learn that qubits are vectors in a complex linear space. That sounds complicated, but it’s just jargon. Vectors are lists of numbers, spaces are collections of vectors that are linear because you can add vectors together, and the word complex refers to numbers that use the square root of -1, which is also called an imaginary number.
Vector spaces are used everywhere, from finance to data science to computer graphics, and despite the term “imaginary” associated with the square root of -1, complex numbers have very real applications as well. Their beauty lies in their ability to simplify and streamline otherwise complicated mathematical problems across various disciplines, from fluid dynamics to electrical engineering. So, the tools themselves are not mysterious. The rules for how these things are used in quantum computing are not complicated either. However, this is where I will stop short of turning this into a math textbook because I’m sure you’re here to read words and sentences rather than symbols and equations.
In short, a qubit is represented not by “0 and 1 at the same time” but by two complex numbers. These numbers take on a continuum of values, so they are indeed much more versatile than the binary option afforded to a single bit. However, they do come with limitations that prevent them from being a computational panacea. If confusion sets in at any point, remember that qubits are lists of complex numbers, and there is a very solid mathematical framework for dealing with them.
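To make the “pair of complex numbers” idea concrete, here is a minimal sketch using Python’s built-in complex type. The variable names are mine, purely for illustration; this is not a standard quantum library API.

```python
# A qubit's state is an ordered pair of complex numbers.
zero = (1 + 0j, 0 + 0j)        # the pair written |0> in Dirac notation
one = (0 + 0j, 1 + 0j)         # the pair written |1>
psi = (0.6 + 0j, 0 + 0.8j)     # a generic state: any pair of complex numbers

# Python's complex type handles the square root of -1 natively.
i = complex(0, 1)
print(i * i)                   # (-1+0j)
```

Note that the two components take values from a continuum, unlike a bit’s two options, which is exactly the versatility described above.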
Putting the word quantum in front of everything
In the Marvel movie Ant-Man and The Wasp, the term “quantum” is bandied about so liberally that Paul Rudd (playing Ant-Man) nearly breaks the fourth wall to ask on behalf of the audience, “Do you guys just put the word quantum in front of everything?”
While this was meant to be a joke in the context of the movie’s dialogue, the answer in the real world is an emphatic yes. It’s an inside joke that to be a quantum information theorist amounts to opening a classic textbook on information theory and literally placing the word “quantum” in front of every definition and theorem. Though we have quantum information, quantum entropy, quantum channels, quantum coding, quantum this, and quantum that, we didn’t hold on to “quantum bits.” In fact, we did away with it rather quickly.
Qubits were introduced in a 1995 physics paper titled Quantum Coding by Benjamin Schumacher as follows.
“For our elementary coding system we choose the two-level spin system, which we will call a ‘quantum bit’ or qubit. The qubit will be our fundamental unit of quantum information, and all signals will be encoded into sequences of qubits.”
Boom. Qubits burst on the scene with authority! But wait…what’s this buried in the Acknowledgments section?
“The term ‘qubit’ was coined in jest during one of the author’s many intriguing and valuable conversations with W.K. Wootters, and became the initial impetus for this work.”
Ha! The lesson? Always see a joke through to the end.
Writing in qubits
Usually, you will see qubits “written” with a vertical bar, |, and a right angle bracket, ⟩, which come together to form something called a “ket.” There is always something “inside” the ket, which is just a label. Just as variables in mathematics are given letter symbols (“Let x be…” anyone?), an unspecified qubit is typically given the symbol |𝜓⟩. The notation, called Dirac notation, is not special among the various ways people denote vectors, but physicists have found it convenient. Since quantum computing was born out of this field, it has adopted this notation.
The other important thing to note is the use of the word state, which is confusingly overloaded in both physics and computation. The object |𝜓⟩ is often called the state of a qubit, or we say that the qubit is in the state |𝜓⟩. Sometimes |𝜓⟩ is taken to be equivalent to the physical device encoding the information. So, you’ll hear about the state of physical qubits, for example. This is more of a linguistic convenience than anything else. While there is nothing wrong with using this shorthand in principle, it is what leads to misconceptions, such as things being in two places at once, so caution is advised.
Imagine I hand you a USB stick with a message. It might be said that I’ve given you several bits — as if the information were a physical quantity and the USB stick is the bits containing my message. But, again, that’s just a convenient and economical way to speak about it. Really, I have given you a physical device that can represent a bunch of binary options. I encode my message into these options, and you decode the message by looking at it. The state of the USB stick can be described by my message, but it is not literally my message.
The same logic applies to qubits. I can encode qubits into physical devices. Out of convenience, we say those devices are qubits. It may then seem like the statement “the qubit is in the state |𝜓⟩” implies that |𝜓⟩ is a physical quantity. In reality, |𝜓⟩, which is quantum information, can only describe the state of the physical device — whatever current configuration it might be in. That configuration might be natural, or it might have been arranged intentionally, which is the process of encoding the information |𝜓⟩ into the physical device.
Much like the process of encoding bits into a USB stick is referred to as “writing,” encoding qubits into some quantum device is the writing of quantum data. In the old parlance of quantum physics, this is the same as “preparing” a quantum system, where |𝜓⟩ summarizes the repeatable laboratory procedure to bring a physical system into a particular configuration. In the past, this was for the purpose of experimentation. Now, it is done for the purpose of computation.
All this is to say that, within any specialized discipline, people are sloppy with their jargon. The trouble with quantum computing is that the sloppy jargon is the only thing that leaks out of a field that remains specialized. When these phrases are combined with our everyday conceptions of the world, we get weird myths.
Superposition
It’s been mentioned that a qubit is simply a pair of numbers. There are infinitely many pairs of numbers, but also some special ones. For example, the pair (1,0) and the pair (0,1) are pretty special. In fact, they are so special they are given their own symbols in Dirac notation: |0⟩ and |1⟩. Among the myriad of options, this choice was made to keep the connection to the bits 0 and 1 in mind.
The other pair of numbers that usually gets its own symbol is (1,1). The symbol for this pair is |+⟩, and it’s often called the “plus” state. Why plus? Ah, we’ve finally made it to superposition and the origin of “0 and 1 at the same time.” This is the only bit of math I’ll ask you to do. What is (1,0) + (0,1)? That’s right, it’s (1,1). Writing this with our symbols, |+⟩ = |0⟩ + |1⟩.
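The addition here is plain componentwise vector addition, which a few lines of Python make concrete (the helper name is mine, chosen for illustration):

```python
def add(a, b):
    """Componentwise addition of two pairs -- how kets add on paper."""
    return (a[0] + b[0], a[1] + b[1])

ket0 = (1, 0)   # |0>
ket1 = (0, 1)   # |1>
plus = add(ket0, ket1)
print(plus)     # (1, 1) -- the pair written |+>
```

Nothing quantum happened in that computation; it is the same arithmetic used for vectors in graphics or finance.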
The qubit is not in the |0⟩ state, nor is it in the |1⟩ state. Whatever we might do with this qubit seems to affect both |0⟩ and |1⟩ at the same time. So, it certainly does look like it is both 0 and 1. Of course, the reality is more subtle than that.
How would one physically encode the state |+⟩? Naively, the equation suggests first encoding 0, then encoding 1, and, finally, adding them together. That seems reasonable, but it’s not possible. There’s never any addition happening in the physical encoding or processing of qubits. The only reason it is written this way is out of convenience for scientists who want to write the states of qubits on paper. Taking a state |+⟩ and replacing it with |0⟩ + |1⟩ is an intermediate step that students learn to perform to assist in calculations done by hand. A quantum computer could not do this, nor would it need to. A quantum computer holds in its memory |+⟩, full stop.
Any pair of numbers that is not (1,0) or (0,1) can be written as a weighted sum of the two of them. Such states are called superposition states, and the nomenclature gets distorted into phrases like “the qubit is in superposition.” You can probably imagine, or have already seen, many misinterpretations of such a statement. The words tempt us to think that a qubit in superposition is in multiple states simultaneously. However, the simplest true interpretation, though not all that compelling, is just “a qubit in superposition is not one that is encoded as either |0⟩ or |1⟩.” This is exactly why I never get invited back on quantum hype podcasts…
At this point, you may be wondering why we use the labels 0 and 1 in the first place if they are so prone to confusion. There is a good reason for it, and it comes when we attempt to read qubits.
Reading quantum information
Sorry, but you can’t read quantum information.
In a world where classical physics reigns supreme, reading data is straightforward. If you’ve saved a document on your computer, when you open it later, you expect to find the same content. Moreover, reading the content amounts to directly perceiving the symbols encoding bits of information. It is so obvious and intuitive we barely give it a second thought. But when it comes to quantum data, things are much different.
In classical physics — and everyday life — measurement is the process of determining the value of pre-existing properties of things, like weight, dimensions, temperature, and so on. With a well-calibrated instrument, we can “read off” what was always there. We can encode information into these properties and, if all else remains the same, decode the information later.
Now, think about an atom for a moment. It’s tiny. Unimaginably tiny. There are mountains of irrefutable evidence that atoms are real, even though no one has ever seen an atom. What we see with our naked eyes is information on the displays of large instruments. But that information is classical, represented in the digital electronics of the device as bits. In short, any attempt to gain information from a quantum system results in bits, not qubits. We cannot simply “read off” the state of a qubit.
In physics, there is plenty more jargon surrounding this, including measurement, observables, and collapse — none of which is important for quantum computing. All we need to understand is that reading quantum data results in classical data — n qubits of information produce n bits of information when read.
As an example, take some qubit of information — some |𝜓⟩ from earlier. Suppose it is encoded into the energy levels of an atom. Any attempt to “read” the atom by, say, measuring the amount of energy it has will result in a binary outcome. (The atom will decay and give off a photon of light or not.) That’s one bit of information. Since |𝜓⟩ is specified by two continuously varying numbers, one bit is not nearly enough to resolve which pair it is. In other words, you can’t read quantum information.
Quantum measurement
In the previous example, an atom was imagined to encode a qubit of information in its energy state. When read, one of two outcomes will occur. If the atom is in the high energy state, it will release that energy as a photon. But now it has no energy, so it must be in the low energy state. While there are many clever ways to write and read qubits from physical systems, none of them can avoid this situation. Some of the verbs associated with a qubit that has been read are destroyed, deleted, collapsed, and other gruesome-sounding terms. A more straightforward way to say it is simply that the physical system no longer encodes the quantum data.
Going from the classical notion of measurement to the quantum one is a huge physical and philosophical leap and something scientists and philosophers still debate about. So, I’m not going to attempt to give a complete answer to why reading qubits works this way, but I’ll give you the gist of it. To “measure” even large classical systems is often invasive. Things like biopsies make that obvious. A less complicated example is tire pressure. Given a tire, we assume the air inside it has some pressure — some fixed value for that property of air, which is true of the air whether we attempt to measure it or not. However, actually measuring the air pressure requires opening the valve to move a needle on some gauge. By letting at least a little bit of air out, we’ve changed the value of the very thing we were attempting to measure.
You might intuit that the more you learn about a system, the more you change it. Quantum physics is what you get when you take that idea to the extreme. We can manipulate quantum objects without disturbing them, but then we would gain no information from them. We can eke out a small amount of information at the cost of a little disturbance, but extracting the most information possible necessitates maximum disruption. Imagine a tire pressure gauge that lets out all the air and only reports whether the tire was previously full or not. You learn a single bit of information and are always left with a flat tire. Though that’s a good analogy, I promise that quantum computers are more useful than it sounds.
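The all-or-nothing gauge can even be sketched as code. Everything below (the class name, the “full” threshold of 30 psi) is invented purely to illustrate the analogy:

```python
class DestructiveGauge:
    """A gauge that reports one bit -- 'was it full?' -- and flattens the tire."""

    def read(self, tire_pressure):
        was_full = tire_pressure > 30   # arbitrary 'full' threshold, in psi
        remaining = 0                   # all the air escapes during the reading
        return was_full, remaining

gauge = DestructiveGauge()
bit, pressure_after = gauge.read(32)
print(bit, pressure_after)              # True 0 -- one bit learned, tire now flat
```

However much pressure the tire started with, the reader walks away with a single bit and a flat tire, which mirrors reading a qubit: one classical bit out, quantum data gone.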
A game of chance
So far, we have that qubits encode quantum data but can only reveal a smaller amount of classical data. But there’s something that should be nagging at you — surely the outcome has to depend somehow on the quantum information |𝜓⟩. Otherwise, what’s the point? Indeed. However, it’s not the outcome itself that depends on |𝜓⟩, but the probability.
Quantum physics is not a deterministic theory. It gives us very accurate predictions of probabilities for the possible outcomes of experiments, but it does not tell us which outcome will happen on each occasion. That is, when we read a qubit, the classical bit we receive is random. Recall the plus state from before, |+⟩ = |0⟩ + |1⟩. When read, it will produce the bit 0 or the bit 1 with equal probability. You might call it a quantum coin — a perfectly unbiased random event. Indeed, this is the basis of commercially available QRNGs, or Quantum Random Number Generators.
It’s going to be our mantra by the end of this book, but a qubit is a pair of numbers. Let’s give the pair symbols (x,y). If either of the pair is zero, the result of reading the qubit will be deterministically 0 or 1. That is, reading the state |0⟩ results in 0, and reading the state |1⟩ results in 1. All other states have some unavoidable randomness. If x is larger than y, the outcome is biased toward 0. There’s a mathematically precise rule for this called the Born Rule in quantum mechanics, but it requires too much symbolic baggage to present here. Besides, you’ve got the general idea.
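For the curious, the Born Rule carries less baggage in code than in symbols. Under the book’s convention of an unnormalized pair (x, y), it says the probability of reading 0 is |x|² divided by |x|² + |y|². The rule itself is standard quantum mechanics; the function below is my own illustration, not a standard library call.

```python
import random

def read_qubit(x, y, rng=random):
    """Simulate reading a qubit (x, y): returns the bit 0 or 1.

    Born Rule: probability of 0 is |x|**2 / (|x|**2 + |y|**2).
    """
    p0 = abs(x) ** 2 / (abs(x) ** 2 + abs(y) ** 2)
    return 0 if rng.random() < p0 else 1

# Deterministic cases: |0> always reads 0, |1> always reads 1.
print(read_qubit(1, 0))   # 0
print(read_qubit(0, 1))   # 1

# The plus state (1, 1) behaves as an unbiased quantum coin.
rng = random.Random(42)
flips = [read_qubit(1, 1, rng) for _ in range(10_000)]
print(sum(flips) / len(flips))   # close to 0.5
```

Notice how the bias follows the pair: with (x, y) = (2, 1), the probability of reading 0 is 4/5, matching the statement that a larger x tilts the outcome toward 0.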
Why care about coin tosses?
If quantum computing is based on such uncertainties, how can it be useful? This is where the richness of quantum algorithms comes into play. Quantum algorithms are designed around the uncertainties associated with reading qubits. The task of an algorithm designer is to amplify the probabilities associated with correct solutions and minimize the probabilities of incorrect ones. So, even if individual qubit measurements are uncertain, quantum algorithms as a whole guide the computation toward a useful outcome. This is the topic of the next myth, so there’s more to come on algorithms.
To summarize, a qubit is not “0 and 1 at the same time” but rather is described by a pair of numbers representing its state. This mathematical framework is rich and allows for a wide array of manipulations beyond what can be done with classical bits. Writing and reading qubits involve encoding them into and decoding them from physical systems, but with significant differences compared to classical bits. Most notably, “reading” a qubit is a random event that yields classical information and destroys the quantum information. The qubit state influences the outcome’s probability but isn’t fully revealed in the process. The challenge, fascination, and potential power of quantum computing lie in navigating these intricacies to perform useful computations.
This was part of the following book, and it is free here on Medium!
Physical copies are available on Amazon: What You Shouldn’t Know About Quantum Computers.