Whence Quantum Computing?

Chris Ferrie
May 27, 2024

“History is not the past but a map of the past, drawn from a particular point of view, to be useful to the modern traveller.” — Henry Glassie

Almost any discussion of quantum computing — whether in a research article, popular science magazine, or business journal — invokes some comparison to what is called “classical” computing. It’s nearly impossible to talk about quantum computing without using the phrase, so we better start there.

The term classical in classical computing is borrowed from conventions in physics. In physics, we often denote pre-1900 physics as “classical” and post-1900 physics as “modern.” Modern physics includes general relativity and quantum physics. General relativity is Einstein’s theory of curved space and time, which explains the force of gravity. This theory, while instrumental in enabling us to comprehend awe-inspiring images of galaxies, has its most direct technological application in GPS, the Global Positioning System crucial for satellite navigation. Yet, even this remarkable application is not standalone — it also requires quantum technology for accurate functioning.

Quantum computing theory might have some things to say about places of extreme gravity, like black holes, but we will stay grounded on Earth and won’t say more about general relativity in this book. The reason is that, in contrast to general relativity, quantum physics has a broader range of applications that span diverse fields. It is the backbone of lasers and light bulbs, the lifeblood of medical scanners and radiation therapy, and the cornerstone of semiconductors and electron microscopes. It governs the precision of atomic clocks and the power of atomic bombs, among many others. And it is the fuel for quantum computers.

The potency of quantum physics and its technological implications stem from exploring a new world — the microscopic world of atoms. By uniting three of the four fundamental forces of nature, quantum physics presents us with rules governing the basic constituents of matter and energy. The fourth force, gravity, while undeniably significant, is relatively weak — evident when we effortlessly overcome Earth’s gravitational pull each time we stand up. The paradigm shift from classical to quantum physics propelled humanity from the era of steam engines and machinery to the digital, information-driven age. Now, it is driving us into the quantum age.

In the remainder of this post, I want to give you a bit of context about how and why quantum computing came to be. Until recently, quantum and computing were mostly separate fields of study, although they overlap significantly in their engineering applications. So, we’ll go over the “quantum” and “computing” parts first before bringing them together.

The “quantum” in quantum computing

The story of quantum physics started in 1900 with Max Planck's "quantum hypothesis." Planck introduced the idea that energy is quantized, meaning it comes in discrete packets, which came to be called quanta. The problem Planck was studying was dubbed the black body radiation problem. A black body is an idealization of an object that absorbs all radiation falling on it and re-emits radiation based only on its temperature. It's a pretty good approximation for hot things in thermal equilibrium, like the Sun or a glowing iron pot, which give off roughly the same spectrum (colors) when at the same temperature. In other words, if you heat something up so that it glows "white hot," it will be at roughly the same temperature as the surface of the Sun (about 5500 ℃). The problem was that no one could explain the shape of this spectrum using the physics of the time (now called "classical" physics).

As Planck was doing his calculations, he discovered that if energy has some smallest unit, the formulas worked out perfectly. This was a wild guess he made in desperation, not something that could have been intuited or inferred. But it deviated from classical physics at the most fundamental level, seemingly rendering centuries of development useless, so even Planck himself didn’t take it seriously at first. Indeed, it took many years before others began to pay attention and many more after that before the consequences were fully appreciated. Perhaps not surprisingly, one of those who did grasp the importance early on was Albert Einstein.

Among his illustrious list of contributions to science, Einstein used Planck’s quantum hypothesis to solve the problem of the photoelectric effect. The photoelectric effect is a phenomenon in which electrons are emitted from a material when exposed to light of a certain frequency. Classical physics could not explain why the energy of these ejected electrons depended on the frequency (color) of the light rather than its intensity (brightness) or why there was a cut-off frequency below which no electrons were emitted, regardless of the intensity.

Einstein proposed that light itself is quantized into packets of energy, which later came to be called photons. Each photon has energy proportional to its frequency, in accordance with Planck’s quantum hypothesis. When a photon hits an electron, if its energy (determined by its frequency) is sufficient, it can overcome the energy binding the electron to the material and eject it. Quantization had taken hold. This marked a profound transition in physics, which is still the source of confusion and debate today. Before photons, light was argued to be either a wave or a particle. Now, it seems that it is both — or neither. This came to be known as wave-particle duality.
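
For the quantitatively inclined, the whole explanation fits in two short formulas (the standard textbook form, not Einstein's original notation). Here E is the energy of a single photon, h is Planck's constant, f is the light's frequency, and W is the "work function," the energy binding an electron to the material:

```latex
E = h f, \qquad K_{\max} = h f - W
```

Electrons are ejected only when hf exceeds W, which is exactly the observed cut-off frequency, and their maximum kinetic energy K_max depends on frequency, not brightness.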

Until this point, quantum theory seemed to apply only to light. However, in parallel with these developments, experiments were starting to probe the internal structure of atoms in ever more detail. One curiosity was spectral lines: when the light emitted or absorbed by the purest forms of elements (hydrogen, helium, lithium, and so on) was observed, it showed sharp lines or gaps at specific colors. Empirical formulas existed for these patterns, but none could be derived from classical physics or any underlying mechanism until Niels Bohr paid a visit to England in 1911 to discuss the matter with the leading atomic physicists of the day.

Bohr’s model proposed that electrons in an atom could only occupy certain discrete energy levels, consistent with Planck’s quantum hypothesis. When an electron jumps from a higher energy level to a lower one, it emits a photon of light with a frequency that corresponds to the difference in energy between the two levels. This model provided a way to calculate the energies of these spectral lines and brought the atom into the realm of quantum physics. He presented the model formally in 1913.
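
To make "corresponds to the difference in energy" concrete, here is the standard textbook form of Bohr's rule for hydrogen, the simplest atom. The level n = 1, 2, 3, … labels the allowed orbits, E_n is the energy of level n (13.6 eV is the measured ionization energy of hydrogen), and f is the frequency of the emitted photon:

```latex
E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad h f = E_{n_{\text{high}}} - E_{n_{\text{low}}}
```

Plugging in pairs of levels reproduces the observed spectral lines of hydrogen remarkably well.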

A decade later, Louis de Broglie proposed in his Ph.D. thesis that not just light but all particles, including electrons, have both particle and wave characteristics. This was a revolutionary idea that was quickly put to the test and verified. At this point, it was clear that quantum theory was more than a technique that could be applied as corrections when classical physics didn’t fit the data. Scientists started to theorize abstractly using concepts of quantum physics rather than turning to it only as a last resort.

In 1925, Werner Heisenberg invented matrix mechanics to deal with the calculations underpinning the theory. Using it, he later showed that it is impossible to measure both the position and momentum of a particle simultaneously with perfect accuracy. This uncertainty principle became a fundamental and notorious aspect of quantum theory. At the same time, Erwin Schrödinger developed a mathematical equation to describe how de Broglie's waves change in time. The variable in the equation — called the wave function — encodes, through its squared magnitude, the probability of finding a particle in a given location or in a particular state.
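
In modern notation (not Heisenberg's or Schrödinger's original symbols), the two results of this paragraph are usually written as follows, where Δx and Δp are the uncertainties in position and momentum, ℏ is Planck's constant divided by 2π, ψ is the wave function, and Ĥ is the operator encoding the system's energy:

```latex
\Delta x \, \Delta p \ge \frac{\hbar}{2}, \qquad i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\,\psi
```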

In contrast to matrix mechanics, Schrödinger's picture was called wave mechanics. There was a brief debate about which of the two alternatives was correct. However, Paul Dirac showed that both are equivalent by axiomatizing the theory, that is, by finding a simple set of principles from which everything known about it could be derived. He also combined quantum mechanics with special relativity, leading to the prediction of antimatter and paving the way for quantum field theory. That framework ultimately grew into the so-called Standard Model of Particle Physics, which accounted for the seemingly unending stream of particle discoveries in high-energy accelerator experiments.

However, by the middle of the 20th century, quantum physics was more or less a settled science. The applications of the theory far outstripped the discoveries, and the majority of the people using it resembled engineers more than physicists. We'll get to those applications shortly, but first, we need to jump ahead before jumping back in time. During the 1960s and '70s, engineers in the laboratories of information technology companies like Bell and IBM began to worry about the limits of the new communications and computing devices being built. These things require energy to function, and energy is the primary concern of physics. Did the laws of physics have anything to say about this? And, if they did, wouldn't the most fundamental laws (quantum laws) have the most to say? Indeed, they did, and this is where the two theoretical fields of physics and computing converged. But to appreciate it, we must tell the other half of the quantum computing backstory.

The “computing” in quantum computing

Often, in quantum physics, it is useful to compare the results of some theory or experiment to what classical physics would predict, so there is a long tradition of quantum versus classical comparisons. This tradition was carried over to computing first not by computer scientists or engineers but by the quantum physicists who started to study quantum computing in the '80s. Unlike in physics, the adjective classical in classical computation does not mean it is pre-modern — it is just a way to distinguish it from quantum computation. In other words, whatever quantum computing was going to be compared to, it was bound to be referred to as "classical."

The device I am using to create this is anything but classical in the usual sense of the term. A modern laptop computer is a marvel of engineering, and digital computing would not be possible without quantum engineering. The components inside your computing devices, like your smartphone, are small enough that quantum mechanics plays an important role in how they were conceived, engineered, and function. What makes my laptop “classical” is not the hardware but the software. What the device does at the most abstract level — that is, compute — can be thought of in classical physics terms. Indeed, anything your smartphone can do, a large enough system of levers and pulleys can do. It’s just that your computer can do it much faster and more reliably. A quantum computer, on the other hand, will function differently at both the device level and at the abstract level.

Abstract computational theory, now referred to as theoretical computer science, was born in the 1930s. British World War II codebreaker Alan Turing devised a theoretical model of a computer now known as a Turing machine. This simple, abstract machine was intended to encapsulate the generic concept of computation. He considered a machine that operates on an infinitely long tape, reading, writing, or moving based on a set of predetermined rules. Remarkably, with this simple model, Turing proved that certain problems couldn’t be solved computationally at all.
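To make the "tape plus rules" picture concrete, here is a minimal sketch of a Turing machine simulator in Python. The rule table and the little machine it describes (one that flips every bit on the tape and then halts) are invented for illustration; nothing here comes from Turing's paper itself.

```python
# A minimal Turing machine simulator: a tape, a read/write head, a state, and a rule table.
# Rules map (current state, symbol read) -> (symbol to write, head move, next state).

def run_turing_machine(rules, tape, state="start", halt_state="halt", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = tape.get(head, "_")          # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example rule table (hypothetical): flip 0s and 1s until a blank is reached, then halt.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_rules, "0110"))  # prints "1001_"
```

Everything a modern computer does can, in principle, be rephrased as a (vastly larger) rule table of exactly this kind.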

Together, Turing and his doctoral supervisor, Alonzo Church, arrived at the Church-Turing thesis, which states that everything computable can be computed by a Turing machine. Essentially, anything that computes can be simulated by Turing’s theoretical device. (Imagine a modern device that emulates within it a much older device, like a video game console, and you have the right picture.) A more modern version relating to physics states that all physical processes can be simulated by a Turing machine.

In 1945, John von Neumann expanded on Turing’s work and proposed the architecture that most computers follow today. Known as the von Neumann architecture, this structure divides the computer into a central processing unit (CPU), memory, input devices, and output devices. Around the same time, Claude Shannon was thinking about the transmission of information. In work that birthed the field of information theory, Shannon introduced the concept of the “bit,” short for binary digit, as the fundamental unit of information. A bit is a variable that can take on one of two values, often represented as 0 and 1, which we will meet again and again in this book.

In his work, Shannon connected the idea of information with uncertainty. When a message reduces uncertainty, it carries information. The more uncertainty a message can eliminate, the more information it contains. Shannon formulated this concept mathematically and developed measures for information, realizing that all types of data — numbers, letters, images, sounds — could be represented with bits, opening the door to the digital revolution. Nowadays, the concept of a bit underlies all of digital computing. Modern computers, for example, represent and process information in groups of bits.
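As a small illustration of Shannon's measure (the example is mine, not Shannon's), the average information carried by a source, in bits, can be computed directly from the probabilities of its messages. A fair coin carries exactly one bit per flip; a heavily biased coin carries far less, because its outcome removes far less uncertainty.

```python
import math

def shannon_entropy(probabilities):
    """Average information per message, in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))    # fair coin: 1.0 bit per flip
print(shannon_entropy([0.99, 0.01]))  # heavily biased coin: about 0.08 bits per flip
```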

Fast forward to the '70s, and the field of computer science was in full swing. Researchers devised more nuanced notions of computation, including the extension of what was "computable" to what was computable efficiently. Efficiency has a technical definition, which roughly means that the time to solve the problem does not "blow up" as the problem size increases. A keyword here is exponential. If the amount of time required to solve a problem compounds like investment interest or bacterial growth, we say the method is inefficient: the problem is still computable, but solving it that way is highly impractical. The connection to physics emerged through what was eventually called the Extended Church-Turing Thesis, which states not merely that a Turing machine can simulate any physical process but that it can do so efficiently.
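
The gap between efficient and inefficient is easy to feel with a few made-up numbers. Suppose one algorithm takes n² steps on a problem of size n and another takes 2ⁿ steps; the sketch below (illustrative only) shows how quickly the exponential one becomes hopeless.

```python
# Polynomial (n**2) vs. exponential (2**n) step counts for growing problem sizes.
for n in (10, 30, 50, 70):
    print(f"n={n:3d}  polynomial: {n**2:>6,} steps   exponential: {2**n:,} steps")

# Even at a billion steps per second, 2**70 steps would take tens of thousands of years,
# while 70**2 = 4,900 steps is effectively instantaneous.
```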

Today, all computers have roughly the same architecture. A single computer can do any computational task. Some problems are hard to solve, of course. But if you can prove it is hard to solve on one computer, then you know there is no point in trying to design a new kind of computer. Why? Because the first computer can efficiently simulate the new one! This is what the Extended Church-Turing Thesis says, and it seems like common sense to the modern digital citizen. But is it true? Maybe not. When we think about simulating a physical process based on quantum physics, it appears that a digital computer cannot do this efficiently. Enter quantum computing.

A brief history of quantum computing

Quantum computing was first suggested by Paul Benioff in 1980. He and others were motivated by the aforementioned interest in the physical limitations of computation. It became clear that the mathematical models of computation did not account for the laws of quantum physics. Once this was understood, the obvious next step was to consider a fully quantum mechanical model of a Turing machine. In parallel, Richard Feynman lamented that it was difficult to simulate quantum physics on computers and mused that a computer built on the principles of quantum physics might fare better. Around the same time, in the Soviet Union, Yuri Manin also hinted at the possibility of quantum computers, noting both the exponentially large state spaces they would have access to and the difficulty of simulating such spaces on ordinary computers. However, the idea remained somewhat nebulous for a few years.

In 1985, David Deutsch proposed the universal quantum computer, a model of quantum computation able to simulate any other quantum system. This quantum Turing machine paralleled Turing's original theoretical model of computation based on digital information, and it is essentially equivalent to the model we work with today. However, it still wasn't clear at the time whether there was any advantage to doing this. Finally, in 1992, Deutsch and Richard Jozsa gave the first quantum algorithm providing a provable speed-up: a quantum computer can solve in one run of the algorithm what might take a conventional computer a number of steps that grows with the problem size. It's an admittedly contrived problem, but it is quite illustrative, and the so-called Deutsch-Jozsa algorithm will be discussed later.
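
To give a flavor of what "one run of the algorithm" means, here is a toy simulation of the single-bit version of the problem (often called Deutsch's problem) using nothing but matrix multiplication in NumPy. The function names and layout are my own sketch; the point is that one call to the unknown function f, wrapped in the right interference pattern, reveals whether f is constant or balanced, whereas a conventional program would have to call f twice.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def deutsch(f):
    """One-query test of whether f: {0,1} -> {0,1} is constant or balanced."""
    # Two qubits |x>|y>, basis index = 2*x + y; start in |0>|1>.
    state = np.zeros(4)
    state[1] = 1.0

    # Oracle U_f: |x, y> -> |x, y XOR f(x)>, built as a permutation matrix.
    U_f = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U_f[2 * x + (y ^ f(x)), 2 * x + y] = 1.0

    state = np.kron(H, H) @ state      # put both qubits in superposition
    state = U_f @ state                # a single call to the oracle
    state = np.kron(H, I) @ state      # interfere the first qubit

    prob_x1 = state[2] ** 2 + state[3] ** 2   # probability of measuring x = 1
    return "balanced" if prob_x1 > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant -> "constant"
print(deutsch(lambda x: x))      # balanced -> "balanced"
print(deutsch(lambda x: 1 - x))  # balanced -> "balanced"
```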

One of the concerns at the time was how robust a quantum computer would be to noise, and, indeed, any small amount of error in the algorithm destroys the computation. Of course, we could redefine the problem to be an approximate one, meaning the problem specification allows a small probability of error. The Deutsch-Jozsa algorithm still solves this version (it solves the problem perfectly, so also approximately). However, a digital computer can now easily solve the approximate version as well, so the "quantum speed-up" disappears.

Ethan Bernstein and Umesh Vazirani modified the problem in 1993 to one where small errors were allowed. They also devised a quantum algorithm that solved the problem in a single step, while any classical algorithm required many steps. In 1994, Dan Simon devised another problem along the same lines. However, this time, the quantum algorithm used to solve it provided a provable exponential speed-up. That is, as the input size of the problem grows, the best classical algorithm requires a number of steps growing exponentially, while the quantum algorithm requires, at most, a linear number of steps.

Simon's algorithm was the inspiration for perhaps the most famous quantum algorithm: Shor's algorithm. In 1994, Peter Shor presented a quantum algorithm that can factor large numbers exponentially faster than the best-known classical algorithm. Since widely used public-key cryptography rests on the assumed hardness of factoring and related problems, Shor's result sparked massive interest in quantum computing. The race was on.

Detractors quickly dug in their heels. The most common criticism was the incredible fragility of quantum information. It was (and still occasionally is) argued that a quantum computer would have to be so well isolated from its environment as to make it practically infeasible. Classically, error correction employs redundancy — do the same thing many times, and if an error happens on one of them, the majority still tells the correct answer. However, as we will see later, quantum data cannot be copied, so it would seem error correction is not possible.
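
Here is the classical redundancy idea in a few lines of Python (my own toy example): copy the bit three times, let noise flip each copy independently with some small probability, and take a majority vote. Two or more copies must be corrupted for the vote to fail, which is much less likely than a single flip.

```python
import random

def send_with_repetition(bit, flip_prob=0.05):
    """Encode a bit as three copies, flip each independently with flip_prob, majority-vote."""
    received = [bit ^ (random.random() < flip_prob) for _ in range(3)]
    return int(sum(received) >= 2)   # majority vote

# Estimate the error rate with and without the repetition code.
trials = 100_000
raw_errors = sum(random.random() < 0.05 for _ in range(trials))
coded_errors = sum(send_with_repetition(0) != 0 for _ in range(trials))
print(f"raw error rate:   {raw_errors / trials:.3f}")    # about 0.050
print(f"coded error rate: {coded_errors / trials:.4f}")  # about 0.007 (3p^2 - 2p^3)
```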

Peter Shor showed how to encode quantum data into a larger system such that if an error happened to a small part of it, the quantum data could still be recovered by suitable decoding. There was a three-qubit code and a five-qubit code and, not long after, entire families of codes to protect quantum data from errors. While promising, these were still toy examples that worked in very specific instances. It wasn’t clear whether any required computation could be protected.
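
To see how encoding can rescue quantum data without copying it, here is a toy simulation of the three-qubit bit-flip code in NumPy. This is my own sketch, and it takes a simulation shortcut: the parity checks are read off the state vector directly, whereas a real device would measure parity operators without ever looking at the amplitudes. Still, it shows the essential trick: a flip on any one of the three qubits can be diagnosed and undone while the encoded amplitudes stay untouched.

```python
import numpy as np

# Three-qubit bit-flip code: protect quantum data alpha|0> + beta|1>
# by encoding it as alpha|000> + beta|111>. Basis index of |q0 q1 q2> is 4*q0 + 2*q1 + q2.

def flip(state, qubit):
    """Apply a bit-flip (X) error to one of the three physical qubits."""
    new = np.zeros_like(state)
    for i, amp in enumerate(state):
        new[i ^ (4 >> qubit)] = amp          # toggle the bit belonging to that qubit
    return new

def syndrome(state):
    """Parity checks (qubit0 vs qubit1, qubit1 vs qubit2), read from any occupied basis state."""
    i = int(np.flatnonzero(np.abs(state) > 1e-12)[0])
    bits = [(i >> 2) & 1, (i >> 1) & 1, i & 1]
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

alpha, beta = 0.6, 0.8                                        # some quantum data (0.6^2 + 0.8^2 = 1)
encoded = np.zeros(8); encoded[0], encoded[7] = alpha, beta   # alpha|000> + beta|111>

corrupted = flip(encoded, 1)                 # an error hits the middle qubit

# The syndrome identifies which qubit flipped without revealing alpha or beta.
which = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(corrupted)]
recovered = flip(corrupted, which) if which is not None else corrupted

print(np.allclose(recovered, encoded))       # True: the quantum data survived
```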

Recall that one crucial difference between bits and qubits is that the latter form a continuous set. For example, 1 is a different state of information than 1.000000001, and so on. Would we need a continuous set of instructions for quantum computers, one for every possible transformation of quantum data? Keep in mind that a single instruction suffices to do any computation with bits. It's typically called the NAND ("not and") gate, and it produces an output bit of 0 when both of its input bits are 1 and outputs 1 otherwise. It's amazing to think that everything your computer can do, which is basically everything, can be done by repeatedly applying one single instruction to long lists of zeros and ones. Naively, this would be impossible for qubits. Robert Solovay and Alexei Kitaev independently proved that it isn't.
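
To see why a single instruction suffices, here is a quick demonstration (my own, though the constructions are standard) that NOT, AND, OR, and XOR, and hence any digital circuit built from them, can all be assembled out of nothing but NAND gates.

```python
def nand(a, b):
    """The one instruction: output 0 only when both inputs are 1."""
    return 0 if (a == 1 and b == 1) else 1

# Standard constructions of the other gates out of NAND alone.
def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))
def xor_(a, b):  return and_(or_(a, b), nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "| AND", and_(a, b), "OR", or_(a, b), "XOR", xor_(a, b))
```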

The so-called Solovay-Kitaev theorem showed that any quantum algorithm could be achieved with a small fixed set of instructions. This was not only important theoretically but also in practice. Quantum engineers now need only concern themselves with making a handful of operations work well — the rest can be built up from them. But while this sounds promising, there was still a loophole for the naysayers. Error correction is not perfect, even for digital electronics. (Blue Screen Of Death, anyone?) Although quantum error correction demonstrated that the most common errors occurring during the execution of these instructions can be corrected, eventually, rare errors will happen, and those will spoil the computation. Imagine you have a leak in the roof and a bucket that can catch 99% of the water. Sounds great, but eventually, the 1% that’s missed by the bucket will flood the house. What you need is a bucket and a mop. With quantum computer errors, we had the bucket but not the mop.

The theoretical pinnacle of quantum computing research is known as the Fault-Tolerant Threshold Theorem. In the late '90s, several researchers independently discovered that quantum error correction allows quantum computation to continue indefinitely, so long as the rate at which errors happen is below some threshold. The exact value depends on the details of the computing model, but for the sake of argument, let's say it is 1%. What does this number mean? If you can engineer the fundamental components well enough that errors happen less than 1% of the time, then a properly constructed quantum code can push the error rate of the encoded computation down as far as you like, at the cost of extra qubits and operations. In other words, you can correct errors faster than they are made, provided those errors occur at a rate below the threshold.
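
The "bucket and mop" effect can be illustrated with a toy formula for concatenated codes (the standard back-of-the-envelope model, with a made-up threshold of 1%): each extra level of encoding squares the ratio of the error rate to the threshold, so below threshold the logical error rate plummets, while above threshold the encoding only makes things worse.

```python
# Toy model of concatenated error correction: with physical error rate p and threshold p_th,
# k levels of encoding give a logical error rate of roughly p_th * (p / p_th) ** (2 ** k).
p_th = 0.01

for p in (0.005, 0.015):                # one rate below threshold, one above
    print(f"physical error rate p = {p}")
    for k in range(4):
        p_logical = p_th * (p / p_th) ** (2 ** k)
        print(f"  {k} levels of encoding -> logical error rate ~ {p_logical:.2e}")
```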

And that was it. Before the turn of the century, all the pieces were in place, and we just needed to wait until someone built the damn thing! But here we are, decades later, without a quantum computer — what gives? Is quantum technology a pipe dream? Is it too dangerous to build? Does it require access to other dimensions? And, just how exactly is the thing supposed to work?

Written by Chris Ferrie

Quantum theorist by day, father by night. Occasionally moonlighting as an author. csferrie.com
