Myth 5: Quantum Computers Will Replace Digital Computers

Chris Ferrie
May 27, 2024

“Create the hype, but don’t ever believe it.” — Simon Cowell

We are immersed in a “replacement culture” that assumes new technologies will completely supplant their predecessors. This mindset, which has witnessed the transition from records to CDs to streaming or from landlines to wireless handsets to mobile phones, might naturally lead many to assume that quantum computers will replace digital ones.

It’s easy to be caught up in the buzz and excitement surrounding emerging technologies, especially something as ground-breaking as quantum computers. Predictions of them replacing our everyday devices, from laptops to smartphones, permeate popular culture and the media. This is a form of hype that misrepresents the potential of quantum computing.

This chapter will first describe the likely trajectory of quantum technology and then detail why conventional computers will always be needed and, in some cases, remain superior to quantum computers.

GPUs to QPUs

As mentioned in the introductory chapter, a quantum computer is unlikely to be a “computer,” as we colloquially understand it, but a special-purpose processing unit — the QPU. The term QPU (quantum processing unit) is analogous to the graphics processing unit (GPU).

The history of the GPU begins in the late 1990s, with Nvidia introducing the GeForce 256 in 1999, often celebrated as the first GPU. Originally, GPUs were designed to accelerate graphics rendering for gaming and professional visualization, offloading these tasks from the CPU (central processing unit) to increase performance and efficiency.

Over the years, GPUs evolved from specialized graphics accelerators into highly parallel, multi-core processors capable of handling a wide range of computing tasks. This transition was facilitated by the development of programming models like CUDA (Compute Unified Device Architecture), which allowed developers to use GPUs for tasks beyond graphics display, including scientific research, machine learning, and something to do with cryptocurrencies.

Adopting GPUs for parallel processing tasks significantly accelerated computations in fields requiring intensive mathematical calculations, transforming industries and research. This shift underscored GPUs as indispensable for specific applications rather than general computing needs.

Currently, QPUs are in their nascent stages, with ongoing research focused on overcoming significant challenges such as error rates, qubit coherence times, and scalability. Quantum computing is still largely experimental, with prototypes and small-scale quantum computers being tested in research labs and by a few commercial entities. To guess at the quantum technological future is an exercise in crystal gazing, but here we go…

I predict that the developmental trajectory of QPUs will mirror that of early digital computers and GPUs, transitioning from rudimentary and highly specialized devices to more sophisticated and versatile computing platforms. However, due to the inherent complexity and specialized nature of quantum computations, QPUs will evolve into complementary components of classical computers optimized for specific tasks intractable for CPUs. Much like GPUs, QPUs will eventually be utilized beyond the applications we envision for them today, but they will never replace CPUs or even GPUs.

Before we get into what QPUs might do, let’s outline what they certainly won’t do. But before that, we need to talk about what problems are hard and why.

Computational complexity

Theoretical computer scientists categorize computational problems based on the amount of computational resources (like time and space) required to solve them, essentially dividing them into “easy” and “hard” problems.

Imagine you’re trying to crack a safe that uses a pin code. Let’s start with a four-digit pin, where each digit can be either a 0 or a 1. For the first digit, you have two options. For the second digit, you also have two options, but now, combining it with the first digit’s options, you get four total possibilities (00, 01, 10, 11). By the time you reach the fourth digit, you’re looking at sixteen possible combinations (0000, 0001, …, 1111). Clearly, this safe is not very secure.

Now, imagine each digit in the pin can be more than just 0 or 1. Let’s say there are n options for each digit, and the pin is d digits long. The total number of combinations is n to the power of d (written nᵈ, which is n multiplied by itself d times). If the number of digits d stays fixed, but you increase the number of options n for each digit, then the total number of combinations grows polynomially because polynomials involve variables raised to fixed powers. For example, sticking with a four-digit pin but increasing the options for each digit from just 0 and 1 to, say, 0 through 9, the safe gets more secure because the total combinations jump from 16 to 10,000 (which is 10⁴).

On the flip side, if you keep n fixed (like sticking to 0 through 9 and not adding more) but increase the number of digits d, you see an exponential growth in combinations because the variable changing is in the exponent. This means for each new digit you add, you’re multiplying the total number of combinations by n, not just adding n more combinations. If the new safe has six digits instead of four, we go from 10,000 combinations to a million!
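The two growth regimes are easy to check for yourself. A minimal sketch in Python, counting combinations exactly as described above:

```python
def combinations(n: int, d: int) -> int:
    """Total pin combinations: n options per digit, d digits."""
    return n ** d

# Fixed length, more symbols per digit (d = 4): polynomial growth.
print([combinations(n, 4) for n in (2, 10)])     # [16, 10000]

# Fixed symbols, longer pin (n = 10): exponential growth.
print([combinations(10, d) for d in (4, 6)])     # [10000, 1000000]
```

Each extra digit multiplies the total by n, while each extra symbol per digit only raises a fixed power.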

Cracking locks where the number of digits is fixed but the number of options can grow is “easy” because the number of steps grows polynomially. Such a problem belongs to the class labeled P (for Polynomial time). Since problems in the class P are already easy for classical computers, quantum computers won’t significantly speed them up. An algorithm is called efficient if it runs in polynomial time. So, both classical and quantum algorithms for this problem are efficient.

On the other hand, cracking locks where the number of options is fixed but the number of digits can grow is “hard” because the number of steps grows exponentially. Note that even though the problem is identical to the one considered in the previous paragraph for any given lock, problem classes require some flexible notion of input size. Since the input is different between the two scenarios, the problems really do occupy different classes. When the number of digits can change, the problem belongs to the class NP (Nondeterministic Polynomial time). This class contains all problems for which a solution can be easily verified. In this case, given the correct combination, it is trivial to test it.

For the more difficult problem of cracking a lock with variable digits, the algorithm of trying every combination is not efficient. In fact, there is no efficient classical algorithm for this problem. It’s not expected that an efficient quantum algorithm exists for this problem, either. (Though, if you believed the last myth, you might be forgiven for thinking so.)
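A brute-force cracker makes the asymmetry concrete: finding the pin can take up to nᵈ guesses, but verifying any candidate takes a single comparison — that cheap verification is what puts the problem in NP. A sketch (the secret pin here is made up for illustration):

```python
from itertools import product

def crack(secret: str, symbols: str) -> tuple[str, int]:
    """Try every pin of len(secret) digits; return the pin and the guess count."""
    for count, guess in enumerate(product(symbols, repeat=len(secret)), start=1):
        candidate = "".join(guess)
        if candidate == secret:   # verification: one cheap check (the NP part)
            return candidate, count
    raise ValueError("secret uses symbols outside the alphabet")

# A hypothetical 4-digit decimal pin: up to 10**4 guesses to find, one check to verify.
pin, guesses = crack("7294", "0123456789")
print(pin, guesses)
```

Adding two more digits multiplies the worst case by 100; adding two more symbols per digit does not.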

In summary, quantum computers aren’t useful for cracking PINs for two reasons. Either the problem is too easy — in which case a classical computer suffices — or the problem is too hard for any computer. By now, you must be wondering what problems a QPU is actually useful for.


Ideally, we’d like to define a class of problems that can be solved with a quantum algorithm in polynomial time. In classical complexity theory, most problem classes are defined relative to a deterministic algorithm. But quantum algorithms end when quantum data is read, a random process. So, we need to add probability to the definition.

Bounded-error Quantum Polynomial time (BQP) is the class of problems that can be solved with a quantum algorithm in polynomial time with a probability of error of at most ⅓. (The ⅓ is an arbitrary choice, strictly less than ½ if you were wondering.) This class includes all the problems that are “easy” for a quantum computer. Some problems in this class are assumed to be hard for classical computers, but there is some fine print.
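The ⅓ is arbitrary because any error rate strictly below ½ can be driven down by repetition: run the algorithm several times and take the majority answer. A quick classical simulation (treating the quantum computer as a biased coin that answers correctly ⅔ of the time; the trial counts are my own choices for illustration):

```python
import random

def majority_vote(p_correct: float, repeats: int, trials: int = 20_000) -> float:
    """Estimate the failure probability of taking the majority answer of
    `repeats` independent runs, each correct with probability p_correct."""
    random.seed(0)  # fixed seed so the estimate is reproducible
    failures = 0
    for _ in range(trials):
        correct_runs = sum(random.random() < p_correct for _ in range(repeats))
        if correct_runs <= repeats // 2:   # the majority got it wrong
            failures += 1
    return failures / trials

print(majority_vote(2/3, 1))    # about 1/3: a single run fails a third of the time
print(majority_vote(2/3, 15))   # noticeably smaller
print(majority_vote(2/3, 101))  # vanishingly small
```

The failure probability shrinks exponentially with the number of repeats, which is why any constant below ½ defines the same class.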

Briefly, there are huge, decades-old open questions in the theory of computer science that would net you fame and fortune if you solved them. One such example: does P = NP? Another: is NP contained in BQP? We suspect the answer to both is “no,” but there is currently no mathematical proof. This is why careful statements about computational speed-up are couched in hedged language (and why less careful ones aren’t). We don’t really know if quantum computers are fundamentally different from classical computers, but we highly suspect they are. In this book, for brevity, I won’t attach every caveat to statements about complexity as I would in a graduate class in computer science.

As mentioned in the introductory chapter, the Quantum Algorithm Zoo lists problems known to be in BQP — that is, efficiently solvable with a QPU. It’s a long list, but most of the problems are extremely abstract. The most famous is probably factoring, which is solved by Shor’s algorithm. But that is better suited for the next myth. Here, I’ll describe the “obvious” one.

Quantum simulation

In both classical and quantum physics, scientists use mathematical models called Hamiltonians to describe how things behave — from the interactions of tiny particles to the workings of complex materials. However, understanding and predicting the behavior of these systems can be incredibly complex, requiring massive amounts of computational power, even for supercomputers. Today, many approximations are used to make the problem tractable, yet at the expense of accuracy.

For a generic Hamiltonian model, a detailed simulation with a digital algorithm is so inefficient that it’s not even conceivable we will ever have the classical computing power to perform it. However, it has been shown that some Hamiltonian model simulations can be carried out efficiently on a quantum computer.

At a high level, the idea is both simple and intuitive. Given the Hamiltonian model, break it up into smaller and smaller chunks until each chunk is an elementary quantum logic instruction. Then, carry those instructions out on a universal quantum computer! Of course, it’s harder than it appears to do the “chunking,” but it intuitively feels easy because we are using quantum physics to mimic other quantum physics.
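This “chunking” idea is known as Trotterization: for a Hamiltonian built from two pieces, H = A + B, the evolution e^(−iHt) is approximated by alternating many small steps of e^(−iAt/n) and e^(−iBt/n). A toy single-qubit sketch with NumPy, using the Pauli matrices X and Z to stand in for the two chunks (the choice of H and t here is mine, purely for illustration):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def exp_pauli(P, theta):
    # For any matrix with P @ P = I:  e^{-i*theta*P} = cos(theta) I - i sin(theta) P
    return np.cos(theta) * I2 - 1j * np.sin(theta) * P

t = 1.0
# (X + Z)^2 = 2I, so the exact evolution has the same closed form with sqrt(2):
s = np.sqrt(2)
exact = np.cos(s * t) * I2 - 1j * np.sin(s * t) * (X + Z) / s

def trotter(n):
    """Approximate e^{-i(X+Z)t} by n alternating small X and Z steps."""
    step = exp_pauli(X, t / n) @ exp_pauli(Z, t / n)
    return np.linalg.matrix_power(step, n)

for n in (1, 10, 100):
    print(n, np.linalg.norm(trotter(n) - exact))  # error shrinks roughly as 1/n
```

Each small step is an elementary quantum gate; on real hardware, the quantum processor carries the steps out physically instead of multiplying matrices.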

Why is this important? Well, as Feynman said, “Nature is quantum mechanical, damn it.” So, if we want to engineer and control Nature at the finest scales, we need to be able to model and simulate it in much the same way we model and simulate everything from bridges to airplanes before we attempt to build them. Indeed, quantum simulation may help scientists and engineers design new materials with exciting properties, like superconductors that work at room temperature or ultra-strong and lightweight materials. These could revolutionize fields like energy, transportation, and electronics. By simulating complex molecules accurately, quantum simulation could speed up drug discovery and even allow scientists to probe exotic new phenomena in a virtual environment free from terrestrial constraints, furthering our understanding of the universe’s fundamental workings.

These are surely lofty goals, and this area of quantum algorithm research is a work in progress. We are still far away from a simple pipeline that takes the desired properties of complex systems on one end and pops out solutions on the other. It may be that every potentially revolutionary application is one of the “worst case” scenario problems that even a quantum computer can’t efficiently solve. However, that doesn’t necessarily render quantum computers useless because two algorithms can be technically inefficient while one is much faster than the other.

Brief aside on solving “hard” problems

We try to solve “hard” problems all the time. Just because a problem doesn’t admit an efficient algorithm doesn’t mean we can’t solve it — we simply need to evaluate how much time and energy we are willing to put into solving it. Historically, algorithm breakthroughs have occasionally turned the tide, making previously intractable problems manageable. A classic example of this is the development of the Fast Fourier Transform (FFT) and its profound impact on signal analysis, which can serve as a bridge to understanding the potential of quantum algorithms that don’t admit exponential speed-ups.

The FFT is an algorithm for computing the Discrete Fourier Transform (DFT) and its inverse. The DFT is used across various fields of science and engineering, from signal processing to solving partial differential equations. Before the introduction of the FFT, computing the DFT was a computationally expensive task, requiring n² operations for n data points. The FFT reduced this complexity to n·log(n) operations. While this appears to be a modest square-root speed-up, it enabled the practical processing of large datasets.

Imagine a communications system such as Wi-Fi using 1,000 subcarriers in its signal. The difference between using the DFT and the FFT in encoding and decoding that signal would be roughly 100-fold. That’s the difference between one second and nearly two minutes, or one day and more than three months. While it doesn’t provide the highly sought-after exponential advantage, the FFT exemplifies how a clever algorithm can change the landscape of computational problem-solving. It allowed for real-time signal processing and practical image compression, among many tasks that were previously prohibitive.
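The gap is easy to demonstrate. A naive DFT performs n² complex multiply-adds, while NumPy’s FFT implementation does on the order of n·log(n). A sketch comparing the two on random data (the array size is an arbitrary choice for the demo):

```python
import numpy as np

def naive_dft(x):
    """O(n^2) direct evaluation of the Discrete Fourier Transform."""
    n = len(x)
    k = np.arange(n)
    # n x n matrix of roots of unity: W[j, k] = exp(-2*pi*i*j*k / n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return W @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

print(np.allclose(naive_dft(x), np.fft.fft(x)))  # same answer either way

# ...but wildly different operation counts at 1,000 subcarriers:
n = 1000
print(n**2, int(n * np.log2(n)))  # roughly 1,000,000 vs 10,000 operations
```

Same transform, same answer, around a hundredth of the work.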

While the hype around quantum computing speed-ups often invokes the term “exponential,” we also suspect that QPUs can provide square-root speed-ups generically. Again, while this sounds modest, keep in mind the FFT and all that it has done for society before dismissing it!

Grover’s search algorithm

Imagine you have a massive, unsorted database — think of a disorganized phone book with millions of entries. You need to find a specific person’s number, but you don’t know where to start. With a classical computer (or just yourself), you’d have to check the entries one by one. On average, you’d search through about half the database before finding the right one. In the worst case, you’d end up checking the entire phone book!

While we are unlikely ever to encounter a scrambled-up phone book, many problems can be recast as essentially the same thing. Finding the correct combination for a PIN is a perfect example. Thus, solving this problem faster is practically relevant, which is where quantum fits in. Grover’s search algorithm is a quantum algorithm that speeds up the process of finding a marked item in an unsorted database. How does it do that? First, as a fun exercise, let’s give the popular version that utilizes the myths already discussed.

Quantum-powered search. Instead of checking items one by one, a quantum computer can use superposition to examine all database entries at the same time. It’s like looking at all the pages of the phone book simultaneously. Okay, now you know that’s not really how it works. So, what’s a better explanation than simply “complex math?”

Recall from previous chapters that qubits are long lists of numbers that can be both positive and negative. Steps in an algorithm amount to multiplying and adding these numbers together to get a new list. When positive and negative numbers combine, they tend to produce smaller numbers, while like-signed numbers reinforce each other to produce larger numbers. Grover’s algorithm works by first assuming some “oracle” exists that multiplies the number at the location of the marked item by −1. So, all the numbers are the same, except for one, but it’s “hidden” — the algorithm can’t know which it is. The next step is diffusion, which spreads the numbers around like a wave dissipating. Equal numbers don’t affect each other, which we would expect from the symmetry of the situation. However, each number shares an asymmetric relationship with the marked number. The effect is that every number has a bit of its value shaved off and given to the marked item’s number.

By repeating the above process, the number corresponding to the marked item becomes larger and larger while every other number approaches zero. Think of it as the right entry becoming brighter while all others fade. The algorithm cleverly manipulates the quantum information so that the marked entry becomes more likely to be found when the data is finally read.
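The whole procedure — sign flip by the oracle, then “inversion about the mean” diffusion — can be simulated directly on a vector of amplitudes, no quantum hardware required. A small NumPy sketch for a 16-entry database (the marked index is an arbitrary choice):

```python
import numpy as np

N = 16                      # database size (4 qubits' worth of entries)
marked = 11                 # the index the oracle secretly recognizes

amps = np.full(N, 1 / np.sqrt(N))   # start in the uniform superposition

# Roughly (pi/4) * sqrt(N) iterations is optimal before the amplitude peaks
for _ in range(int(np.pi / 4 * np.sqrt(N))):
    amps[marked] *= -1                  # oracle: flip the sign of the marked entry
    amps = 2 * amps.mean() - amps       # diffusion: reflect every amplitude about the mean

probs = amps ** 2
print(probs[marked])        # close to 1: the marked entry has "brightened"
```

After three iterations, reading out the 16-entry register returns the marked item with probability above 95%, versus the 1-in-16 chance of a blind guess.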

The steps in Grover’s algorithm use the intuition of how waves behave and where interference will occur, which is completely different from how digital algorithms are designed. This illustrates both the difficulty and potential of quantum algorithms. They require new insights to develop, which means many problems we haven’t even thought about could be amenable to them.

While Grover’s algorithm gives merely a square root speed-up over exhaustively searching databases, recall the FFT and its implications.



Chris Ferrie

Quantum theorist by day, father by night. Occasionally moonlighting as an author.