Myth 6: Quantum Computers Will Break the Internet

Chris Ferrie
May 27, 2024

“Amateurs hack systems, professionals hack people.” — Bruce Schneier

Y2Q, Q-Day, Quantum Leap Day, Q-Break, the Quantum Apocalypse — whatever you want to call it, this is the day quantum computers are destined to shatter the internet, decrypting the world’s secrets as easily as a child unwraps their birthday gifts. In anything but hushed tones, we’ve been warned of a digital apocalypse, where privacy crumbles and chaos reigns, all with the flick of a quantum switch.

While it is true that a quantum algorithm exists that can decrypt most messages sent over the internet, the reality of using it is a different story altogether. Of course, like most stories, we need to start at the beginning.

A brief history of internet encryption

The internet’s precursor, ARPANET, was developed as a project by the Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense in the late 1960s. Encryption at this time was primarily a concern for military communications. Crudely speaking, there are only two types of people in this context: us and them. Every physical device can carry the same secret used to encrypt messages. However, in a non-military context, every pair of people in the network needs to share a separate secret. Every new person added to the network would have to find some way of transmitting a unique new secret to every person already in the network. This was a seemingly infeasible proposition until the mid-1970s.

In 1976, Whitfield Diffie and Martin Hellman introduced public key cryptography, fundamentally changing the way encryption could be used over networks. This method allowed two parties to create a shared secret over an insecure network channel. A year later, the RSA (named for Ron Rivest, Adi Shamir, and Leonard Adleman) algorithm was published, providing a practical method for secure data transmission. This algorithm became the foundation for secure communications on the internet, which is simply the amalgamation of multiple networks with standardized communication protocols.

In the 1990s, as the internet became more commercialized and accessible to the public, the need for secure transactions became apparent. The introduction of the Secure Sockets Layer (SSL) protocol by Netscape in 1994 was a critical step in enabling secure online communications and transactions. By the 2000s, with the increasing prevalence of cyber threats, encryption became indispensable. Technologies like Virtual Private Networks (VPNs), end-to-end encryption in messaging apps, and secure browsing protocols for websites became standard practice. Today, it is simply assumed that your interactions on the internet are private. But why and how?

To crack a code

To understand the challenge of breaking internet encryption, especially using RSA, let’s simplify the layers and protocols into a more digestible scenario. Imagine you’re shopping online at your favorite bookstore and decide to purchase a new book from your favorite author (winky face emoji). At checkout, you’re prompted to enter your credit card details. This is where RSA encryption steps in to protect your information.

When you hit “submit” on your credit card details, your browser uses the bookstore’s public key to encrypt this information. The process, devoid of the mathematical symbols you probably don’t want to see while enjoying a quick read, can be likened to locking your credit card details in a box. The bookstore has the only key (the private key) that can unlock the box. Even though the locked box travels across the vast and insecure network of the internet, only the bookstore can open it upon receipt, ensuring that your credit card information remains confidential. However, by inspecting the lock carefully enough, you could deduce what the (private) key looks like, make one, and unlock the box. The question is: how long would this “breaking” of the lock take?

The security of this transaction relies heavily on the RSA algorithm’s use of large prime numbers to generate both the public and the private keys. The public key is openly shared, while the private key remains a closely guarded secret of the bookstore. The strength of RSA lies in the fact that, with current computing power, it is virtually impossible to deduce the private key from the public key due to the difficulty of factoring the product of the two large primes used in their creation.
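The lock-and-box picture can be made concrete with a toy sketch of RSA in Python. The primes here are comically small (real keys use primes hundreds of digits long), and this illustrates only the arithmetic, not a secure implementation:

```python
# Toy RSA, for illustration only -- real keys use primes hundreds of digits long.
p, q = 61, 53                     # the two secret primes (tiny, for the demo)
n = p * q                         # public modulus: 3233
phi = (p - 1) * (q - 1)           # Euler's totient of n: 3120
e = 17                            # public exponent, chosen coprime to phi
d = pow(e, -1, phi)               # private exponent: modular inverse of e (Python 3.8+)

message = 1234                    # the "credit card details", encoded as a number < n
ciphertext = pow(message, e, n)   # your browser locks the box with the public key (n, e)
recovered = pow(ciphertext, d, n) # only the holder of d can unlock it

print(ciphertext, recovered)      # recovered == 1234
```

Anyone can lock a box with the public pair (n, e), but unlocking requires d, which is easy to compute only if you know the primes p and q.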

Consider the following number:


This number (2048 bits long if written in binary) is the product of just two prime numbers (numbers with no factors other than 1 and themselves). It’s not difficult to create such a number — simply take two large prime numbers and multiply them together. Your calculator could perform this feat almost instantly. The hard part is doing the reverse: factoring the larger number into its smaller components. I didn’t do that for the number above. That number is RSA-2048. At the time of writing, only RSA Security LLC, the company that published it, knows which two prime numbers were multiplied together to produce it. The advertised prize for “cracking” it was $200,000 — and although the challenge officially ended in 2007, the number remains unfactored.
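The asymmetry is easy to see in code. Multiplying two primes is a single operation, while undoing it by trial division takes a number of steps that grows with the square root of the product — exponential in the number of digits. A small sketch (the primes here are illustrative, chosen small enough to factor in a fraction of a second):

```python
def trial_factor(n):
    """Factor n by trial division. The loop can run up to sqrt(n) times,
    which is exponential in the number of digits of n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return None  # n is prime

p, q = 104729, 1299709         # two known primes
n = p * q                      # multiplying: one instant operation
print(trial_factor(n))         # factoring: ~100,000 loop iterations
```

This finishes quickly for a 12-digit product, but each extra pair of digits multiplies the work by roughly ten; at 617 digits (RSA-2048), trial division is hopeless, and even the cleverest classical algorithms remain far out of reach.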

The largest RSA number factored as part of this challenge was 768 bits long. It took the equivalent of a single computer 2,000 years to solve. In short, if someone could solve this so-called Integer Factorization Problem efficiently, they’d “break” the internet and blow some minds because of how difficult it appears to be. Shor’s algorithm, which has been mentioned several times, does exactly what internet security and cryptography experts thought was impossible — it efficiently solves the integer factorization problem.

Enter the quantum

Peter Shor introduced his factoring algorithm in 1994 at the 35th Annual Symposium on Foundations of Computer Science. The reaction was a swift mixture of skepticism, excitement, and concern.

As with any sudden and major breakthrough, many were justifiably skeptical. Quantum computing was still very theoretical, with only a handful of researchers working in this fringe field. As other researchers dug into the details, they realized its theoretical significance. It wasn’t just its potential to break encryption but the fact that it represented one of the first convincing examples of a quantum computer outperforming classical ones by a vast margin.

More than just a theoretical curiosity, Shor’s algorithm galvanized research into quantum computing. It provided a tangible goal — if a quantum computer could be built, it would have immediate and dramatic practical implications. This was of obvious concern to cryptographic security experts who were still primarily deployed in a military context.

Nowadays, we routinely teach Shor’s algorithm to undergraduate students. The lesson typically appears near the end of a semester-long course on quantum computing, which assumes the students have been introduced to far more technical and mathematical background than provided in this book. However, the gist can still provide insights.

Much like Grover’s search algorithm, which solves the search problem in a way completely different from its digital counterpart, Shor’s algorithm takes advantage of the way quantum data represents complex numbers. The best-known classical algorithm for factoring is called the General Number Field Sieve. In many ways, it’s more complicated than Shor’s algorithm. Indeed, quantum computers don’t appear to be useful aids in any of the steps in the best classical algorithm. Shor’s algorithm takes a more direct route to solving the problem — one not taken in classical algorithms because it’s far less effective.

Shor realized that factoring can be broken down into steps, and most of the steps are easy (even for digital computers), but one step is hard. The hard step is finding the pattern in a periodic function. While this is not conceptually hard — in fact, it’s the same problem the DFT and FFT discussed above are applied to — it is computationally hard for digital computers.
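To illustrate the structure (not the quantum speed-up), here is a sketch of that reduction in plain Python. Every step is classical and easy except the order-finding step, which is done here by brute force — that brute-force loop is exactly the part Shor's algorithm hands off to a quantum computer:

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r % n == 1, found by brute force.
    This period-finding step is the computationally hard one that
    Shor's algorithm performs on a quantum computer."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_reduction(n, a):
    """The easy classical steps of Shor's algorithm, for a base a coprime to n."""
    r = order(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None                  # unlucky choice of a; try another
    y = pow(a, r // 2, n)
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_reduction(15, 7))         # -> (3, 5)
```

For n = 15 and a = 7, the powers of 7 modulo 15 cycle as 7, 4, 13, 1, so the period is 4, and taking greatest common divisors recovers the factors 3 and 5. A quantum computer would find that period efficiently even when n has hundreds of digits.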

Recall that the list of numbers carried by quantum data is exponentially long in the number of qubits — that’s 2^n complex numbers for n qubits. The FFT speeds up the DFT but would still require more than 2^n steps to calculate it for this list of numbers. However, Shor and others showed that the DFT can be applied to the quantum data with a quantum computer using only about n^2 steps (quantum logic operations) — an exponential speed-up.
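Glossing over constant factors, the gap between roughly n·2^n classical steps and roughly n^2 quantum operations grows absurdly fast. A few lines of arithmetic make the point:

```python
# Rough operation counts for the Fourier-transform step, constants ignored:
# the FFT touches all 2**n amplitudes (about n * 2**n steps), while the
# quantum version uses about n**2 logic operations.
for n in (10, 20, 50, 100):
    print(f"n={n:>3} qubits: FFT ~ {n * 2**n:.1e} steps, quantum ~ {n**2} operations")
```

At n = 100, the classical count exceeds the number of atoms in a kilogram of matter, while the quantum count is ten thousand.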

As a quick aside, note that Shor’s algorithm is also not deterministic, underscoring again the critical differences in how we approach digital and quantum algorithm design.

How far away is it?

As you know, quantum computers today are not powerful enough to apply Shor’s algorithm to anything but small toy examples that grade-schoolers could have factored. So, when will they be big enough to break the internet?

In 2021, Craig Gidney and Martin Ekerå estimated that Shor’s algorithm could crack RSA-2048 with a 20,000-qubit quantum computer. This assumes those qubits work more or less perfectly. However, even if they don’t, we can rely on redundancy and error correction to get us there. Many poorly performing components can come together to act as a better-performing “virtual” component. Virtual qubits are also called logical qubits, whereas the raw physical systems (like atoms and photons) are called physical qubits.

Research in error correction is constantly improving, moving toward higher tolerance of errors and lower overheads. Using the best-known techniques and projected error rates, Gidney and Ekerå suggest that 20 million physical qubits would be required. That sounds like a lot at a time when even glossy, hyped-up press releases claim qubit numbers in the hundreds.

Gordon Moore, co-founder of Intel, made an astute observation in 1965: the number of transistors packed into an integrated circuit seemed to double roughly every two years. This observation, later termed Moore’s Law, became an informal but remarkably accurate predictor driving exponential growth in computing power over decades. The underlying principle is a feedback loop — as computing power increases, chip manufacturing processes improve, enabling even more transistors to be crammed into smaller spaces, leading to a further boost in computational ability.

While quantum computing is still in its early stages, there are signs it may follow a similar exponential growth trajectory — a quantum Moore’s Law, as it were. If we start with roughly a hundred qubits today, and the number of qubits doubles every two years, it suggests that in about 34 years (around 2058), quantum computers could become powerful enough to crack even the most robust RSA encryption standards. Researchers like Jaime Sevilla and Jess Riedel support this timeline, publishing a report in late 2020 that claimed a 90% confidence of RSA-2048 being factored before 2060.
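The back-of-the-envelope arithmetic behind that projection can be sketched as follows. The starting count, target, and doubling period are all assumptions, as in the text, and nudging any of them shifts the answer by a year or two:

```python
from math import log2

qubits_today = 100            # assumed starting point (circa 2024)
target_qubits = 20_000_000    # Gidney and Ekera's physical-qubit estimate
doubling_period = 2           # years per doubling, by analogy with Moore's Law

# Number of doublings needed, times the years per doubling
years = doubling_period * log2(target_qubits / qubits_today)
print(f"roughly {years:.0f} years, i.e. around {2024 + round(years)}")
```

Going from a hundred qubits to twenty million requires about eighteen doublings, which at two years per doubling lands in the late 2050s — consistent with the estimates above.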

Store now, decrypt later

While factoring large numbers is the theoretical target, the practical implications are worth considering today. Take, for example, the online word processor used to write this very text, which secures its communication with a 256-bit public key — almost certainly an elliptic-curve key rather than RSA. Bit for bit, elliptic-curve keys are far stronger than RSA keys against classical attacks, but a quantum computer would need fewer qubits to break a 256-bit elliptic-curve key than to break RSA-2048, suggesting it would be vulnerable sooner than the 2058 estimate.

Imagine a nefarious entity collecting and storing encrypted data today. Even without the ability to decrypt it immediately, they could wait until sufficiently powerful quantum computers exist, say around 2058, and then easily decipher the stored secrets. Why does that not bother me? Well, I don’t have 34-year secrets. But government agencies likely do. Secrecy has a shelf life, and quantum computers make that shelf life feel suddenly more concrete.

While the constant hype about quantum computers can desensitize us to such threats, governments and major corporations have weighed the odds. In late 2023, US security agencies published a factsheet on the impacts of quantum computing, urging all organizations to begin early planning for Q-Day — though, of course, they didn’t use that term.

The goal of so-called post-quantum cryptography is to develop crypto-systems that are secure against quantum computers running Shor’s algorithm and, hopefully, any other quantum algorithm. Ideally, the system would also work with existing communications protocols and networks.

The National Institute of Standards and Technology (NIST) is an agency of the United States government whose mission spans many domains, including information technology and cybersecurity. NIST’s involvement in cryptography dates back several decades, with milestones such as the Data Encryption Standard (DES) in the 1970s and the Advanced Encryption Standard (AES) in the early 2000s. It’s unsurprising, then, that NIST has been proactive in preparing for the transition to post-quantum cryptography. It launched the Post-Quantum Cryptography Standardization Project in 2016 with a call for algorithms that could withstand attacks from quantum computers.

Since then, NIST has gathered proposals for quantum-resistant cryptographic algorithms from around the world and selected candidates for further analysis and testing by hosting public workshops and soliciting feedback to ensure they meet various criteria. As of the latest update, NIST has announced its intention to publish the new post-quantum cryptographic standards in 2024. These standards will include specifications for quantum-resistant algorithms that have been rigorously tested and evaluated. Major internet companies like Google and Apple have already announced the migration of some services to include additional security layers with quantum-resistant techniques. So, the internet may be safe after all! (Don’t worry, your Bitcoins will likely be safe as well.)

The quantum future

Amidst concerns of a quantum apocalypse, it’s vital to differentiate between the messengers of doom — often amplified by tech journalism — and the voices of the quantum research community. The latter seeks not to incite fear but to illuminate the path toward understanding the ultimate limits of our ability to know and control the natural world. In the viral vortex of discussions surrounding quantum computing’s potential to disrupt current cryptographic standards, a beacon of hope shines through: Quantum Key Distribution (QKD).

In a typical QKD scenario, two people wishing to communicate securely use a quantum channel to exchange qubits that will be used to encode bits of a secret key. The quantum properties of these qubits ensure that any attempt by an eavesdropper to intercept and read the key would irreversibly alter their state, thus revealing the intrusion. The fact that reading qubits destroys the quantum information is not a bug but a feature!

Once a secret key is successfully exchanged and its integrity verified, it can be used to encrypt and decrypt classical messages using conventional cryptographic techniques. Thus, the security of communication is guaranteed not by the hardness of computational problems but by the inviolable principles of quantum physics.
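The sifting-and-checking logic of a BB84-style protocol can be sketched classically by modeling each qubit as a (bit, basis) pair, where measuring in the wrong basis yields a random outcome. This is a caricature of the physics, but it shows why an eavesdropper is detectable: an intercept-and-resend attack corrupts roughly a quarter of the sifted key bits.

```python
import random

def bb84(n_qubits, eavesdrop=False, seed=0):
    """Caricature of BB84: a qubit is a (bit, basis) pair, and measuring
    in the wrong basis yields a random bit. Returns (sifted key length,
    number of errors Alice and Bob would find by comparing samples)."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_qubits)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n_qubits)]

    channel = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:                         # intercept-and-resend attack
            eve_basis = rng.randint(0, 1)
            if eve_basis != basis:
                bit = rng.randint(0, 1)       # wrong basis: random outcome
            basis = eve_basis                 # qubit re-prepared in Eve's basis
        channel.append((bit, basis))

    bob_bits = [bit if basis == b else rng.randint(0, 1)
                for (bit, basis), b in zip(channel, bob_bases)]

    # Sifting: keep only the positions where Alice's and Bob's bases agree
    matched = [(a, b) for a, b, x, y in
               zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
    errors = sum(a != b for a, b in matched)
    return len(matched), errors

print(bb84(1000))                   # no eavesdropper: zero errors
print(bb84(1000, eavesdrop=True))   # Eve corrupts ~25% of the sifted bits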

QKD heralds a future where communications can be secured against any adversary, regardless of their computational power. This quantum-safe key distribution method could revolutionize how we protect sensitive information, rendering it immune to the threats posed by quantum computing advancements.

Since we don’t have quantum channels lying around, implementing QKD on a global scale faces technological and infrastructural challenges. Specialized hardware will be required to mitigate the difficulty of transmitting qubits over long distances without degradation. Research and development in quantum repeaters and satellite-based QKD are among the avenues being explored to overcome these obstacles.

While Y2Q might get clicks, quantum computing will eventually bring about the opposite of disaster — a global quantum internet providing an additional uncrackable layer of security.



Chris Ferrie

Quantum theorist by day, father by night. Occasionally moonlighting as an author.