The Quantum Hypothesis

Chris Ferrie
11 min read · Jul 26, 2020

Welcome to Introduction to Quantum Computing. I am your guide, Associate Professor Chris Ferrie, a researcher in the UTS Centre for Quantum Software and Information. This is Lecture 1. The syllabus is here:

You are expected to be prepared for this week by reviewing the background material listed in Lecture 0.

What will you learn this week?

This week is your quantum orientation. What is quantum computing as compared to classical computing? And what is classical computing, for that matter? You’ll be introduced to the big picture and some of the tools you’ll need in the subject.

What will you be able to do at the end of Week 1?

At the end of this module, you should be able to answer these questions:

Why is digital computing called “classical”, and why is my laptop a “classical” computer?
When and why did quantum computing form as a field?
What is so exciting about quantum computing anyway?
What might power quantum computers?
Where can you find tools for quantum programming, and how do you use them?

(The last two will be covered in Lab 1. A link will be available after the lecture.)

What is classical computing?

Almost any discussion of quantum computing — whether it is in a research article, popular science magazine, or business journal — invokes some comparison to what is called “classical” computing. It’s nearly impossible to talk about quantum computing without using the phrase. So, we better start there.

The term classical in classical computing is borrowed from conventions in physics. In physics, we often denote pre-1900 physics as “classical” and post-1900 physics as “modern”. Modern physics includes general relativity and quantum physics. General relativity is Einstein’s theory of curved space and time which explains the force of gravity. Computational theory might have some things to say about places of extreme gravity, like black holes, but we’re going to stay grounded on Earth and won’t say more about general relativity.

Often in quantum physics it is useful to compare the results of some theory or experiment to what classical physics would predict, and so there is a tradition of comparing quantum vs. classical. This convention was adopted not by computer scientists or engineers, but by the quantum physicists who first began studying quantum computing. The adjective classical in classical computation does not mean it is pre-modern — it is just a way to distinguish it from quantum computation.

Moreover, it is important to note that modern digital computing would not be possible without quantum engineering. The components inside your computing devices, like your smartphone, are small enough that quantum mechanics plays an important role in how they were conceived, engineered, and function. However, what the device does at the most abstract level — that is, compute — can be thought of in classical physics terms. Indeed, anything your smartphone can do, a large enough system of levers and pulleys can do. A quantum computer will function differently at both the device level and the abstract level.

In the context of quantum computing, classical computing can simply be thought of as the manipulation of bits (variables that can take the value 0 or 1) based on a set of rules. A Turing machine is the conventional model of a device that does this. Remarkably, it seems that any other model of computation you can think of can at best perform the same set of computations as a Turing machine. This led to the Church-Turing Thesis, which states that any physical process can be simulated by a Turing machine. The Extended Church-Turing Thesis further states that this can be done efficiently.
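
To make the “bits plus rules” picture a little more concrete, here is a tiny Python sketch (purely illustrative, the function names are my own) showing how a single rule, the NAND gate, can be combined to build other familiar logic operations:

```python
# A toy illustration of "manipulating bits according to rules": the NAND
# rule alone is enough to build other logic gates, which is one way to see
# that bits plus simple rules capture digital computation.

def NAND(a, b):
    return 1 - (a & b)

def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def XOR(a, b):
    c = NAND(a, b)
    return NAND(NAND(a, c), NAND(b, c))

# Print the truth tables, built entirely out of NAND.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  XOR={XOR(a, b)}  NOT a={NOT(a)}")
```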

Efficiency has a technical definition, which roughly means that the time to solve the problem does not “blow up” as the problem size increases. For example, writing a number down remains easy even as the number gets large. Writing the number 1000 takes only twice as long as writing the number 10, because 1000 has 4 digits and 10 has 2 digits. However, counting to 1000 takes 100 times longer than counting to 10, because you have to count to 10, then 10 more, then 10 more, and so on, 100 times!
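
If you would like to check this scaling for yourself, here is a short Python sketch (the loop simply mimics counting one step at a time):

```python
# Writing a number down scales with its number of digits (which grows
# slowly), while counting to it scales with its value (which grows fast).

for n in (10, 1000, 1_000_000):
    digits = len(str(n))                      # effort to "write" the number
    count_steps = sum(1 for _ in range(n))    # effort to "count" to the number
    print(f"n = {n}: {digits} digits to write, {count_steps} steps to count")
```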

Brief history of quantum computing

Quantum computing was first suggested by Paul Benioff in 1980. Prior to this, interest was growing in the physical limitations on computation, and it became clear that the mathematical models of computation did not account for the laws of quantum physics. Once this was understood, the obvious next step was to consider a fully quantum mechanical model of a Turing machine. In parallel, Richard Feynman lamented that it was difficult to simulate quantum physics on conventional computers and mused that a computer built on the principles of quantum physics might fare better. At the same time in Russia, Yuri Manin also hinted at the possibility of quantum computers, noting both their potential access to exponentially large state spaces and the difficulty of orchestrating such a system.

In 1985, David Deutsch proposed the universal quantum computer, a model of quantum computation able to simulate any other quantum system. This is essentially equivalent to the model we work with today; however, at the time it still wasn’t clear that there was any advantage to doing this.

Finally, in 1992, Deutsch and Richard Jozsa gave the first quantum algorithm providing a provable speed-up — a quantum computer can solve in one run of the algorithm what might take a conventional computer a number of steps that grows with the problem size. It’s an admittedly contrived problem, but it is quite illustrative, and the so-called Deutsch-Jozsa algorithm will be the first quantum algorithm we study in this subject. One of the concerns at the time was how robust a quantum computer would be to noise, and, indeed, any small amount of error in the algorithm destroys the computation. Of course, we could redefine the problem to be an approximate one. That is, the problem specification allows a small amount of error. The Deutsch-Jozsa algorithm still solves this version (it solves the problem perfectly, so also approximately). However, a digital computer can now easily solve the approximate version of the problem, and so the “quantum speed-up” disappears.
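
To give a feel for what “a growing number of steps” means here: the Deutsch-Jozsa problem asks whether a function, promised to be either constant or balanced (outputting 1 on exactly half of its inputs), is one or the other. Here is a rough Python sketch of the classical query counting; the names and setup are just illustrative:

```python
# A rough sketch of the classical cost of the Deutsch-Jozsa problem: to be
# certain, a deterministic classical algorithm may need to query the
# function on 2**(n-1) + 1 inputs in the worst case, while the quantum
# algorithm needs a single run.

from itertools import product

def classify(f, n):
    """Query f until we can be sure whether it is constant or balanced."""
    seen = set()
    for queries, x in enumerate(product([0, 1], repeat=n), start=1):
        seen.add(f(x))
        if len(seen) == 2:                 # two different outputs: balanced
            return "balanced", queries
        if queries == 2 ** (n - 1) + 1:    # more than half agree: constant
            return "constant", queries
    return "constant", queries

n = 4
print(classify(lambda x: 0, n))        # constant: takes 2**(n-1) + 1 = 9 queries
print(classify(lambda x: x[-1], n))    # balanced: spotted after only 2 queries
```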

Ethan Bernstein and Umesh Vazirani were able to modify the problem in 1993 to one where small errors were allowed. The quantum algorithm then solved the problem in a single step while any classical algorithm required many steps. In 1994, Dan Simon devised another problem along the same lines. However, this time the quantum algorithm to solve it provided a provable exponential speed-up. That is, as the input size of the problem grows, the best classical algorithm requires a number of steps growing exponentially, while the quantum algorithm requires at most a linear number of steps.

Simon’s algorithm was the inspiration for perhaps the most famous quantum algorithm: Shor’s algorithm. In 1994, Peter Shor presented a quantum algorithm that can factor large numbers exponentially faster than the best known classical algorithm. Since most public key cryptography is based on the assumed hardness of factoring, this sparked huge interest in quantum computing. The race was on.

Detractors quickly dug in their heels. The most common criticism was the incredible fragility of quantum information. It was (and still occasionally is) argued that a quantum computer would have to be so well isolated from its environment as to make it practically infeasible. Classically, error correction employs redundancy — do the same thing several times, and if an error happens to one of the copies, the majority still tells you the correct answer. However, as we will see later, quantum data cannot be copied, and so it would seem that error correction is not possible. Shor showed how to encode quantum data into a larger system such that, if an error happened to a small part of it, the quantum data could still be recovered by suitable decoding.
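
To make the redundancy idea concrete, here is a minimal Python sketch of a three-copy repetition code with majority-vote decoding (the quantum version we will meet later is considerably more subtle):

```python
# Classical redundancy in miniature: store one bit as three copies and
# decode by majority vote, so flipping any single copy is corrected.

def encode(bit):
    return [bit, bit, bit]

def decode(copies):
    return 1 if sum(copies) >= 2 else 0

codeword = encode(1)
codeword[0] ^= 1            # an error flips one of the three copies
print(decode(codeword))     # prints 1: the original bit is still recovered
```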

The Solovay-Kitaev theorem then showed that any quantum operation can be approximated efficiently using only a small, fixed set of instructions. The pinnacle, at least theoretically, was the Fault-Tolerant Threshold Theorem, which states that quantum error correction allows quantum computation to continue indefinitely, so long as the rate at which errors happen is below some threshold. And that was it. By 1997 all the pieces were in place, and we just needed to wait until someone built the damn thing! But here we are in 2020 without a quantum computer — what gives?

Where are we now?

These lectures are not about building real physical quantum computing devices. That process has been slow. It may take many more years. But the scientific and engineering challenges are beyond the scope of these lectures. However, it is important for you to know what the “state of play” is.

And, well, it’s hard to say. Also, it would be quite foolish to record oneself making a prediction. We would not want to repeat the mistake of the president of IBM in 1943 when he said, “I think there is a world market for maybe five computers.” Oops.

What can be said for certain is that we, those of us located in the spacetime region of Earth 2020, are in the midst of a real scientific and technological revolution — call it the second quantum revolution if you will. We are now building devices for the first time that defy our conventional understanding and computational ability.

Until now, we didn’t build things we could not simulate with mathematics or computer code. No one is going to fly a plane that hasn’t been proven to fly in simulation. No one is going to build a bridge that hasn’t been proven stable by computer modelling. And, no one was going to fly people to the moon without checks and double checks of the maths from Katherine Johnson.

But now we have human-made devices, housed in environments we have engineered to be the coldest places in the universe, that access states of matter we cannot model with solvable mathematical equations or simulate with the world’s fastest supercomputers. Quantum engineers are the new explorers, and soon you will be the ones guiding them.

What makes quantum computing so powerful?

It is often said that a quantum computer uses “quantum effects” to calculate some things faster than a classical computer. But what are these so-called “quantum effects”, and why exactly do they provide an advantage? You’ll be surprised and annoyed to find out that it is not so easy to say. But, at the end of this subject, you’ll know precisely what they are — and also why it is so difficult to put it into plain English.

Some of the candidate concepts are easy enough to name: superposition, interference, dimension, entanglement, and unitary (reversible) evolution. At the same time, we will learn about the difficulties in dealing with quantum bits: measurement, randomness, uncertainty, no-cloning, and decoherence. We’ll get to all of these in due time. But I want to leave you with some intuition for how quantum information processing can solve a problem that is impossible with classical data.

Consider the following game. (We’ll be playing a lot of hypothetical games in this subject — academics are an exciting bunch!) You and some of your friends stand in a line. You are each secretly given 1 bit of information (0 or 1). Each player can only pass 1 bit of information to the next in line. To win, the person at the end of the line must report whether the secret information contained an even or odd number of 1’s (the “parity”, for the computer scientists out there).

At first this seems impossible. The obvious idea is to repeat your secret bit to the next in line, but that doesn’t work, because the next person would then have to ignore what you said in order to pass on their own secret bit. There’s not enough bandwidth in this 1-bit channel between each player! But, aha!, it’s not that hard to solve if you perform a small computation before sending your bit.

You can’t find out how many 1’s were in the secret bits before you, but you can find out whether there was an even or odd number of them. Take the secret bit you were given and the bit your friend passed to you. If they are both 1 or both 0, send a 0 to the next person. If one of the bits is a 1 and the other a 0, send a 1 to the next person. If everyone does this, each person learns whether the number of 1’s in the secret bits before them was even or odd, and the last person, after including their own bit, can report the answer for all the secret bits!
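
Here is the same strategy as a short Python sketch, with the whole line of players collapsed into one loop (the setup is just illustrative):

```python
# The classical parity strategy: each player XORs the bit handed to them
# with their own secret bit and passes the result down the line. The final
# message is the parity of all the secret bits.

import random

secret_bits = [random.randint(0, 1) for _ in range(10)]

message = 0                      # the first player starts from 0
for my_bit in secret_bits:
    message ^= my_bit            # same bits -> send 0, different bits -> send 1

print("secret bits:    ", secret_bits)
print("reported parity:", message)                 # 0 = even number of 1's, 1 = odd
print("actual parity:  ", sum(secret_bits) % 2)    # agrees with the report
```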

Let’s make it a little bit harder. Consider the same setup, but instead of receiving secret bits, you and your friends each receive a real number — like 2, 0.25, 1000, or even π! You can still only pass a single bit to the next player in line, but you are all promised that the secret numbers added together result in an integer. You win if you can find out whether that integer is even or odd.

Can you win with the same strategy? No! The very first person in the line is toast if they don’t receive an integer as their secret number. So, it’s a trick game. You and your friends need more resources to win. What if, instead, you had a disc that you could pass along the line? Would that work? Yes!

If you rotate the disc by 180° times the secret number you received, and everyone does the same, then only one of two things can happen. Either, the sum of the secret numbers is even and the total rotation of the disc is an even multiple of 180°, or the sum is odd and the total rotation of the disc is an odd multiple of 180°. By spinning around in your office chair, you can convince yourself that these two possibilities are perfectly distinguishable!
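
If you prefer code to office chairs, here is a small Python sketch of the disc strategy (the particular secret numbers are just an example, rigged to sum to 7):

```python
# The disc strategy: each player rotates the disc by 180 degrees times their
# secret number. Since the numbers sum to an integer, the final orientation
# is a multiple of 180 degrees, and whether that multiple is even or odd is
# exactly the parity we are after.

import math

secrets = [0.25, math.pi, 2.75 - math.pi, 1000.0, -996.0]   # sums to 7 (odd)

angle = 0.0
for x in secrets:
    angle += 180.0 * x           # each player rotates the disc in turn

multiple = round(angle / 180.0)  # rounding absorbs tiny floating-point error
print("even" if multiple % 2 == 0 else "odd")   # prints "odd"
```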

And, that’s it, that is the difference between classical and quantum information! Wait… what? Where’s the quantum stuff?

Well, we will see that the “disc strategy” is equivalent to each player passing a single qubit to the subsequent player after performing a quantum logic operation. But, more than that, it is better than the disc strategy because — as we will see much later in the subject — quantum data can be protected against errors, while analog data cannot. (If you are old enough to have made mix-tapes by recording songs from the radio, you’ll know immediately why. If not, sorry, only 90’s kids allowed for this nostalgic aside.) In any case, I hope you see that different kinds of information are not necessarily equivalent. Quantum data can be a resource used to achieve things faster than with classical data, and even to do things which are impossible with only bits!
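
For the curious, here is a minimal numpy sketch of one way the qubit version can work. The encoding used here, imprinting each secret number as a phase on the qubit, is an assumption made for illustration; we will treat qubits and measurements properly later in the subject.

```python
# One way to play the game with a single qubit: start in |+> = (|0> + |1>)/sqrt(2)
# and let each player multiply the |1> amplitude by exp(i*pi*x) for their secret
# number x. The numbers sum to an integer N, so the final state is |+> if N is
# even and |-> if N is odd, and one measurement in the +/- basis distinguishes
# the two cases perfectly.

import numpy as np

secrets = [0.25, np.pi, 2.75 - np.pi, 1000.0, -996.0]    # sums to 7 (odd)

state = np.array([1.0, 1.0]) / np.sqrt(2)                # the qubit starts in |+>
for x in secrets:
    state = np.array([state[0], np.exp(1j * np.pi * x) * state[1]])

plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
p_plus = abs(np.vdot(plus, state)) ** 2                  # probability of outcome |+>
p_minus = abs(np.vdot(minus, state)) ** 2                # probability of outcome |->
print("even" if p_plus > p_minus else "odd")             # prints "odd"
```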

Today, all computers have roughly the same architecture. It’s not like a workshop, which is full of completely different tools for every job. A single computer can do any computational task. Some problems are hard to solve, of course. But if you can prove it is hard to solve on one computer, then you know there is no point in trying to design a new kind of computer. Why? Because the first computer can simply simulate the new one!

This all seems like obvious common sense to the modern digital citizen. But is it true? When we think about simulating a physical process based on quantum mechanics, it appears that a digital computer cannot do this efficiently. Enter: quantum computing.
