PAUL: Tonight's lecture is nominally the second in a series of Messenger lectures being given by Scott Aaronson from the University of Texas at Austin, though conceptually it's the prequel to yesterday's more technical talk. First, some obligatory words about this particular lecture series. I apologize to those of you who were here yesterday.
There's a small amount of overlap just reminding you that Cornell Messenger lectures were established in 1924 with a gift from Hiram Messenger, Cornell class of 1880, longtime teacher of mathematics, who established this set of lectures to provide a course on the evolution of civilization for the special purpose of raising the moral standard of our political, business, and social life. So it had a-- there's a message in these lectures that goes beyond the specific technical aspects that are discussed. And they are still regarded as one of the most important of the Cornell extracurricular activities. They're in all fields. They're particularly familiar to physicists going way back to the 1920s with Robert Millikan and, of course, Feynman's 1964 lectures on the character of physical law.
This is the third series of Messenger lectures that I've had the privilege to host. All of them have been given by people with a former Cornell connection, either as a Cornell undergrad or a Cornell graduate student, as I was. Scott himself was a Cornell undergrad, earning a bachelor's degree here in the year 2000 in Computer Science. He continued with a PhD from Berkeley in 2004, spent three years as a post-doc at the Institute for Advanced Study and the University of Waterloo, and then nine years as a professor at MIT. And in 2016, he joined the Department of Computer Science, University of Texas at Austin as founding director of their new quantum computing center, and as well the David J. Bruton Centennial Professor.
Among his awards are a Presidential Early Career Award, which got him a photo op with the previous president. He's working hard here this week, giving both a physics colloquium yesterday and a computer science colloquium on Thursday. As far as I know, he's the first person to do that, giving colloquia in both departments in the same week. And as well, he's participating in a few events with undergraduates, an important engagement component of this lecture series. For further evidence of his popular engagement, he's written a book on quantum computing, Quantum Computing Since Democritus, which I happily use as a supplement in my own course on the subject.
But his public engagement is really highlighted by his popular blog, "Shtetl-Optimized," where he expounds upon the nature of life, the universe, and all that, providing answers and entertainment in response to questions on everything from the polynomial hierarchy, to free will, time travel, and the anthropic principle, and as well to the most pressing of today's societal problems. His dedicated readership consists of research experts, knowledgeable members of the general public, and trolls-- including, I believe, the current US president in the latter category. Please welcome--
SCOTT AARONSON: He hasn't commented on my blog, I think.
PAUL: Please welcome Scott Aaronson.
[APPLAUSE]
SCOTT AARONSON: Well, thank you again, Paul. It's an honor to be giving these lectures, and humbling. I'm not sure whether I'll be able to raise the moral character of civilization with today's talk, but I'll try.
So I decided to use Google Image search to see what I looked like when I was an undergrad here. So that is, I think, me, with Charlie Van Loan nearly 20 years ago. You can see I wore, like, exactly the same kind of clothes as I do today, except many waist sizes smaller. And I also did a Google Image search for quantum computer to see what those looked like. And that's apparently it.
It will become clear in this talk that I am not an engineer. I'm a theorist. They normally don't let me into the lab because I'll just break something. But I don't think that that's what they look like. And you can find papers, and slides, and popular introductions to quantum computing and all that stuff at my home page.
So my starting point for this talk has nothing to do with quantum mechanics. It's simply that there are certain technologies that would be great if we had them, and yet that we never seem to see. So what are the fundamental limits of technology? What kind of things would we like?
Well, let's start with warp drive. Where is it? We've been waiting. How about the perpetual motion machine? The only real solution to the world's energy problems?
Or finally, how about what I call the Uber computer. Now, this is a device that wouldn't necessarily tell you the meaning of life or anything like that, but it at least answers any well-posed mathematical question that's put to it. So here, we see it saying Goldbach's Conjecture, true, next question. That is, every even number four or greater is the sum of two prime numbers. It would solve any other math problem immediately.
Now some people probably think that that's what computers today already are. But one of the most basic things we learn in computer science is that well, even computers have limitations. They take-- in fact, even for purely mathematical problems, they take time to do things. There are certain problems, like the famous halting problem, deciding whether a given program will ever stop running, that are provably unsolvable by any computer program that could ever be written. That's what Alan Turing did in the 1930s before he went to break Nazi codes.
There are other problems that are solvable by computers. But as far as we know, only by using astronomical amounts of time. We'll be very interested in this talk in those kinds of problems.
Now, the point I want to make is that for the first two of these technologies, we really understand something deep about why we don't have them. It's not just that the engineers ran out of money or that it was too hard for them to build. These technologies really conflict with what we now see as fundamental principles of physics.
In the case of the warp drive, those principles are those of special relativity, or you could say the causal structure of space-time. If you send the signal faster than light, then in someone's frame of reference, you would be sending that signal backwards in time. So you would then need to worry about, well, what happens if you send a signal back in time that causes someone to kill your grandfather, so therefore you're never born, so therefore the signal doesn't get sent, so therefore you are born, and so forth. [EXPLOSION SOUND] No one wants to worry about that. For the perpetual motion machine, what rules that out is, of course, the second law of thermodynamics, the fact that entropy goes up and up.
But what about for the third one? Are there fundamental physical limits on what can be computed or computed in reasonable amounts of time? And if we understand those limits, then would they have any implications back for physics? So those are the main questions that motivate me.
So to discuss these questions, we need to immerse ourselves a little bit in the world where computer scientists like to spend their time. I've shown here just a small part of this world, the universe of computational problems. We like to classify them.
I have a website that I made 15 years ago called "The Complexity Zoo" that has about 500 of these classes. I'll just mention four of them. How's that?
So maybe the most basic class of computational problems is called P, for polynomial time. This is all of the yes or no questions-- families of questions for which there is an algorithm using a conventional computer-- like the one in your pocket-- to answer the questions using a number of steps that grows only like the size of the problem instance raised to some fixed power. This is what we call polynomial time. So linear in the problem size, or quadratic in the problem size, but at any rate, not exponential in the problem size. And P actually includes most of what we use our computers for on a day-to-day basis-- adding numbers, sorting numbers, almost all of what would go on under the hood in your web browser, or Angry Birds, or whatever.
NP stands for non-deterministic polynomial time. That's the class of all the problems for which if someone handed you a solution, then you could efficiently recognize it. A famous example would be finding the prime factors of an enormous number, let's say, deciding whether the number has a prime factor ending in 7 or something like that. This problem might take an extremely long time to solve if, say, you were given a 10,000-digit number. But if someone wants to convince you that it did have such a prime factor, well, then, they'd just have to show it to you, and you can check it for yourself. That's the key property of an NP problem. They might or might not be efficiently solvable, but at least you can quickly recognize a solution.
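To make that asymmetry concrete, here is a minimal Python sketch; the function names and the example numbers are purely illustrative. Checking a claimed factor takes a single divisibility test, while finding a factor by brute-force trial division takes a number of steps that grows exponentially in the number of digits.

```python
# Minimal sketch of the NP idea: verifying a solution is fast,
# while finding one from scratch may be slow. (Illustrative only.)

def verify_factor(n, p):
    # Checking a claimed factor: a single divisibility test.
    return 1 < p < n and n % p == 0

def find_factor(n):
    # Naive trial division: up to ~sqrt(n) steps in the worst case,
    # which is exponential in the number of digits of n.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # n is prime

n = 10007 * 10009                # a small composite of two primes
print(verify_factor(n, 10007))   # fast: True
print(find_factor(n))            # ~10,000 trial divisions later: 10007
```

Scaled up to a 10,000-digit number, the check stays essentially instantaneous, while the brute-force search becomes astronomically long.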
Now, NP hard problems are the problems where if someone gave you a magic box to solve those problems then you could use that magic box to efficiently solve any NP problem. That's-- you think that, well what are people smoking to come up with such a definition? But the concept of reducibility is one of the most fundamental that there is in computer science, the ability to-- we classify problems by seeing which ones can be reduced to which other ones. So NP hard problems are at least as hard as anything in NP.
And the famous class of NP complete problems consists of those that are both NP hard and themselves in NP. So these are the hardest problems in NP. And the great discovery that really started theoretical computer science rolling in the early '70s was that hundreds of the problems that people actually want to solve in real life involving optimization, planning, AI are actually in this NP complete class, and therefore, in some sense, they're all the same problem. Examples include the traveling salesperson problem, satisfiability, Sudoku puzzles, Super Mario, as well as things of less practical importance like airline scheduling. And then there may be problems in this sort of twilight zone between P and NP complete. And that is a point that we'll come back to later, very important for quantum computing.
Now, as you may have heard, the literally million dollar question is, does P equal NP? That is, can all of these NP problems be solved in polynomial time? This is one of seven problems for which if you solve it, the Clay Math Institute gives you a million dollar reward. Others include the Riemann hypothesis, the Poincaré conjecture, which was solved by Perelman-- although he refused the prize-- and four others.
Now, I would say that P versus NP is self-evidently the most important of all these problems. Well, I'll give you an argument for that. If P equaled NP, so if there were an algorithm to solve NP complete problems in polynomial time, and moreover, it were actually efficient in practice, say linear time or something, then well, you would solve that problem. But what that would mean is that you could also program your computer to solve the other six problems for you because you would simply have to ask it, well, is there a proof, say, at most a billion symbols long, written in some formal language that the computer can check? What P equals NP means is that if such a proof exists, then your algorithm will be able to find it in only modestly more time than it would take to write the proof down.
So what do we believe about this? Well, most computer scientists believe that P is not equal to NP. I like to say that if we were physicists, there's a difference in terminology. What we call a conjecture, they might call a law of nature. And we would probably just give ourselves Nobel prizes for the discovery of the law.
And if it later turned out that P equals NP, that would be OK, too. We would give ourselves more Nobel prizes for the law's overthrow. But mathematicians call things conjecture, so it's just a terminological difference. But certainly it seems like a very wild possibility that any cryptographic one-way function, for example, could be inverted without having to do brute force search at all over the possible keys. This would be a very different world than most of us feel like we live in.
So there's been an immense amount of work on the P versus NP problem. We do understand something about why, even if the classes are different, that is so hard to prove with today's mathematical tools. I have a 120-page survey about P versus NP that I put out less than a year ago. If you're interested, you can find that. But we could spend the whole lecture or even a whole course just talking about this P versus NP problem.
But for me personally, where things get even more interesting is when we bring physics into the picture. P versus NP itself is a purely mathematical question. Whatever the answer is-- well, it has some answer, and that answer has no dependence on the laws of physics any more than Fermat's Last Theorem does, right? There's either this polynomial time algorithm running on a Turing machine, or else there isn't one.
But we can then ask a broader question, which is, is there any physical means to solve NP complete problems in physical time-- sorry, in polynomial time, including maybe by going beyond the limits of the kinds of computers that we use today that are-- that the mathematical model of a Turing machine captures? Oh yeah, P versus NP, as you can see, has even influenced popular culture. It's been on The Simpsons, Futurama, and several other TV shows that are less good, so we won't talk about them.
So the key pre-supposition that ties the P versus NP problem to the physical world is something called the Extended Church-Turing Thesis. And in my formulation, that says that any physically realistic computing device can be simulated by a deterministic, or at least a probabilistic, Turing machine-- those are-- if you don't know what a Turing machine is, just imagine any programming language that you've ever heard of or seen, and that's equivalent to it-- with at most a polynomial overhead in time and memory. If we dropped that proviso "with at most a polynomial overhead," then we would get a statement simply called the Church-Turing Thesis, which says, loosely speaking, that anything that can be done in the physical world can be simulated using a Turing machine. It's not clear whether Church or Turing themselves ever said anything tantamount to that. But if they didn't, then they should have.
The extended version says that not only can nature be simulated by a Turing machine to any desired precision you like, but moreover, that it can be done by a simulation which incurs only a reasonable blow up in time and memory. So it's not exponentially slower than nature itself is. But this is not a theorem. This is not anything susceptible to proof. This is ultimately an empirical question about what kind of universe we live in. So it behooves us to ask, how sure are we of this thesis? What would a challenge to it even look like?
Well, let me walk you through a few possible challenges just so we get a sense for what they could look like before we come to quantum computing where, of course, you know that we're going. So one of my favorite proposals is from the '60s or so. And it simply says, we'll just take two glass plates and put pegs between them in whatever pattern you want, and then dip the resulting configuration into a tub of soapy water and take it out. And look at the pattern of soap bubbles that form between the pegs.
Now, typically what bubbles will do is that they'll relax into a lowest energy state, which often means minimizing the total length of bubble connecting the pegs. But minimizing the total length of line segments connecting a set of points in the plane is actually a famous problem called the Minimum Steiner Tree problem, and is known to be one of these NP hard problems. So now we're faced with a puzzle. Does a tub of soapy water do what all the world's supercomputers cannot? Could you break cryptographic codes using vats of soapy water? Why doesn't anyone try it?
Well, there was a discussion of this on the Internet some years ago. And some people said the same thing that I would say about it, which is that well, systems in nature like to minimize their potential energy, but they don't always succeed at doing that. For example, if you had a rock in a crevasse on a mountainside, it could reach a state of lower potential energy by rolling up first and then rolling down, but it's rarely observed to do that. So why shouldn't we say the same here? The soap bubbles could get trapped in a locally optimal configuration, which is not the global optimum.
And then someone came and said, OK, but this is just an academic computer science party line. I bet not one of you has actually done this experiment, and you have no idea what would happen. So that's what led to the one foray into experimental physics of my career, I guess.
So I used to take this thing around to talks. It was hell getting it through airport security. But-- by the way, it's great fun to try at home. If you do it, use Plexiglas instead of glass so you don't cut your hands. Ask how I know.
What you find is that with four or five pegs-- actually this typically does find the minimum Steiner Tree. It's fun to watch an NP complete problem being solved before your eyes. As you start adding more pegs, like six or seven, you find that indeed it does not always get to the optimal configuration. Sometimes there are even intermediate cycles of bubbles, which proves that it can't be minimal. Now, I didn't try every possible brand of soap, but I think there's some circumstantial evidence here that nature is not solving NP complete problems by magic.
This may sound silly. And yet every so often, you will read in the popular press about another way to solve NP complete problems that is basically tantamount to this. There was a thing called MemComputing, which is basically just this.
People talk about protein folding, which, under some formalizations, is an NP complete problem, and yet every cell in your body does it every second. So can biology evade the extended Church-Turing thesis? Well, I think the proteins that occur in real cells have been under enormous selection pressure to fold in a reliable way, to not get stuck in local optima. Even despite that, it sometimes happens anyway. Prions, the agent of Mad Cow disease, seem to be an example of that. But if you designed a protein where folding it into its lowest energy state would require proving the Riemann hypothesis, which, in principle, one could do because the problem is NP complete, my prediction is that it would not fold very well.
So everyone talks about quantum computers, but what about the other great theory of 20th century physics? Why not a relativity computer? So the idea here is very simple. You would start your computer working on some hard problem, then you leave the computer on Earth and you board a spaceship which accelerates to relativistic speed. You then decelerate and return to Earth. In Earth's frame of reference, billions of years have now passed, civilization has collapsed-- if it didn't long before then-- all of your friends are long dead, but if you can dig your computer out of the rubble, and it's somehow still connected to a power source, then you can get the answer to your hard problem.
But to me, this raises an immediate question which is, why doesn't anyone try that? I mean, if you're worried about your friends, then just bring them on the spaceship with you. I think that there's actually an interesting answer to this. We're asking what in fundamental physics rules this out? And I think that it actually has to do with the amount of energy that it would take to reach the relativistic speed that would be needed.
One can calculate that if you wanted to get an exponential computational speed up in this way, then you would need to accelerate to exponentially close to c, the speed of light. But to do that requires an exponential amount of energy. So in some sense, we've traded one exponential resource for another one. Before liftoff, presumably just fueling up your spaceship would already take an exponential amount of time. Where are you going to get this exponential amount of fuel from? Maybe there's some super duper compressed fuel, but that brings problems of its own with it, as we'll see shortly.
A closely related example is what I like to call the Zeno computer. This is a computer that would simply do its first step in one second, its second step in half a second, its third step in a quarter second, and so on, so that after two seconds, it would have done infinitely many steps. Again, why doesn't anyone try that? Well, in some sense, people do try it. There are people who try to overclock their microprocessor, run it much faster than the recommended speed. But some of you may know the problem with doing that, which is, if you run your microprocessor too fast, it will overheat, and it will melt. This is why computers have fans.
It's again interesting to ask, well, what is the fundamental limit on clock speed of computers? Is there one? The answer, I think, again has to do with energy. So as you run a processor faster and faster, you would need to cool it more and more. And this is why some supercomputers used to be cooled with liquid helium or liquid nitrogen. When the clock speed gets fast enough, we might as well forget about cooling and just talk about the energy inherent in the computation itself.
And as far as anyone knows, the fundamental limit here would occur around when your computer was doing one step per Planck time, which is about 10 to the 43 steps per second. The simplest model of a computer running at that kind of speed would be just a photon that would bounce back and forth between two mirrors that were separated by one Planck length, which is 10 to the minus 33 centimeters.
Now the trouble is such a photon, by E = hν, would have so much energy confined in so small of a region that it would exceed its own Schwarzschild radius, which is basically just a fancy way of saying that your computer would collapse to a black hole. I've always liked that as nature's way of telling you not to do something. But these are nice examples for how when you look at one or two aspects of physics in isolation, it can often look like you have computational superpowers. And it's really important to look at how all of the known laws of physics interact with each other.
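Here is a back-of-the-envelope sketch of that argument in Python, using rounded standard values for the physical constants; it is order-of-magnitude arithmetic only, not a careful general-relativistic calculation.

```python
# Rough numbers behind the "one step per Planck time" limit.
h = 6.63e-34              # Planck's constant, J*s (rounded)
c = 3.0e8                 # speed of light, m/s (rounded)
G = 6.67e-11              # Newton's constant, m^3 / (kg s^2) (rounded)
planck_time = 5.4e-44     # seconds
planck_length = 1.6e-35   # meters (= 1.6e-33 centimeters)

# One step per Planck time:
print(f"steps per second: {1 / planck_time:.1e}")        # ~1.9e43

# A photon confined to one Planck length has energy E = h*c / wavelength.
E = h * c / planck_length          # ~1.2e10 joules
M = E / c**2                       # equivalent mass
r_s = 2 * G * M / c**2             # Schwarzschild radius for that mass
print(f"Schwarzschild radius ~{r_s:.1e} m vs. box size ~{planck_length:.1e} m")
# The Schwarzschild radius comes out larger than the region the photon is
# confined to -- i.e., the "computer" would collapse into a black hole.
```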
That brings us to quantum computing. You knew it was coming. Here are some happy spin 1/2 particles. Again, I don't think that's what they really look like.
Now, in order to explain-- at first glance, quantum computing, especially if you've read anything about it in the popular press, sounds almost as science-fictiony and crazy as those other things that I talked about. And yet, we know that there are experimental groups all over the world that are currently racing to build this. And some of them are very optimistic that they're going to see actual speed ups within the next year or so.
So what gives? How is this different from the other examples? Well, in order to answer that question, I'll have to tell you something about quantum mechanics. Now, this scares some people because quantum mechanics has acquired a reputation for being somehow complicated or hard. And if you want to apply it to real physical systems, it can indeed be complicated.
Not being a physicist, I am free to let you in on a secret, which is that quantum mechanics is actually really simple once you take the physics out of it. The way that I think of quantum mechanics is that it is a certain generalization of the rules of probability. It's almost what you would be forced to invent even as a mathematician in the 1800s if someone told you to invent something that is like probability theory except that it involves minus signs. You have to make sense of the concept of numbers that are like probabilities, and yet can be negative or, for good measure, even complex numbers.
So Feynman, who is one of the previous Messenger lecturers that I could never live up to, used to say that everything about quantum mechanics is contained in the double-slit experiment, which is just the thing where you shoot photons one at a time at a screen with two slits in it, and you look at where they end up on a second screen. And what you find is that there are certain dark spots where a photon-- where the photon never appears. Where the photon appears is probabilistic. Sometimes it's one place, sometimes another. But that by itself is not the strange part. We could cook up 20 theories before breakfast that would just involve ordinary randomness.
The strange part is these dark patches where the photon never shows up. Because what turns out to be the case is that if you were to close off one of the slits, then the photon can appear in those places. So in other words, by decreasing the number of paths that the photon could take, you can increase the chance that the photon reaches a certain point.
That is the part that is strange. Forget about God playing dice, blah, blah, blah. The part that is strange is the way that these dice behave. That you can close off a slit, and that increases the chance that the photon gets to a place.
Now, physicists encountered more and more phenomena that were kind of like that over a 25-year period. And to make a very long story short, what they eventually realized is that nature operates on a different operating system than conventional probability theory. We have to upgrade the OS to involve complex numbers called amplitudes that are assigned to every possible event, for example, every possible path that the photon could take.
So the quantum mechanical story about what happens here is that the photon can reach a certain spot, say, by going through the first slit with a positive amplitude or by going through the second slit with a negative amplitude. And when that's the case, those two contributions can, as we say, interfere destructively and cancel each other out so that the total amplitude for that spot is zero.
The one other thing I've got to tell you is that the probability that you find the photon at a given place when you look for it there is equal to the squared absolute value of the amplitude. That's one of the basic rules of quantum mechanics called the Born rule. So if something has an amplitude of zero, then it never happens. Its probability is zero.
But if you close off one of the slits, then the amplitude becomes only positive or only negative. So then the probability, being the square of that, is a positive number, and then the thing can happen. Everything you've ever heard about the spookiness of the quantum world, everything you ever will hear about it, is just this thing on this slide over and over again, OK, interference of amplitudes.
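Here is a toy version of that two-slit bookkeeping in Python, assuming for simplicity equal-magnitude amplitudes of opposite sign for the two paths; the specific values are illustrative.

```python
# Toy two-path interference: add amplitudes first, THEN square (Born rule).

amp_slit1 = +1 / 2      # amplitude for reaching the spot via slit 1
amp_slit2 = -1 / 2      # amplitude via slit 2, with the opposite sign

# Both slits open: amplitudes cancel, so the spot is dark.
p_both_open = abs(amp_slit1 + amp_slit2) ** 2             # 0.0

# Slit 2 blocked: only one path contributes, so the photon can arrive.
p_slit2_blocked = abs(amp_slit1) ** 2                     # 0.25

# Ordinary randomness would instead add probabilities directly.
p_classical = abs(amp_slit1) ** 2 + abs(amp_slit2) ** 2   # 0.5

print(p_both_open, p_slit2_blocked, p_classical)
```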
So to be a little more precise about it, the central claim of quantum mechanics is that if you have any isolated object, and it can be in two perfectly distinguishable states, which let's call them 0 and 1-- and physicists like to surround them by these funny looking brackets for some reason-- then it can also be in a complex linear combination of those states, or as we call it, a superposition. So we would write that as a|0> plus b|1>, where a and b are the complex numbers called amplitudes which satisfy absolute value of a squared plus the absolute value of b squared equals 1. So if we were to consider real amplitudes only for simplicity, then the possible amplitudes are just the solutions of a squared plus b squared equals 1, in other words, a circle, the unit circle.
So I can have the 0 state, like this one. I could have the 1 state which is orthogonal to it, so this vertical direction here. And I can have any combination of the two, like this state 0 plus 1 over square root 2, what we call an equal superposition of 0 and 1.
Now if we observe, then we see 0 with probability absolute value of a squared, and 1 with probability absolute value of b squared. Furthermore, once you ask the system whether it's 0 or 1, then it makes up its mind and it sticks with whatever choice it made. So if it tells you it's 1, then from then on it's 1. Your act of measurement has changed the state. This is the famous collapse of the quantum state.
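A minimal NumPy sketch of those measurement rules, for a single qubit in the equal superposition; the fixed random seed and the variable names are just for illustration.

```python
import numpy as np

# A single qubit in the equal superposition (|0> + |1>) / sqrt(2).
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Born rule: outcome probabilities are the squared absolute amplitudes.
probs = np.abs(state) ** 2            # [0.5, 0.5]

# Simulate one measurement; afterwards the state "collapses" and sticks.
rng = np.random.default_rng(seed=0)   # fixed seed, purely for reproducibility
outcome = rng.choice([0, 1], p=probs)
collapsed = np.zeros(2, dtype=complex)
collapsed[outcome] = 1.0              # now definitely |0> or definitely |1>

print("outcome:", outcome)
print("post-measurement state:", collapsed)
```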
Now, what is the idea of quantum computing? Well, a system that can be in a superposition of 0 and 1, we call a quantum bit, or a qubit. I know Professor Mermin is here, and he spells qubit without the u. But this is how everyone else spells it.
Now the key point is that if I have an entangled state of, say, n of these qubits, then the rules of quantum mechanics say that I need a list of 2 to the n amplitudes to fully specify its state. That is one amplitude for every possible configuration of all n of the particles-- or all n of the qubits. So my state looks like this.
So for even just 1,000 particles, we're saying that nature, off to the side somewhere, has to maintain some scratch paper with 2 to the 1,000 complex numbers. That's more than could be written down in the entire observable universe. And if anything happens to the particles, well then nature has to cross off all of those numbers and replace them with new numbers. So that's a pretty staggering amount of computational work for nature to be going to just to keep track of a few particles. This is what quantum mechanics has told us since 1926 is the way the world works.
And chemists and physicists have actually known this for many decades. They've known it mostly as a practical problem when you're trying to simulate quantum mechanics using conventional computers. With any known simulation method, the time needed to do that, well, in the worst case, roughly doubles with each particle that you add. And so there is a very severe limit on the size of the quantum systems that we're able to simulate without resorting to approximation methods that don't always work.
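A quick way to see why the cost explodes is just to count amplitudes; here is a rough tally, assuming 16 bytes per complex number stored at double precision.

```python
# An n-qubit state is specified by 2**n complex amplitudes.
# At 16 bytes per amplitude, just writing the state down takes:
for n in (10, 30, 50):
    print(f"{n} qubits: 2^{n} amplitudes, {16 * 2**n:,} bytes")
# 10 qubits: ~16 KB; 30 qubits: ~17 GB; 50 qubits: ~18 petabytes.
# At 1,000 qubits, 2^1000 amplitudes is more numbers than could be
# written down in the entire observable universe.
```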
It was not until the early '80s that a few physicists, including, again, Feynman as well as David Deutsch, started saying, well, why don't we turn that lemon into lemonade? Why don't we build computers that themselves would operate on quantum mechanical principles that would have these superposition states and could exploit interference of amplitudes? Now, of course, they then faced the question, supposing we built such a computer, well what would it be good for?
At the time, they were really only able to suggest one answer to that question, which is that such a computer would be good for simulating quantum mechanics itself. That sounds a little tautological, yet if and when we get practical quantum computers, I think that this is actually the most important application of them still that we know about. I think this will probably be the biggest workhorse because it has so many applications to maybe designing high temperature superconductors, better solar cells, new kinds of drugs, any problem in physics, or chemistry, or nanoscience where you have many interacting quantum systems, and you need to get the predictions right.
Now at this point, I better address head on the fundamental misconception about quantum computing that is the thing that every popular article-- with few exceptions-- has said about it for the last 20 years, and which is wrong. I have been pushing this boulder up the hill on my blog for more than a decade. So now I'll do it again.
So what almost everyone wants to say is that a quantum computer is just like the computers that you already know, except that it can try every possible answer in parallel. It can try each answer in a different parallel universe. Well, that sounds really powerful. If you had that, it's easy to see, easy to explain to anyone why that would give you an advantage. But it also sounds absurd in a way. So can we pinpoint what is wrong with that? So I think we can.
Here is the issue. It's quite easy with a quantum computer to create an equal superposition over all the possible answers to your computational problem. That you indeed can do. And you can describe that, if you like, as each possible answer in a different parallel universe. I don't care if you call it that. Whether you do or don't is up to you.
But here's the problem. For the computer to be useful, at some point you've got to measure the thing. And if you just measure it in equal superposition, not having done anything else, then the rules of quantum mechanics, the Born rule, tell you that all you're going to get out will be a random answer. And if you just wanted a random answer, well, you could have picked one yourself with a lot less trouble. So the entire hope of getting a speed up for a quantum computer is to exploit the fact that amplitudes behave differently from probabilities. Amplitudes can interfere with each other.
The idea with every quantum algorithm is to choreograph things so that for each wrong answer, some of the paths leading to that answer have positive amplitude, and others have negative amplitude. So on the whole, they cancel each other out whereas the paths leading to the right answer should reinforce. All have the same sign. If you can arrange that, then when you measure, you'll see the right answer with high probability.
So that's the whole game, to use interference to boost the probability of getting the right answer. And probability is all we need. If I had a machine that has merely a 10% chance of factoring my number, well then, I just run it 10 times. And each time, I can check the answer for myself. So in some sense, a quantum computer is just a massively scaled-up version of the double-slit experiment.
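As a quick sanity check on that "merely 10% chance" remark, here is the standard repetition calculation, under the assumption that the runs are independent and that each answer can be verified (as it can for factoring).

```python
# Boosting a 10%-success randomized machine by repetition.
p_success = 0.10
for k in (1, 10, 50):
    print(f"{k} runs: P(at least one success) = {1 - (1 - p_success) ** k:.3f}")
# 10 runs already give ~65% odds of at least one success; 50 runs give >99%.
```

So a modest per-run success probability is as good as certainty once the answers are checkable.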
So now in computer science, one thing that we like to do is take any concept at all and associate to it one of these inscrutable sequences of capital letters. So that was done by my former advisor at Berkeley, Umesh Vazirani and his student Ethan Bernstein in 1993. They defined another of these complexity classes called BQP for bounded-error quantum polynomial time.
And this is basically just the quantum generalization of the class P. It's the class of all the decision problems that are solvable in polynomial time by a quantum computer, meaning a computer whose state is a superposition involving, say, n or n squared or so qubits, and that could perform a sequence of operations involving, let's say, just one or two of the qubits at a time. I won't give a formal definition.
By the way, the physicists have much better names for things, you know, quark, black hole, and so forth. We're stuck with BQP and NP. But it's just as interesting.
So, the famous discovery in the '90s that really got everyone excited about quantum computing was the result of Peter Shor that factoring integers is one of these problems that is in BQP. In other words, Shor gave a polynomial time algorithm-- actually about a quadratic time algorithm for finding the prime factors of a number. You could factor a 10,000-digit number using only something like 10,000 squared steps.
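To put rough numbers on that claim, here is the scaling comparison, ignoring constants and lower-order factors; the figures below are order-of-magnitude only.

```python
# "A 10,000-digit number in roughly 10,000 squared steps."
digits = 10_000

shor_steps = digits ** 2                  # ~1e8 elementary operations
print(f"Shor-style scaling: about {shor_steps:,} steps")

# Naive trial division would have to try divisors up to ~10**(digits / 2):
trial_divisors = 10 ** (digits // 2)
print(f"trial division: a step count that is {len(str(trial_divisors))} digits long")
```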
Now, this made many people excited about quantum computing who had not been before. Some of you may know that almost all of the encryption that we currently use to secure the Internet is based on the belief that factoring is a hard problem-- factoring, or a few closely related problems, which are also solvable using Shor's algorithm. So if you built a quantum computer, Shor is saying that you can break almost all of the encryption that we now use.
So let's draw a revised picture. We had P, NP, NP complete, BQP with this wavy border because everything quantum is spooky and weird. And BQP generalizes P. It snags at least some of the NP problems that are not known to be in P, including very, very important ones like factoring.
But now it's very important to understand that factoring is neither known nor believed to be an NP complete problem. In fact, it is only by exploiting very special structure in the factoring problem that people are able to use it as the basis for cryptography. And it's by exploiting much of that same special structure that Shor was able to give a quantum algorithm to solve the problem.
Now, this is all well and good, but for some reason, people keep asking me the question when I give these talks, well, can quantum computers actually be built? So it behooves me to tell you, well, what is the current state of the art in terms of building quantum computers, let's say, that are able to factor numbers using Shor's factoring algorithm?
Well, I'm proud to report that after about 20 years of experimental work and several billions of dollars by groups all over the world, we've now confirmed using Shor's algorithm that 21 equals 3 times 7 with high statistical confidence. Yeah, I think 35 could be on the horizon. Why is scaling up so difficult? Why is it-- small quantum computers have been demonstrated, why is it so hard to scale up and build one with, say, a million or a billion qubits?
Well, the central difficulty was recognized from the beginning. It's something called decoherence, which basically just means unwanted interaction between your quantum computer and its external environment. That interaction has the effect of prematurely measuring the computer's state, forcing it back down to a classical state. So it's not just a measurement by a conscious observer or a person that can force a qubit to collapse to either 0 or 1, it's actually any interaction with the wider world, a measuring device, a stray photon in the room. And what this means-- so the interference is something that quantum systems really like to indulge in in private, not when anyone is watching them.
So what this means is that if you want to build a quantum computer and have it work, you have to keep it incredibly well isolated from its environment. But at the same time, the qubits have to be very strongly interacting with each other, and not only that, but in a very precisely choreographed pattern. It's those twin requirements that make this so hard.
And it's so hard that from the beginning there have been a few skeptics who have said this can never be done. This is fundamentally impossible. Yes, you could build a quantum computer maybe with five qubits, then people do that, then they say, OK, maybe you could do with 10 qubits, then people do that, then they say, OK, maybe you could do it with 20 qubits, and people do that, then they say, OK, fine, maybe you could build it with 50 qubits, but you're not going to get beyond that. Something is going to kick in that will prevent you from going any further.
Now, I like to say that for me personally, the number one application of quantum computing, more than factoring or even quantum simulation, is just proving those people wrong. That's really what I care about. Everything else is just the cherry on the sundae. Let's demonstrate explicitly that the extended Church-Turing thesis can be violated, that these kinds of machines can be built, and then let's ask if we can do something useful with them.
Now, there was a big discovery in the mid '90s that convinced most of the field that building a quantum computer is quote unquote, "merely a staggeringly hard engineering problem" not something that presents a fundamental difficulty in physics. That discovery was something called the Quantum Fault-Tolerance Theorem. And basically what it says is that if you want to build a scalable quantum computer, you don't have to get the decoherence down to zero. You don't have to perfectly isolate your machine. You merely have to make the decoherence very, very small, depending on details, maybe a 1% chance of an error per qubit per time step or something like that.
And once you get the error small enough, you can use very clever error correcting codes to encode the quantum information you care about non-locally across many physical qubits in such a way that even if, let's say, any 1% of your qubits were to leak out into the environment, the quantum information that you cared about would still be recoverable from the other 99%. And you could constantly measure-- monitor the qubits in a way that would only tell you whether an error has occurred, and if so, how to fix it. The monitoring would not tell you the rest of the information about the quantum state, which you don't want to know because it would be like opening the oven before your souffle is done. It would collapse it.
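As a loose classical analogy for how redundancy buys robustness-- with the big caveat that real quantum codes must do this without ever reading out the encoded information-- here is a majority-vote repetition code in a few lines of Python; the parameters are illustrative.

```python
import random

# Classical analogy only: spread one bit across many copies, so that a
# small fraction of corrupted copies doesn't destroy the information.

def encode(bit, copies=101):
    return [bit] * copies

def add_noise(codeword, error_rate=0.01):
    # Each copy is independently flipped with probability error_rate.
    return [b ^ (random.random() < error_rate) for b in codeword]

def decode(codeword):
    return int(sum(codeword) > len(codeword) / 2)   # majority vote

random.seed(1)                             # fixed seed, for reproducibility
received = add_noise(encode(1))
print("decoded bit:", decode(received))    # still 1, with overwhelming probability
```

Quantum fault tolerance replaces the copies with entangled encodings, and the majority vote with measurements that reveal only which error occurred, never the encoded state itself.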
So given the Fault-Tolerance Theorem, what I like to say is that the idea that you can eventually build a quantum computer is the conservative possibility. It's the one that just takes existing physics for granted. If it turns out to be impossible for some deep reason, that's actually 100 times more exciting. I hope that happens. But, you know, I'm not betting on it.
So now the other central question that people ask about quantum computing is a more theoretical one. Supposing we build this, what are the ultimate limits of what it can do? And particularly relevant for this talk, could quantum computers solve the NP complete problems in polynomial time, and thereby achieve this holy grail of computer science?
Well, we don't know. But most of us think that the answer is no. Can we prove it? Well, hell no we can't prove it. I mean, we can't even prove that classical computers can't do it. That's the P versus NP problem, so of course, we can't prove that quantum computers can't. But we can say something about it.
So aside from Shor's algorithm, maybe the second most important quantum algorithm is called Grover's algorithm. And this is an algorithm that actually solves a much more general problem than Shor's does. It can take any space whatsoever of n possible solutions, including solutions to an NP complete problem, or entries in an actual physical database, and it can search for a solution that is valid, that meets your requirements, in only about the square root of n steps, so quadratically faster than you would need classically where in the worst case, you would have to just look at all n of them, or on average, about n over 2. This is assuming that you are able to check many different solutions in superposition, as we would be able to do with an NP complete problem.
So what this lets us do is, instead of needing, say, 2 to the thousandth time to solve some NP complete problem, say with 1,000 independent variables, you could say, now with Grover, you merely need 2 to the 500th time. So you can see this pushes forward the limit of what you can do, but it doesn't change an exponential into a polynomial.
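The arithmetic of that quadratic speedup, spelled out (illustrative problem sizes only):

```python
# Brute force over 2**n candidate assignments vs. ~2**(n/2) Grover-type queries.
for n in (40, 100, 1000):
    print(f"{n} variables: classical ~2^{n} steps, Grover ~2^{n // 2} steps")
# 2^1000 becomes roughly 2^500: an enormous saving, but still exponential.
```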
And now, an important result by Bennett, Bernstein, Brassard, Vazirani, which actually predates Grover's algorithm, says that Grover's algorithm is optimal. For this black box search task, if you know nothing further about the structure of your problem, then even a quantum computer is going to need at least square root of n steps to concentrate a large amount of amplitude on the right solution. You can make a superposition over all the solutions, but then the point is you've got to do something to collect more amplitude on the solution.
And that collecting the amplitude, focusing it in on the solution, takes time just like it would in the classical world. Here it takes less time, a square root of n steps instead of n, but the difference is merely quadratic rather than exponential. So what this means is that if there were a fast quantum algorithm for NP complete problems, it would somehow have to exploit their structure just as a classical algorithm would have to.
So then the question becomes, well, is there a quantum algorithm that would exploit the structure of NP complete problems? And there's been one big proposal for such an algorithm. This is something called the Adiabatic Algorithm, proposed by Farhi and others, actually while I was an undergrad at Cornell, almost 20 years ago. And if you know what simulated annealing is-- that classical local search heuristic-- you can think of this as a quantum analog of it. It's like a local search method, possibly souped up by the use of quantum mechanics.
You may have also heard that there is a company called D-Wave in Vancouver that's raised hundreds of millions of dollars. It's been on the cover of Time magazine. And its entire business model is to build very special purpose quantum computers-- quantum annealing devices, they call them-- whose only role is to run some noisy version of this Adiabatic Algorithm, to try to solve NP complete optimization problems. Their latest model has about 2,000 qubits. It sounds pretty impressive. And they've sold a few of them, one to Google, one to Lockheed Martin.
And so what is this Adiabatic Algorithm? Well, you start out applying some operation with an easily prepared lowest energy state, and you leave your system in that lowest energy state. These operations are called Hamiltonians. You slowly change it to a different Hamiltonian whose lowest energy state encodes the solution to the NP complete problem that you would like to solve. And now, there is actually a theorem that says that if you go slowly enough from one to the other, then you must end up in the ground state of your final Hamiltonian, which would then mean that you would have solved your NP complete problem.
So what's the catch? Well, so the hope from the beginning has been here to take advantage of an effect called quantum tunneling. So there are cases where when I do local search classically, I can get stuck, as I said before, in this crevasse in the mountainside. And even simulated annealing, which will sometimes go upwards, could get stuck here for an exponential amount of time. But depending on the height and the width of this spike, there are cases where a quantum particle, so quantum adiabatic evolution, would, as the physicists say, tunnel past the spike, or just sort of get over it and down to the global optimum in only polynomial time.
But the problem is-- well, there are a couple of problems. One is that the running time of this Adiabatic Algorithm, in general, depends on a quantity called the minimum spectral gap. That's the gap between the smallest and the second smallest eigenvalues of this Hamiltonian. So these two eigenvalues, as functions of time-- however close they get to each other, you have to run the algorithm for roughly the inverse of that amount of time. So then the question became, well, how close do these functions get to each other? And the hope was that, well, they wouldn't get too close, and then this algorithm would solve NP complete problems in polynomial time.
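Here is a one-qubit cartoon of that setup, just to illustrate what "spectral gap" means; a real instance would use Hamiltonians acting on 2^n dimensions built from the NP complete problem, and the H0 and H1 below are toy choices made up for this sketch.

```python
import numpy as np

# Toy adiabatic interpolation H(s) = (1 - s) * H0 + s * H1 on a single qubit.
# H0 has an easy-to-prepare ground state; H1's ground state "encodes the answer."
H0 = -np.array([[0.0, 1.0], [1.0, 0.0]])   # ground state (|0> + |1>) / sqrt(2)
H1 = -np.array([[1.0, 0.0], [0.0, -1.0]])  # ground state |0>

for s in np.linspace(0.0, 1.0, 5):
    H = (1 - s) * H0 + s * H1
    energies = np.linalg.eigvalsh(H)       # eigenvalues in ascending order
    gap = energies[1] - energies[0]        # the minimum gap sets how slowly we must go
    print(f"s = {s:.2f}, spectral gap = {gap:.3f}")

# In this one-qubit cartoon the gap never closes; for hard instances of
# problems like 3SAT it can shrink exponentially with the number of qubits,
# which is exactly the difficulty discussed next.
```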
Farhi has told me a wonderful story, which is that he actually went to an expert in condensed matter physics and described the Adiabatic Algorithm. And physicists have decades of experience calculating the spectral gaps for their own reasons. And he said to this person, based on your experience with similar systems, do you think that this spectral gap will decrease polynomially or exponentially as a function of the number of particles? And the guy thought about it, and he said, I think it will decrease exponentially.
Now that was not the answer that Farhi wanted to hear. So he said, why? What's the physical reason for it? And the expert thought about it some more, and he said, it's because otherwise your algorithm would work.
[LAUGHTER]
I think there's actually something deep here, which is that after you've seen enough ways that nature makes it hard for you to solve NP complete problems, you might be tempted to just take the hardness of NP complete problems as a basic principle, like the second law, like no superluminal signaling, and then see what else you could explain by way of that principle. And I think to some extent, people have been doing that.
But in any case, a rigorous analysis indeed eventually confirmed that for hard instances of NP complete problems, like the 3SAT problem, this spectral gap can decrease exponentially. Which means that in any case, this Adiabatic Algorithm will not solve NP complete problems in polynomial time in the worst case.
So now the question is different. Now the question is, well, even if it doesn't always work, maybe it will sometimes be a lot faster than a classical computer. You know, one can hope. Maybe it will often be a little bit faster than a classical computer. The truth is we don't really know that well. We might not know for sure until we can build real quantum computers to test the thing out with because it's a heuristic.
Now to come back to the D-Wave device, I told you that they have 2,000 qubits. But-- well, one issue is that even if their qubits were perfect, we still don't know what kind of speed up this algorithm can give you. The other issue is that their qubits are very noisy. The reason why they were able to build 2,000 on one chip already is that they didn't care so much about maintaining-- the qubits maintaining their quantum coherence for a long time. If you don't care about that, it's a lot easier.
But the trouble is that as far as we can tell today from the independent experiments that have been done on their system over the last four or five years, their device does not seem to get a speed up for anything when you do a fair comparison, or at least no speed up that has to do with quantum mechanics as opposed to this being a piece of special purpose hardware that's pretty fast at simulating itself. It is an interesting piece of hardware from a physics and engineering point of view. There is quantum tunneling behavior that goes on there, but it looks like to get a speed up, you're going to need much higher quality qubits. I'm personally a lot more excited about 50 qubits that we really understand than about 2,000 qubits that we don't.
So this brings us to-- well, what I talked about yesterday was this very, very timely subject of quant-- with the unfortunate name of quantum supremacy, which basically just means getting a clear quantum speed up for something not necessarily anything useful, and thereby proving that quantum computers really can do something faster. So we've been trying-- some of us, a lot of what I've been doing in my own work is to try to lay the theoretical foundations for these sorts of experiments. We did this first with my student Alex Arkhipov for something called boson sampling, which is a very rudimentary type of quantum computer involving just a network of beam splitters through which a bunch of photons pass. And this has been experimentally demonstrated with six photons, although it's difficult to scale it up.
And then what's going to be happening this year is that Google and IBM are racing to build superconducting quantum computing chips with about 50 qubits. 50 is not enough as far as we know to do anything really useful, but it should be enough to do something that is quite challenging for a classical computer to simulate, as in needing about 2 to the 50th steps. And one of the main things that we've been doing is trying to tell our friends at Google and IBM what exactly they should do with these systems as they come online over the next year-- something that we are as confident as possible is hard to simulate with a classical computer, and therefore establishes the separation. Because ultimately it is a question of complexity theory. We don't know for sure that any of these things are hard to simulate, but we can try to base that on our beliefs about complexity classes.
So, I had another slide about the complexity of decoding the Hawking radiation that comes out of a black hole, and how actually the limits of quantum computers have helped us understand this subject. Because I am running out of time, I think I'm going to just tantalize you with this. And then if someone wants to ask me about it, I can come back to it in the question period.
But this is an amazing connection between quantum computing, the stuff I've been telling you about, and quantum gravity. That's been a very hot subject in the last five years or so, and also something that I've gotten excited about, and part of what I've been working on. And more broadly we've been able to use ideas from quantum computing to get new insights into condensed matter physics, even into classical computer science. So even before we can build a practical quantum computer, I think the ideas can be ported into other fields. And there's been a lot of technology transfer.
Summary. Quantum computers are the most powerful kind of computer allowed by the currently known laws of physics. If there's something more powerful, well then it depends on physics that we don't know yet. The first clear quantum speed ups, at least for contrived tasks, may be achieved within the next year.
To do something useful, like quantum simulation or certainly breaking cryptographic codes, that's going to take longer, but is also a serious prospect. I don't know if civilization will survive for another two years. But if it lasts for long enough-- well, that's my main uncertainty here-- I think eventually we'll get practical quantum computers, because why not.
Contrary to what you read, we expect exponential speed ups by a quantum computer only for certain special problems with special structure that lets us choreograph these interference patterns. For a much broader set of problems, we expect square root type of speed ups, the type of speed ups that you could get from Grover's search algorithm, for example.
In general, the pattern of what quantum computers can and cannot do, their capabilities and limits, is weirder, I think, than any science fiction writer would have had the imagination to invent. But at the end of the day, what gives me some confidence that this is real is that if it was made up by a science fiction writer, then they would say, well, it's just, you try all the answers in parallel. They would never say, well, you choreograph interference, and it lets you do factoring and discrete log, but not NP complete problems, like who ordered that?
But the limit-- there's a good side to the limits of quantum computers. They are what opened up the possibility of designing new cryptographic codes that not even a quantum computer would break. And they may even help to protect the geometry of space-time according to these recent connections. So let me stop there. Thank you.
[APPLAUSE]
PAUL: We do have time for a few questions.
SCOTT AARONSON: Yeah.
AUDIENCE: Would you talk about the relationship between quantum computing and Hawking radiation?
SCOTT AARONSON: Yeah, sure. Sure. All right.
PAUL: Before you do that--
SCOTT AARONSON: Yeah.
PAUL: --can go back two slides?
SCOTT AARONSON: OK. This one?
PAUL: No.
SCOTT AARONSON: This one? This one?
PAUL: No.
SCOTT AARONSON: This one?
PAUL: No, with the five qubit x slide.
SCOTT AARONSON: I don't know which slide you're talking about.
PAUL: Go forward.
SCOTT AARONSON: Forward? This?
PAUL: Yes.
SCOTT AARONSON: OK.
PAUL: So, I've seen this twice and you talk about this like it's a 50-qubit array. I just want to make it clear for people who don't recognize this. This is a chip that was produced by Martinis's group while he was still at Santa Barbara. He's now working on this for Google.
SCOTT AARONSON: Yeah.
PAUL: But it's worth pointing out that these five cross-shaped objects there are the qubits.
SCOTT AARONSON: Yeah.
PAUL: And so this is the five-qubit device.
SCOTT AARONSON: OK. Thanks Paul. The issue is that I've been giving versions of this talk for years, and I should update this image in my PowerPoint.
PAUL: No, but it's still useful that there's a clear, physical picture of what a qubit looks like.
SCOTT AARONSON: Yeah. No, I've even-- I've been to their lab. I've looked at their 22-qubit one, which was their latest, in the microscope. Of course, if they had put some dryer lint there, they probably could have fooled me. But yeah, I think 49-qubit ones, in like 7-by-7 arrays of qubits, are being fabricated right about now.
OK, so then the Hawking radiation thing. So now a famous question, Hawking-- who's that guy if you haven't seen him-- asked in the 1970s is how can information escape from a black hole? So quantum mechanics, if it applies to everything in the universe, then all physical processes should be reversible. From a modern point of view, even measurement could be seen as just a physical interaction in which you become entangled with the system that you're measuring. And with a sufficiently advanced apparatus, even that might be reversible.
So quantum mechanics is based on conservation of information. No information ever gets destroyed. But a black hole seems incompatible with that, manifestly. Things hit the singularity, and poof, they're gone.
So this problem has driven a lot of fundamental physics for the last 40 years. And a major idea about how to resolve it, developed by Lenny Susskind, Gerard 't Hooft, and others in the 1990s, is called Black Hole Complementarity. It basically says that there are two different, dual descriptions of a black hole, appropriate to different observers.
An observer outside of the black hole never needs to make reference to what is inside of the event horizon. So the outside observer could imagine that they just drop, say, an encyclopedia into a black hole, and it never even makes it past the event horizon. It sort of gets pancaked over the event horizon, scrambled up in some crazy way, and then eventually it fizzles out. The information comes back out because one of Hawking's most famous discoveries was that black holes eventually do evaporate. They emit what's called Hawking radiation. A black hole the mass of our sun would completely evaporate away after a mere 10 to the 67 years.
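That 10-to-the-67-years figure is roughly what the standard Hawking evaporation-time estimate for a Schwarzschild black hole gives; here is a quick back-of-the-envelope check in Python (a minimal sketch using textbook constant values, offered only as a sanity check of the number):

    import math

    # Hawking evaporation time for a Schwarzschild black hole of mass M:
    #   t = 5120 * pi * G^2 * M^3 / (hbar * c^4)
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    hbar = 1.055e-34   # reduced Planck constant, J s
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg

    t_seconds = 5120 * math.pi * G**2 * M_sun**3 / (hbar * c**4)
    t_years = t_seconds / 3.156e7
    print(f"{t_years:.1e} years")  # roughly 2e67, i.e. about 10^67 years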
Now, Hawking's calculation didn't give us any mechanism for the radiation to actually encode the information about what fell into the black hole. But we could say, well, that's just because of our ignorance. If we had a real quantum theory of gravity, then it would explain how the infalling information actually gets into the Hawking radiation and comes out.
An observer who jumped into the black hole would have a different perspective. In their description, the encyclopedia does go right past the event horizon, the event horizon is not even a very special place, and it continues on until it hits the singularity.
Now, how can we reconcile these two totally different views of what's going on? Well, I never really understood it. Then in 2012, a very big thing happened, called the Firewall Paradox, which you may have heard about. And it involved many of the world's great physicists agreeing that they didn't understand this either.
And in particular, it was a thought experiment which involved some observer, always named Alice, who stays outside of a black hole and collects the Hawking radiation that comes out of it for 10 to the 67 years. She has a really long grant. She routes these photons into her quantum computer, processes them, and measures them in some very specific way. And then she jumps into the black hole, for the sake of science, to see what happens.
And if you accepted all of the reigning ideas about Black Hole Complementarity, then you were led to a firm prediction that what Alice ought to encounter is an end of space-time, not at the singularity, where it was supposed to be, but already at the event horizon. So she cannot pass through the event horizon contrary to what general relativity had told us for a century. So this was a problem.
And there were hundreds of papers that came out asking: what was the wrong assumption that led to this absurd-seeming prediction? How do you get out of this? And to me, one of the most interesting responses to this paradox was by Daniel Harlow and Patrick Hayden. Harlow is a string theorist; Hayden is a quantum computing person.
And they teamed up, and they said, well, yes, it's true that there is some measurement, in principle, that one could apply to this Hawking radiation whose effect would be to destroy the geometry of space-time at the event horizon. The Hawking radiation is somehow dual to, or encodes, what's inside the black hole itself, so by operating on one, you change the other. You destroy the smooth space-time at the event horizon.
But then they asked, well, how hard would it be to actually do that? What would Alice actually need to do, as a down-to-earth engineering matter? And what they said is that there is very strong evidence that, in order to create this problem, Alice would have to perform a quantum computation that is exponentially long. She'd have to solve a problem that's exponentially hard even for a quantum computer, which means not something she could do in a mere 10 to the 67 years, but something that would take more like 2 to the power of 10 to the 67 years.
So the black hole would have long ago evaporated before she had made a dent in the problem. So maybe there's no problem. Now, whether this actually resolves the paradox or not is a little above my pay grade.
But when it came to the central technical question, why is Alice's problem exponentially hard? Harlow and Hayden's evidence for that involves giving one of these computer-science-style hardness reductions. And they actually used the central result from my PhD thesis in 2004, which is that in the black-box setting, the setting that we talked about with Grover's algorithm, a quantum computer would need a large amount of time not only to solve Grover's search problem, but also to find collisions in a long list of numbers in which there are many collisions to be found.
So also for this task of finding collisions, let's say in a cryptographic hash function or something like that, if you treat the hash function as a black box, then again a quantum computer only gets a polynomial advantage over a classical computer, not an exponential one. In that case, the quantum running time is only about the classical running time raised to the 2/3 power: roughly the cube root of the search space instead of the square root.
So what Harlow and Hayden said was that, for Alice to create this firewall, she would have to do a computational task on the Hawking radiation which basically involves finding collisions in a function with exponentially many different inputs. And that is why the problem is so hard.
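For a concrete feel for what "finding a collision" means, here is a minimal Python sketch (the toy_hash function and its bucket count are made up for illustration; the real argument treats the function purely as a black box):

    import hashlib

    def toy_hash(x, buckets=2**20):
        # A many-to-one function: far more inputs than outputs,
        # so collisions are guaranteed to exist.
        digest = hashlib.sha256(str(x).encode()).digest()
        return int.from_bytes(digest, "big") % buckets

    def find_collision(buckets=2**20):
        # Classical "birthday" search: remember outputs until one repeats.
        # This takes about sqrt(buckets) evaluations on average.
        seen = {}
        x = 0
        while True:
            h = toy_hash(x, buckets)
            if h in seen:
                return seen[h], x  # two distinct inputs, same output
            seen[h] = x
            x += 1

    print(find_collision())

Treating the function as a black box, a quantum computer can find a collision in roughly the cube root of the number of outputs (the Brassard-Hoyer-Tapp algorithm, with a matching lower bound of the kind mentioned above), so the improvement over the classical square-root search is only polynomial.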
So after this-- I don't really know anything about string theory, or quantum gravity, or black holes, but now these people were speaking our language, and I had no choice but to get involved as a sort of computer science consultant to the string theorists. Nowadays they will come, and they'll say, well, what about this? Is this hard? And we'll say, well, what assumptions are you willing to make?
So that's been a really exciting thing, that the string theorists, at least the ones I know, barely talk about strings anymore. What they're talking about is qubits, entanglement, quantum circuits. So now we have no choice but to talk to them.
[LAUGHTER]
PAUL: So that question put me in a quandary as the host here, because I knew it was a 10-minute question, and it was a complex decision problem whether to invite any short questions first.
I want to make an announcement. We're going to have a reception on the fourth floor of PSP with a limited supply of desserts and snacks. And I feel compelled just to ask if there are any other quick questions, if there's some [INAUDIBLE] in the audience, there's a [INAUDIBLE].
[LAUGHTER]
Going once, going twice, OK, we will reconvene upstairs.
[APPLAUSE]
In this lecture, recorded Nov. 28, 2017 as part of the Messenger Lecture Series, Scott Aaronson offers a crash course on quantum computing, which seeks to exploit the strange rules of quantum physics to solve certain problems dramatically faster than we know how to solve them with any existing computer. Aaronson is the David J. Bruton Centennial Professor of Computer Science and director of the Quantum Information Center at the University of Texas at Austin.