PAUL GINSPARG: OK, so while people are slowly still shuffling in, I get to announce that this is the first of three Bethe lectures to be given by John Preskill, who is currently the director of the Institute for Quantum Information and Matter at Caltech. The Bethe lectures are given in honor of Hans Bethe, a member of the department here from the mid-1930s to the mid-1970s.
I'm going to defer the full-blown version of Hans Bethe's many achievements, including the heartwarming slideshow of Hans Bethe as you've never seen him, namely as a youth. And suffice it to say here that he wrote a major article in every decade of his more than 70 years as an active physicist. That included, shortly after his arrival at Cornell, explaining how stars burn hydrogen to helium, work which created the field of nuclear astrophysics and for which he won the Nobel Prize in 1967.
He took a few years of leave from Cornell during World War II to be director of the Theoretical Division at Los Alamos, where he reported to Oppenheimer. And among the physicists who reported to him was Richard Feynman. He later became a prominent arms control advocate. And he was the major pillar in building the physics department here to the world class status it continues to enjoy today.
Moreover, he fostered the informal collegial environment that remains one of its defining characteristics. Instead of expanding on all of that, I'm going to relate a personal anecdote from my time as a graduate student here. This is not included in any of the official biographies, but it does illustrate his uniquely supreme discernment.
He was retired in the late 1970s when I was a graduate student here. But we all had labs over in Newman. I think that's that direction.
And one Sunday afternoon, I found him standing by a photocopy machine looking somewhat stymied. And this was the pre-digital era. So like any self-respecting graduate student at the time, I was expert in the care and feeding of photocopy machines, quickly cleared the paper jam, gave him a few other pointers in applied technology. And he was off and running.
The next day, his administrative assistant, Velma Ray-- probably then still called a secretary-- told me that Hans had related to her how he'd been in the previous day and was having difficulties. But he was helped by a quote, "very nice graduate student" unquote. Hans Bethe was the only person ever to have made that observation.
[LAUGHTER]
That wasn't supposed to be funny.
[LAUGHTER]
OK. So I also happened to know John Preskill during that period, having met him when he arrived at Harvard in 1975 to be a graduate student. He arrived from Princeton. And by a strange coincidence, just before this talk, he told me his own story, his own adventure in pedagogic applied technology where, as an undergraduate and apparently with no uncertainty whatsoever, he demonstrated to Werner Heisenberg how to operate a coffee vending machine.
[LAUGHTER]
Did I get that right? OK. He went on at Harvard to complete his thesis in 1980 under the tutelage of Steven Weinberg, a former Cornell undergraduate. He remained at Harvard as a member of the Harvard Society of Fellows, then junior faculty.
We overlapped there again during that period. And then he left for Caltech in 1983, where he has remained since and is currently the Richard P. Feynman professor of theoretical physics. And Feynman, too, spent quality time here at Cornell after being recruited by Bethe.
John began his career in particle physics and cosmology. But in the 1990s, he became excited about the possibilities of using quantum physics to solve otherwise intractable computational problems. And he's been a leader and major proponent in this field.
His second lecture will be a public lecture on Wednesday night at 7:30. His third lecture will be more technical on simulating quantum field theory with a quantum computer. That'll be Friday at 1 o'clock.
Among his many accomplishments, John is responsible for the nomenclature quantum supremacy, a subject you can follow via his subreddit alt.right.quantum.
[LAUGHTER]
Let us welcome John Preskill.
[APPLAUSE]
JOHN PRESKILL: Thank you, Paul. I'm sometimes asked, what was Paul Ginsparg like when he was a student? And the answer is just like that.
[LAUGHTER]
Like many physicists, I admired Hans Bethe. In the '80s and '90s, my early years at Caltech, he was an annual visitor and would come and tell us about his latest advances in understanding how supernovae explode. He kept a zest for doing science and continued to rack up scientific accomplishments till very late in life. He really was a great man. So I'm honored to be the Bethe lecturer this year.
My topic today is quantum information science, a kind of synthesis of three of the pillars of 20th century science, computer science, quantum physics, and information theory. And this is a broad topic which includes among its facets using quantum strategies to improve measurements, using principles of quantum physics to protect our privacy, distributing quantumness around the world, and using quantum simulators and quantum computers to explore the physics of very complex many particle systems and to solve hard problems. And I'll touch on all these aspects. But the main emphasis particularly in the first part of the talk will be on the prospects for solving problems or learning new things from near-term quantum information processing platforms.
The way I like to look at the field of quantum information science is we're really in the early stages of the exploration of a new frontier of the physical sciences, what we might call the complexity frontier or entanglement frontier. We are realizing and perfecting the capability of creating and precisely controlling very complex quantum states of many particles so complex that we can't simulate them with our most powerful digital computers or describe their properties very well using existing theoretical ideas. And that opens new opportunities for discovery.
Our conviction that this frontier will be fruitful to explore rests largely on two central ideas-- quantum complexity, our basis for thinking that quantum computers will be powerful, and quantum error correction, our basis for thinking that quantum computers can be scaled up to large devices that can solve hard problems. And both of those ideas rest on the underlying concept of quantum entanglement. That's the word we use for the characteristic correlations among the parts of a quantum system that are different from the correlations we encounter in ordinary experience.
You can think about it this way. Imagine a book which is many pages long. Let's say 100 pages. And if that's a classical book written in bits, you can read the pages one at a time.
Every time you read another page, you learn another 1% of the content of the book. And after you've read all the pages one by one, you know everything that's in the book. But suppose instead that it's a quantum book written in qubits. And the pages are very highly entangled with one another.
That means that when you look at the pages one at a time, you see just random gibberish revealing almost nothing that distinguishes one highly entangled book from another. And even after you've read all 100 pages one by one, you know almost nothing about the content of the book. And that's because the information isn't encoded in the individual pages. It's encoded almost entirely in the correlations among the pages.
And to read the book, you have to make a collective observation of many pages at once. That's quantum entanglement. And it's very different from any notion of correlation that we're directly familiar with.
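A standard textbook illustration of that point (not part of the talk): for an n-qubit GHZ state, every single "page" looks maximally random, even though the global state is perfectly definite.

```latex
|\mathrm{GHZ}_n\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle^{\otimes n} + |1\rangle^{\otimes n}\bigr),
\qquad
\rho_k = \mathrm{Tr}_{\text{all but }k}\,|\mathrm{GHZ}_n\rangle\langle\mathrm{GHZ}_n| = \tfrac{1}{2}\,I .
```

Each single-qubit "page" is the maximally mixed state I/2, so reading the pages one at a time reveals nothing; the content lives entirely in the correlations.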
Now, to write an entangled quantum book, we can't use ordinary bits. We have to use what we call qubits. And there are many different ways in which a qubit can be realized physically of which I've listed a few here.
A qubit can be carried by a very simple object, an elementary particle like a photon, which carries information in its polarization state, or a single electron, which has information encoded in the orientation of its spin, or a single atom, which might be in either its ground state or some long-lived excited state. But a qubit can also be encoded in a more complex physical system, like an electrical circuit at very low temperature, which conducts electricity without resistance. And all these encodings are really quite remarkable.
We can encode information in such a simple system, an elementary particle. Or we can encode the information in a much more complex system in the collective motion of billions of electrons. And yet that object behaves, for information processing purposes, like a single atom.
When we speak of quantum complexity, what we have in mind is the overwhelming complexity of describing in ordinary classical language all the correlations among a system of many qubits. Now, that complexity does not in itself guarantee that we can use quantum computers to do useful things. But we have several kinds of arguments indicating that quantum computers will be powerful.
One is that we know of problems which we believe to be hard for classical computers, but for which efficient quantum algorithms have been discovered. The most famous example is the problem of finding the prime factors of large composite integers. We think factoring is hard classically because people have been trying for decades to find better algorithms for factoring, but the best algorithms we have still run in a time which grows superpolynomially in the number of digits of the number we wish to factor.
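For concreteness (my own summary, not figures from the talk): the best known classical method, the general number field sieve, has a heuristic running time that grows superpolynomially in the number of digits of N, while Shor's quantum algorithm runs in polynomial time, roughly

```latex
T_{\text{classical}}(N) \approx \exp\!\Bigl[\bigl(\tfrac{64}{9}\bigr)^{1/3}(\ln N)^{1/3}(\ln\ln N)^{2/3}\Bigr],
\qquad
T_{\text{Shor}}(N) = O\!\bigl((\log N)^{3}\bigr).
```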
The theoretical computer scientists have also provided arguments based on plausible complexity theory assumptions, indicating that we can do a relatively simple quantum computation and then measure all the qubits and that, by doing so, we are sampling from a probability distribution that we can't sample from by any efficient classical means. But perhaps most tellingly, we don't know how to simulate a quantum computer using a classical computer efficiently. And that's not for lack of trying.
Physicists and chemists have been trying for decades to find better methods for simulating the behavior of highly entangled quantum systems, but the best algorithms we have still have a cost that scales exponentially in the number of qubits in the system. Now, we shouldn't think that the power of quantum computing is unlimited.
We don't expect, for example, that a quantum computer would be able to find exact solutions to worst case instances of hard optimization problems. But I think it's one of the most interesting differences between quantum and classical that I know of: that a classical computer cannot efficiently simulate a quantum computer, and that there are problems which are classically hard but quantumly easy. So it's a compelling question to understand better what problems are in that class of quantumly easy and classically hard.
And for a physicist, there is a natural place to look. That is the problem of describing the behavior of highly entangled states of many particles. Some years ago, two great physicists, Laughlin and Pines, pointed out that we really have a theory of everything that matters in ordinary life.
It's the theory of electrons interacting electromagnetically with atomic nuclei. We know exactly the equations describing that theory, the Schrodinger equation for that many-body system. But we can't solve the equations.
And they went so far as to say, no computer existing or that will ever exist can break this barrier. They pointed out that we have a simple correct theory of everything only to discover that it has revealed exactly nothing about many things of great importance. What they had in mind are situations in which quantum entanglement has an important role.
In fact, years before Laughlin and Pines made this statement, the physicist and former Cornell professor Richard Feynman had articulated a rebuttal when he said, "nature isn't classical, damn it. And if you want to make a simulation of nature, you'd better make it quantum mechanical." And Laughlin and Pines knew that Feynman had made this proposal years earlier, but they had dismissed the idea as impractical.
For a physicist like Feynman, what seems most important about quantum computing is that we expect that with a quantum computer we'd be able to efficiently simulate any process that occurs in nature. And we don't think that's true of ordinary digital computers, which cannot simulate highly entangled quantum matter. So that means with a quantum computer, we ought to be able to probe more deeply into the properties of complex molecules and exotic materials and also explore fundamental physics in new ways-- for example, by simulating high energy collisions of strongly interacting elementary particles, or the quantum physics of a black hole, or the quantum behavior of the universe right after the Big Bang.
Now, in defense of Laughlin and Pines, it's been about 40 years since Feynman made that call for the launch of a new field. And we're just now getting to the place where quantum computers are capable of doing potentially interesting things. So what's taking so long?
Building a quantum computer is really hard. And what makes it hard is that we require that three nearly mutually exclusive criteria all be simultaneously satisfied. We want the qubits in our device to interact strongly with one another, so we can process the information that they encode.
But we don't want the qubits to interact with the outside world, which would cause errors in the device. Except that we do want to be able to control the quantum system from the outside and eventually read out the result of a computation. And it's very hard to build an experimental platform that satisfies all of these desiderata.
The enemy of a quantum computer is what we call decoherence. Physicists like to imagine a quantum state of a cat, which is simultaneously alive and dead. But we never observe that type of coherent superposition of microscopically distinguishable objects.
And we understand why that's the case. It's because a real cat very rapidly interacts with its environment. And those interactions with the environment in effect immediately measure the cat, so that it becomes either completely dead or completely alive.
And that's the phenomenon of decoherence. And it helps us to understand why it's so that, even though quantum mechanics holds sway at microscopic scales, still classical physics provides a perfectly adequate description of what we observe in ordinary experience. Now, a quantum computer isn't going to be much like a cat.
Except that it, too, will inevitably interact with its surroundings even if we try hard to prevent it. And that could potentially lead to decoherence, and therefore the failure of a computation. So we can't expect to operate a large scale quantum computer unless we can protect against decoherence and other potential sources of error.
So really the problem, and what makes quantum information different from classical information, is that if you observe quantum information, you unavoidably disturb it in some uncontrollable way. So if we're going to process quantum information accurately, we need to keep it almost perfectly isolated from the outside world. And that sounds impossible, because our hardware is never going to be perfect.
But we've understood in principle how to do it using the idea we call quantum error correction. And the essence of the idea is that, if you want to protect quantum information from damage, you should encode it in a very highly entangled form. So that when the environment interacts with the parts of the system one at a time, like the pages of that 100 page book I described earlier, the environment can't access any of that protected information, and therefore can't damage it.
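A minimal textbook illustration of that encoding idea (not part of the talk, and far simpler than the codes used in practice): the three-qubit bit-flip code stores one logical qubit in an entangled state of three physical qubits,

```latex
\alpha|0\rangle + \beta|1\rangle \;\longmapsto\; \alpha|000\rangle + \beta|111\rangle .
```

Measuring the parities Z_1 Z_2 and Z_2 Z_3 locates any single bit flip while revealing nothing about alpha and beta, so the error can be undone without disturbing the encoded information. This toy code only handles bit flips; protecting against every kind of single-qubit error takes larger, more highly entangled codes, such as Shor's nine-qubit code or the surface code.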
And we've also understood how to process information that's encoded in this very highly entangled form. So at least in the not too distant future, we ought to be able to encode a quantum state of a cat which is simultaneously dead and alive and store that delicate superposition state in a quantum memory for an extremely long time. So where's the field now?
We are on the brink of what has been called quantum supremacy. That means quantum devices which are capable of performing tasks which surpass what we can do with our most powerful digital computers, irrespective of whether we really care about that task for other reasons or not.
I thought we should have a word for the new era of quantum information which is just starting to dawn. So I proposed a word NISQ. It's an acronym which means Noisy Intermediate Scale Quantum.
Intermediate scale indicates that we're talking about devices that are large enough, perhaps 50 qubits or more is enough, that we can't by brute force simulate the quantum system using our most powerful existing digital computers. But noisy reminds us that the hardware will be imperfect, that the quantum gates in the device will sometimes make errors. And these devices won't be protected by quantum error correction. So noise is going to limit their computational power.
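To give a rough sense of why roughly 50 qubits marks the brute-force boundary, here is a back-of-the-envelope memory estimate (my own arithmetic, and only a rough proxy, since clever simulation techniques can push somewhat beyond it):

```python
# Rough illustration: memory needed to store the full state vector of n qubits.
n_qubits = 50
amplitudes = 2 ** n_qubits             # 2^n complex amplitudes
bytes_per_amplitude = 16               # one complex double: 2 x 8 bytes
memory_pb = amplitudes * bytes_per_amplitude / 1e15
print(f"{amplitudes:.2e} amplitudes, about {memory_pb:.0f} petabytes of memory")
# ~1.13e15 amplitudes, ~18 PB -- beyond the RAM of today's largest supercomputers.
```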
Now, to physicists, NISQ is exciting. It gives us a tool for exploring the behavior of highly entangled many-qubit systems in a regime which, up until now, has been experimentally inaccessible. And NISQ devices might have other interesting applications, including ones of potential commercial interest. But we're not sure that's the case.
We shouldn't expect NISQ to change the world by itself. It should be regarded as a step towards more powerful quantum technologies that we hope to develop in the future. I do feel confident that quantum technology is going to have a transformative impact on human society eventually, but we're not sure how long that's going to take.
Now, I've emphasized the number of qubits as an important milestone for quantum computing. But we don't care only about the number of qubits; we also care about other things, in particular the quality of the qubits, and especially the reliability of the 2-qubit gates in the device which are used to process the information. And with the best quantum hardware we currently have, based on superconducting circuits or trapped ions, under the best conditions a 2-qubit quantum gate has an error rate of about one part in 1,000.
And we haven't yet seen whether error rates that low can be maintained with the existing technology in more complex, many-qubit devices. And that's going to limit the number of gates that we can perform and still read out the result of a computation with reasonable signal to noise. We shouldn't expect to do many more than 1,000 gates and get the right answer.
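A back-of-the-envelope version of that gate budget (my own arithmetic, not figures from the talk): with error probability p per gate and G gates, the chance that no gate fails is roughly (1 - p)^G.

```python
# Probability that a circuit of G gates runs with no gate error, for p = 1e-3.
p = 1e-3                               # two-qubit gate error rate, roughly current best
for gates in (100, 1000, 5000):
    success = (1 - p) ** gates
    print(f"{gates:5d} gates: P(no error) ~ {success:.3f}")
# 100 gates ~ 0.905, 1000 gates ~ 0.368, 5000 gates ~ 0.007
```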
So that limits the computational power of the technology. Eventually, as I've said, we will be able to do better, either by engineering much better qubits with lower gate error rates, or by using quantum error correction. But neither one of those things is likely to happen for a while.
Now, there's an emerging paradigm of how near-term quantum processors might be used, a kind of hybrid quantum-classical scheme. It makes sense to use our powerful classical computers to the extent that we can and then to boost that power using a quantum device. So the way this might work is we execute a relatively small quantum circuit, then measure all the qubits, send the outcomes of those measurements to a classical processor, which returns instructions to slightly modify the quantum circuit. And then that process is iterated until convergence, with the goal of minimizing some cost function for the purpose of solving an optimization problem.
As I've already remarked, we don't expect quantum computers to find exact solutions to hard instances of combinatorial optimization problems. But it's possible that they'll be able to find more accurate approximate solutions or to find those approximate solutions faster. When we apply this technique to optimization problems, it often goes by the name QAOA, the Quantum Approximate Optimization Algorithm. But it can also be used for the purpose of finding low energy states of a many-particle quantum system, in which case it's often called VQE, the Variational Quantum Eigensolver. But the idea is very similar in both cases.
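Here is a minimal sketch of that hybrid loop. The "quantum processor" is faked in numpy on a single qubit, and the toy Hamiltonian, parameter, and optimizer are illustrative choices of mine, not anything from the talk or from a particular hardware API; on real hardware the inner function would run a parameterized circuit and return measured expectation values.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * Z + 0.3 * X                      # toy one-qubit "problem Hamiltonian"

def quantum_processor(theta):
    """Stand-in for the quantum device: prepare |psi(theta)> = Ry(theta)|0>
    and return the (noiseless) energy expectation value."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
    return float(np.real(psi.conj() @ H @ psi))

# Classical outer loop: nudge the circuit parameter to lower the cost function.
theta, step = 0.0, 0.1
for _ in range(200):
    grad = (quantum_processor(theta + 1e-4) - quantum_processor(theta - 1e-4)) / 2e-4
    theta -= step * grad                   # simple gradient descent
print("variational energy: ", quantum_processor(theta))
print("exact ground energy:", np.linalg.eigvalsh(H).min())
```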
Now, should we expect that NISQ devices running this type of classical quantum hybrid algorithm will be able to outperform the best classical algorithms for solving similar problems? Well, we don't really know, but it's a lot to ask. Because the classical methods are very well-honed after decades of development and improvement. And the NISQ processors are just starting to be available. But we're going to try it and see how well it works.
There are lots of examples in classical computing of algorithms which turned out to be powerful and valuable, but where, in advance, before we experimented with those algorithms, theorists did not have compelling arguments that the algorithms would be useful. A current example is deep learning, which is finding many applications that are having a transformative effect on technology. But we still have a very limited understanding from a theoretical perspective of why for many applications of interest deep learning networks can be trained efficiently. And it's possible that quantum computing will be like that, that although we don't have compelling theoretical arguments in advance, that some of these optimization methods with quantum processors will be powerful.
We will find through experimentation that they have useful applications even in the NISQ era. But I emphasize again that, because of the imperfect gates, there will be severe limits on the size of the algorithms that we can run. Typically, we'll be dealing in the next few years with devices up to something like 100 qubits and with a circuit depth, which is likely to be less than 100 computational steps. And in order to find useful applications under those limitations, we'll need a vibrant dialogue between the algorithm experts and the application users.
Now, I've emphasized 50 qubits as a kind of milestone for quantum computing, where we reach, at least loosely speaking, the barrier where brute force simulation with a classical computer is out of reach. Actually, we already have quantum devices with thousands of qubits, specifically the devices built by D-Wave Systems. Now, the D-Wave machine is not a general purpose quantum computer.
It operates according to a different paradigm. It's what we call a quantum annealer. But it can be used to solve optimization problems. And it does solve them successfully. But so far, we don't have compelling theoretical arguments or persuasive experimental evidence that quantum annealers can outperform the best classical computers that we have solving the same problems using the best methods.
What's coming soon are potentially more powerful quantum annealers, which are what we call non-stoquastic. That just means they're harder to simulate classically. And that will perhaps improve the potential for quantum annealers to surpass classical devices in performing some tasks. But as of now, we don't have very strong theoretical arguments that quantum annealers will achieve speed-ups over classical devices. We're going to have to experiment with them more to understand their capabilities.
Now, it's going to be important in the near term to think about whether we can build some form of noise resilience into the algorithms that we run on these near-term platforms. We won't be able to do quantum error correction, which is too expensive in terms of additional qubits. But perhaps we can design our algorithms, so they're not quite so susceptible to noise.
Now, if I do a generic circuit with, let's say, G gates, a faulty gate anywhere in the circuit can cause the final outcome to have an error. And so I wouldn't expect to be able to run many more gates than one over the gate error rate and still get the right answer with a reasonable probability. But the algorithms that we run to solve real problems have some structure.
And depending on the nature of the algorithm and the circuit that we use to implement it, we might be able to tolerate a higher gate error rate. In some of the applications to simulation of physical systems, we can tolerate a constant probability of error per measured qubit in the final result, if only a limited number of faulty gates can cause an error in a given qubit. That might happen because the circuit has relatively small depth.
So the errors are not able to propagate so much before the final read out. And some of the methods that have been proposed for applying NISQ devices to quantum problems have some natural noise resilience built in, so that errors that occur early in the circuit can decay away before the final read out. So there's an opportunity, through cooperation between theorists and experimentalists, to understand better how to build some resistance to noise into our algorithms before we reach the era where we can do quantum error correction and make quantum computing truly scalable.
It's natural to wonder about the potential of combining quantum information processing with machine learning. It's possible that a quantum deep learning network built from quantum hardware would have advantages over a classical network for some purposes. Perhaps it could be trained more easily for some applications.
But we don't really know whether that's the case. We'll have to try it with NISQ technology and see how well it works. One argument which makes some people hopeful about applications of quantum machine learning is based on what we call QRAM or Quantum Random Access Memory.
And that just means that we can take a very long classical vector, a lot of classical data, and encode it very succinctly, with an exponential savings of space, in a small number of qubits. However, for typical proposed applications of QRAM to machine learning, there are input-output bottlenecks which cause trouble. In particular, loading classical data into QRAM is quite expensive.
And the cost of doing so could nullify any exponential advantage of dealing with QRAM in a quantum network. And furthermore, the output from a quantum network is a quantum state. And if it has n qubits, we can't get more than n bits of information out of it by measuring it per run.
It's probably more natural to think about quantum machine learning as a task which involves an input and an output which are quantum states rather than classical data. And that might have applications to characterizing or controlling complex quantum systems. It makes sense, if with a deep network we're trying to learn about the properties of a correlated probability distribution, that a quantum advantage will be more likely if that probability distribution involves some underlying quantum entanglement, as when we're trying to learn about a quantum state.
This idea of QRAM, of succinctly encoding classical data in a quantum state, also underlies another quantum computing application or algorithm which might have a number of uses. And that's a quantum algorithm that speeds up linear algebra. The input to the problem or to the algorithm, which we call HHL after the three authors who first discussed it about 10 years ago, is a very succinctly described, very large matrix and an input vector which is encoded as a quantum state.
So it's a vector of length n, but encoded in log n qubits. And the output of the algorithm is also a quantum state, which is the result of applying the inverse of the input matrix to the input quantum state. And for a fixed accuracy, that can be achieved in a time which scales logarithmically with the size of the matrix, which is an exponential improvement over classical matrix inversion.
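In symbols (my summary of the statement above; kappa is the condition number, s the sparsity, and epsilon the target accuracy):

```latex
|b\rangle \;\propto\; \sum_{i=1}^{N} b_i\,|i\rangle \ \ (\log_2 N \ \text{qubits}),
\qquad
|x\rangle \;\propto\; A^{-1}|b\rangle,
\qquad
\text{runtime} \;\sim\; \mathrm{poly}\bigl(\log N,\ s,\ \kappa,\ 1/\epsilon\bigr).
```

The exponential advantage in N is genuine, but, as the next remarks emphasize, it refers to quantum-state inputs and outputs, not to classical vectors.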
So that sounds pretty good. But the caveat is that both the input vector and the output are quantum states. So in the case of the output quantum state, we can gain some information about it by measuring it.
And then we can run the algorithm many times to characterize the output in more detail. But to encode classical data as the input to the algorithm, we would have to load that data into QRAM. And because that's expensive to do, it could nullify the potential exponential advantage of doing quantum matrix inversion.
Now, we do think that this algorithm is powerful and that it should have applications, because it's an example of what we call a BQP-complete problem. So that means that any problem that a quantum computer can solve efficiently can be encoded as an instance of this matrix inversion problem, which is sped up exponentially as I just described. So unless quantum computers have no advantage in any application over classical ones or at least no super polynomial advantage, then there must be hard instances of this problem, which can be sped up by a quantum computer.
But finding applications has been elusive if we really want an end-to-end application. Typically, in applications it's necessary to do some preconditioning of the matrix to reduce the ratio of its largest to smallest eigenvalue by some method. And that itself can be an expensive step.
Chances are that, although eventually quantum matrix inversion will have interesting applications, those won't be realized in the NISQ era. Because this algorithm is just too expensive, we think, to be executed in a useful way without error correction. Now, a problem that we're pretty confident is hard, as I've mentioned, is simulating many particle quantum systems.
We think that's hard, because people have been trying to find good algorithms for doing it for a long time, and they haven't succeeded. And with quantum computers, we will be able to do such simulations. Though it's not clear that we'll be able to get useful results out with real applications to physics during the NISQ era before we have error corrected quantum computers. But the long-term applications could be very impactful. Just the applications to chemistry could, in principle, improve human health, energy production, agriculture, and so on.
Now, classical computers are particularly bad at simulating dynamics, at describing how a highly entangled quantum system changes with time. And so it's in the simulation of dynamics that we think quantum computers are likely to have a big advantage. And physicists are hoping that there will be noteworthy improvements in our understanding of quantum dynamics resulting from near-term experiments using these NISQ experimental platforms.
An analogy that may be helpful is that in the '60s and '70s, when people started to be able to simulate non-linear dynamical systems, that seeded advances in our understanding of classical chaos. Right now, in comparison, we don't understand quantum chaos very well at all. Because we can't simulate highly entangled chaotic quantum systems. But experiments done with NISQ devices in the near term could teach us a lot about that.
Now, I should make a distinction between digital and analog quantum simulators. By an analog quantum simulator I mean a system with many qubits whose dynamics resembles the dynamics of some model system that we'd like to study, whereas a digital quantum simulator is a gate-based universal quantum computer which can be used to solve any quantum problem and can be used for physical simulation, in particular if suitably programmed. Analog quantum simulation has been a quite active subject already for over 15 years, but digital quantum simulation with gate-based quantum computers is really just getting started.
We can use similar experimental platforms for both purposes, like superconducting circuits, trapped ions, neutral atoms, and so on. The analog quantum simulators are becoming increasingly sophisticated and controllable. But they are limited by imperfect control of the Hamiltonian of the system. And they're probably best suited for studying properties that are reasonably robust with respect to small errors in the simulated Hamiltonian.
We can anticipate that eventually digital, that is circuit-based quantum simulators, will surpass their analog cousins, just because they can be more precisely controlled by using error correction. But that might not happen for a long time. So in the near term, analog quantum simulators could teach us a lot about many particle quantum physics.
I'll give a few examples of some of the recent developments to give you an idea of what's been happening in quantum platforms applied to physics. Now, one problem that physicists are very interested in is, how do quantum systems which start out far from equilibrium converge to equilibrium? For chaotic quantum systems, if I perturb the system, say by interacting with it locally, storing some information locally in the system, that locally encoded information spreads out very quickly and soon becomes invisible to local observables. Because it becomes encoded in the form of a very highly entangled state.
But the other extreme is what we call localization in which often, due to strong disorder in the system, it remains only weakly entangled. And locally encoded information spreads out slowly and the thermalization is delayed for a long time. Now, in recent experiments done by the Harvard group, led by Mikhail Lukin with a 51 atom analog quantum simulator based on neutral highly excited Rydberg atoms, they discovered an unexpected intermediate case, that there's a regime in which the system has two kinds of quantum states.
I call them Type A and Type B. The Type A states do thermalize very quickly, as I just described. But the Type B states instead undergo long-lived coherent oscillations and thermalize much more slowly. And that seems rather remarkable, because, otherwise, the Type A and Type B states seem to be very similar.
So we're still trying to understand what's going on in this system, but it's been suggested that this is the first experimentally observed instance of what have been called quantum many-body scars. The idea of scarring in few-body chaotic quantum systems has been discussed for a long time. Scars are signatures in the wave function, which are associated with unstable periodic orbits in the classical limit of the quantum system.
And in this case, something analogous may be happening, but in a many-body setting. So I think it's very encouraging that in this experiment, which was one of the first where arguably we were simulating dynamics with a quantum platform in a regime which goes beyond what we can simulate classically, a new and unexpected phenomenon was found. Now, I spoke of the difference between analog and digital quantum simulations.
There's something which is sort of in between, which we could call a programmable analog quantum simulator. And what I mean by that is a system which has some native Hamiltonian, but where we can switch from one Hamiltonian to another very rapidly, like we can switch in a circuit-based setting from one gate to another. And that can be useful for doing variational calculations looking for low energy states of a complex quantum system.
In fact, even if we can't control the Hamiltonian perfectly, if the control errors are reproducible, they don't have to interfere with the power of the platform for doing such variational calculations. There was a recent experiment by the Innsbruck Group working with a trapped ion device, the group led by Rainer Blatt. And they did a study of one-dimensional quantum electrodynamics on a lattice, a lattice with a total of 20 sites which they encoded in a 20-qubit processor with 20 ions in a trap.
And they studied the low energy spectrum of the system by doing variational calculations: turn on a Hamiltonian, let the system evolve for a certain amount of time, suddenly switch the Hamiltonian, evolve for some other amount of time, and so on through several levels of switched Hamiltonian evolution; then compute the expectation value of the energy in that state; and then try to optimize the energy by varying the control parameters, namely the Hamiltonians of the system and the evolution times. And in doing so, they were quite successful in reproducing the energy spectrum of low-lying states, which for this system with 20 qubits you could compute by exact diagonalization of the Hamiltonian. But what was encouraging was that there was no sign that decoherence in the device, even though there was no error correction, was interfering with getting good results. And they have plans to scale this up to about 50 ions, which is probably as far as they can go with the current technology.
I also wanted to mention one recent theoretical development. One thing that's a concern about these methods for doing hybrid classical-quantum optimization is just that the classical part of the optimization can be quite difficult. The search in the classical parameter space for the quantum state that produces the optimal value of the cost function is a hard classical problem. So it's important to have other methods for using relatively low-depth quantum circuits, the kind we can really execute with NISQ devices, to address quantum problems.
Now, one thing that's often done in classical computational physics is one simulates the evolution of a system in imaginary time. And if we evolve forward in time for a large imaginary time, an initial state will converge onto a good approximation to the ground state. And it had been thought that this was not a very smart thing to do with a quantum computer, which is more naturally inclined to do unitary rather than contractive imaginary time evolution.
One could do imaginary time evolution, but it seemed to be very expensive, because one had to post-select and add additional ancilla qubits. But in a recent paper by Garnet Chan and others, they described a more efficient way of simulating imaginary time evolution by observing that we don't really need to simulate the operator e to the minus beta H.
It's enough to simulate the result of applying that operator to some specified input state. And that task can be performed by some unitary, which you can implement on a quantum computer, times some renormalization of the state. And the trick is to find the right unitary. But they argued that that is a relatively easy linear problem to solve.
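To see why imaginary time evolution homes in on the ground state at all, here is a purely classical toy calculation (my own illustration, not the quantum algorithm described above): repeatedly apply the operator e^(-tau*H) and renormalize, and the excited-state components are suppressed exponentially.

```python
import numpy as np
from scipy.linalg import expm

# Toy two-level Hamiltonian (an arbitrary example of mine).
H = np.array([[1.0, 0.4],
              [0.4, -0.5]])
psi = np.array([1.0, 1.0]) / np.sqrt(2)    # arbitrary initial state

tau = 0.5
for _ in range(40):
    psi = expm(-tau * H) @ psi             # contractive imaginary-time step
    psi = psi / np.linalg.norm(psi)        # renormalize (the non-unitary part)

print("imaginary-time energy:", psi @ H @ psi)
print("exact ground energy:  ", np.linalg.eigvalsh(H).min())
```

The quantum trick described in the paper is to reproduce the effect of each small imaginary-time step on the actual state with a unitary, which a quantum computer can apply natively.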
Now, this isn't a panacea. And this method doesn't always succeed in finding ground states, in particular when they have long correlation length. But I think it's interesting and suggestive of how we still have a long way to go in thinking about how to use relatively low-depth quantum circuits to address problems of interest in many-body quantum mechanics.
Well, as I have emphasized several times, in these NISQ era devices, we won't be able to do quantum error correction. Eventually, we will use quantum error correction to overcome the difficulty of the noise in imperfect quantum gates. The trouble is that quantum error correction has a very high overhead cost in the number of qubits and the number of gates.
Now, how high that cost is depends on how good our hardware is and on the problem we're trying to solve. But as a rough estimate, if I wanted to solve an interesting problem in quantum chemistry surpassing what we can do classically, we'd probably need hundreds of protected qubits and altogether millions of physical qubits to provide sufficient redundancy to protect against noise.
So to get to a computation of that scale, we have to cross a rather daunting chasm from where the technology is likely to be in the next couple of years, with devices of order 100 qubits, to millions of physical qubits. And that's probably going to take a while. But it's important to continue the advances in the reliability of the quantum gates-- the systems engineering, the design of algorithms, the error correction protocols, and so on-- so we can more quickly get to that tempting era of truly scalable, error corrected, fault-tolerant quantum computing.
Now, I'd like to talk about a few other topics in the time that remains. Let's talk about how we're going to protect our privacy in the post-quantum era. One of the first quantum algorithms to attract substantial attention was Peter Shor's algorithm for factoring large numbers, which can be used to break the widely used public key protocols that we use to protect our privacy when we communicate over the internet.
So someday we won't be able to use those protocols anymore, because they can be attacked by quantum computers. But that's probably not going to happen for a while. So maybe we just don't have to worry about it.
Well, I say that's not so clear. Because there are several timescales we have to keep in mind. One is, if we were going to replace the current public key schemes by new ones, it takes quite a while to make that transition to new standards.
And also, if we use a key for private communication today, we would like those keys to remain secure for some time in the future, so our messages can't be decrypted before we're willing to make them public. So if you think you want to keep your keys private for 20 years after use, and you think it might take 15 years to put a new scheme into place replacing the public key schemes we use now, then you've got a problem if there's going to be widespread-- or if any adversary is going to have a quantum computer within 35 years. And nobody can promise you that's not the case.
So what are we going to do? There are two paths to follow. One is called post-quantum cryptography. That's something that we can run on the same hardware that we're using today, but where, on the basis of plausible assumptions about computational complexity, we think the new protocols are just too hard for a quantum computer to break.
And the other approach we call quantum cryptography. And that means replacing the infrastructure that we have now with a new quantum internet that can transmit quantum signals, but where the security does not rely on unproven computational assumptions. So chances are both of these options are going to be part of privacy in the post-quantum world. Either A or B might be the choice based on the needs of the user.
Now, further research on quantum resistance, that is strengthening the case that certain protocols are too hard to break with quantum computers, that will boost the prospects for option A. And option B will seem more realistic if we develop quantum repeaters or other means of extending the range of quantum communication beyond what's possible now. But either way, cryptographers should be quantum savvy now if they want to think about how to protect our privacy decades into the future.
A quantum network would have four elements. There would be end nodes where quantum signals are produced and detected, quantum channels to connect the nodes, quantum repeaters which extend the range of communication, and classical channels which would be needed to complete the execution of quantum protocols. The quantum channel is likely to be photonic, with quantum information carried by traveling photons, either through free space or optical fiber. But either way, loss of photons is a serious limitation.
With the optical fiber that we use now for telecom communication, the loss in the fiber is about 17 decibels per 100 kilometers. So that means if you send 50 photons through 100 kilometers of fiber, about one of the photons will get through without being absorbed. But if you try to send photons through 1,000 kilometers of fiber, only 1 in 10 to the 17 will get through.
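A quick check of those loss figures (my own arithmetic applied to the numbers just quoted):

```python
# Transmission through optical fiber with ~0.17 dB/km loss (17 dB per 100 km).
def transmission(length_km, loss_db_per_km=0.17):
    return 10 ** (-loss_db_per_km * length_km / 10)

print("100 km :", transmission(100))     # ~0.02, i.e. roughly 1 photon in 50 survives
print("1000 km:", transmission(1000))    # ~1e-17, i.e. roughly 1 in 10^17
```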
So we want to extend the range to a global scale. That could be done either with satellites flying overhead, which distribute entanglement between distantly separated points on the ground, or by means of quantum repeaters which can refresh the quantum signal. So it can be sent a longer distance. But this is a very different technology than the repeaters we use for classical communication, because you can't just measure a quantum signal and resend it, which would destroy its coherence.
A quantum repeater would have to have a quantum memory in which we could deposit a quantum signal. And then we'd also have to do some processing to clean up that signal a little bit and extend entanglement over longer distances. Now, the processing that we need is not so sophisticated as we need for large scale quantum computing, but still that quantum repeater technology has not yet been developed.
A nice thing about quantum cryptography is that we don't actually have to trust the end nodes. It has the property we call device independence. That means it's possible to test that the equipment is operating correctly even if you purchase the equipment from your adversary. That's a feature that classical cryptography cannot match.
In a quantum network, we might need to have transducers, so we can take signals carried by optical photons and convert them to microwave frequencies, the frequencies at which typical quantum processors operate. That's another technology we need to develop.
And also, we would like to find other applications of the ability to distribute entanglement around the world. Aside from quantum key distribution, among the suggestions are using a distributed sensor network for very sensitive earth measurement or for synchronization of a global clock system. Another application area is quantum sensing.
Now, sensing with high precision with quantum devices is mostly done with single quantum systems, like a single electron spin for probing, at high spatial resolution, very weak signals like magnetic fields, which can be used for non-invasive sensing of living matter and advanced materials. Or, for example, single atoms used in atom interferometers for applications to positioning, navigation, and timing, measuring acceleration, rotation, gravitational fields, and gravity gradients, which has applications to figuring out where you are when you don't have access to GPS or to looking for things underground.
Now, the big challenge is to achieve further advancements in the sensitivity of these quantum devices through clever quantum strategies using entanglement or squeezing or error correction. In particular, we might make a more powerful sensor by putting together many quantum systems in highly entangled states. And we would want to engineer those states so they'd have high sensitivity to the signal we're interested in, but good resistance against noise. And finding optimal states for that purpose might actually be a promising task for a quantum machine learning algorithm.
To advance quantum sensing, we need many of the technological advances that would improve the prospects of quantum computing, including better materials, more precise coherent control over quantum devices, longer coherence times which can improve sensitivity, more efficient read out, compact devices suited for applications, but most of all new ideas about how to use quantum technological capabilities to do new kinds of measurements. Aside from applications to medicine and to navigation, quantum sensors can also allow us to make fundamental physics measurements in new ways. For example, with qubits, we can detect microwave photons with higher sensitivity, which could be used in searching for dark matter candidates like axions.
And it's also possible to up-convert radio frequency signals to microwave signals, which would allow those searches for new types of dark matter particles to go to lower mass than is possible with current techniques. And with quantum sensors inside a crystal, we might be able to distinguish between different types of damage in a crystal caused by recoil induced by a dark matter particle, to better distinguish the signal of the energy deposited by that particle from background signals, like neutrons or neutrinos. And improved quantum sensors have other applications to fundamental physics, including searching for time dependence of fundamental constants more sensitively, or improving the limits on electric dipole moments and magnetic quadrupole moments in searches for breaking of fundamental symmetries.
Perhaps the coolest example of quantum sensing techniques advancing physics is the search for gravitational waves by LIGO, which, in its new data run which just began last week, will for the first time be injecting squeezed light into the interferometer, which improves the signal in a regime where it had been dominated by quantum noise at high frequencies, frequencies around a kilohertz. Having higher sensitivity in that part of the LIGO detection band could be very useful for detecting the sloshing around of nuclear fluid when two neutron stars merge. And that's something that, up to now, LIGO didn't have the capability to detect. And further improvement in sensitivity will mean even more interesting physics can be extracted from those detections.
I expect that someday we'll use quantum computers to study elementary particle physics in other ways, for example, by simulating quantum field theories. It has been an ongoing program for some decades to study the properties of quantum chromodynamics using Euclidean Monte Carlo computation, something that Ken Wilson pioneered here at Cornell. And a lot of interesting physics has come out of those simulations.
But there are some things that we don't expect we'd ever be able to probe using those Euclidean methods that can run on classical computers, including the real time simulation of particles that scatter. If I wanted to simulate a high energy collision between two hadrons really from first principles without phenomenological assumptions, we could do that with a quantum computer. And we don't know any way of doing it efficiently with a classical algorithm. And we'd also be able to study the behavior of nuclear matter in a regime which seems to be inaccessible with classical methods.
I'll talk a little bit more about this in my lecture on Friday. So far, we've done some sort of preliminary estimates of what the resources are that you would need to simulate a high energy collision described by a strongly interacting quantum field theory. And in addition, there have been some pioneering efforts to do experiments to study one-dimensional quantum electrodynamics. In particular, I mentioned one of those experiments earlier, the 20-site version of QED that was studied by the Innsbruck Group.
And there are also some studies of dynamics that have been done so far with few-body systems. If we want to have some quantum advantage that can be realized in the relatively near term, probably we should start by studying one-dimensional field theories. In one dimension, we have pretty good classical methods for doing the simulations. But the classical methods can't keep track of a state when it becomes highly entangled.
So for example, if you scatter two particles at very high energy and produce many particles in the final state, there's so much entanglement in that state, and the classical cost of keeping track of it grows so quickly with the number of particles, that classical simulations would be hopeless. But a quantum computer could do accurate simulations.
I expect that someday we'll use quantum computers to teach us about quantum gravity. You'll often hear it said that part of the reason quantum gravity is so difficult is that we can't do experiments to explore it. And I guess that's fair, but I think we can get insights into quantum gravity in the not too distant future using quantum platforms to do simulations.
Why do we want to study quantum gravity? Well, of course, we'd like to have a deeper understanding of all the fundamental interactions in nature. And we would like to be able to resolve deep puzzles about the quantum behavior of black holes, which have had us stymied for decades.
We'd like to understand the very early history of the universe. Now, we happen to live in what we call de Sitter space, which has positive curvature and no boundary. And we don't understand very well how to think about quantum mechanics in de Sitter space.
An easier case is what we call anti-de Sitter space where the curvature is negative and the spacetime has a boundary. And we can think about observables that are tied to the boundary in trying to understand the quantum mechanics in spacetime in that case. And there's been amazing progress in understanding quantum gravity in that negatively curved anti-de Sitter space through what we call holographic duality.
Amazingly, quantum gravity in the bulk anti-de Sitter space is equivalent, through a complicated mapping, to a field theory without gravity which resides on the boundary of the spacetime, a theory with one fewer dimension. And quite remarkably, in recent years we've appreciated that the properties of the bulk spacetime are encoded in the boundary theory in the form of quantum entanglement. And what's been quite delightful for those of us in quantum information is that we've understood that the mapping from the bulk spacetime to the boundary can be viewed as the encoding map of a quantum error correcting code.
So we can leverage some of the things that we've learned about quantum error correction to try to understand quantum gravity better. We still have a lot more to do. We still don't really understand how to think about what's inside a black hole. We would like to understand better the properties of this quantum code that relates the bulk and the boundary in holographic duality. And eventually, we're going to have to meet the challenge of understanding quantum mechanics in de Sitter space.
What are the kinds of things we might do in experiments that could advance our understanding? Well, as I've said, the bulk geometry in this holographic correspondence is encoded in the form of quantum entanglement. So using highly entangled quantum platforms, we can try to get insights into the properties of that bulk geometry by studying the entanglement of a highly entangled quantum system.
One of the mysteries about the bulk is why locality holds so well. That is, operators which are spacelike separated from one another in the bulk nearly commute. And from the point of view of the boundary physics, that seems especially mysterious, because these bulk operators get mapped to very non-local boundary operators that overlap with one another.
But the commutativity properties of those boundary operators could be studied, for example, by doing linear response experiments in a highly entangled system. We'd like to be able to understand what happens when a black hole forms and evaporates. In the boundary language, that corresponds to some highly excited state settling down to thermal equilibrium, which in principle we can study experimentally. Black holes insist on being the very best at everything.
And we think, in particular, they are the fastest scramblers of quantum information, the most chaotic quantum systems that the laws of nature will allow. And there have already been experiments which have given us some preliminary information about how information gets scrambled in highly entangled quantum systems. And further experiments in that direction might give us insight into quantum information processing by black holes. And even exotic processes in the bulk, like traveling through a traversable wormhole, can be translated into the boundary language as something that could be experimentally accessible, a coherent form of quantum teleportation which could be studied.
So let me sum up the main points I've tried to make. We're entering the NISQ era, the era of Noisy Intermediate Scale Quantum computing. Will we be able, in this era, to outperform our most powerful classical computers for some tasks of interest?
We don't know. We're going to try it and see how well it works. In particular, we'll be able to test these hybrid quantum classical methods for solving optimization problems and, as we experiment with them, try to improve them.
Quantum dynamics of highly entangled systems is particularly hard to simulate, and therefore a promising arena for quantum advantage and simulation. The truly transformative quantum computing technologies will probably have to be error corrected, will have to be fault tolerant. And that may still be a ways off, maybe decades away.
We don't really know for sure how long that's going to take. Progress toward achieving error corrected fault tolerant quantum computing has to be a high priority for advancing the field. Quantum sensing, networking, and computing are all going to advance together, since they are based on closely related technologies.
And next generation quantum sensors can provide unprecedented capabilities of potential commercial interests, but also will provide potentially new ways of exploring fundamental physics. And quantum simulators can someday, we hope, probe aspects of quantum field theory and quantum gravity, which are beyond the reach of classical simulators, perhaps illuminating the nature of quantum spacetime. These are ambitious goals and probably will not be realized in the very near term. But the research we do today can help to bring us closer to the era where quantum simulation teaches us valuable lessons.
So it's an exciting time for quantum information processing. I tried to inject kind of a cautionary tone into the talk. I think we should go into this NISQ era with well-grounded expectations.
But I am optimistic that remarkable discoveries will be made with near-term quantum platforms. The real payoff of quantum technology will probably require decades of sustained effort. But if we put that effort in, I'm sure the results will vindicate that program. So thanks a lot for listening.
[APPLAUSE]
PAUL GINSPARG: OK. So as host, I was carefully watching the time and made an executive decision to just let it keep going. I would like to open up for a few questions, because that's always fun, not too many. And just as a reminder, John will be meeting with students afterwards. Somebody will be here to pick him up. So are there any urgent and compelling questions? Yeah.
AUDIENCE: Can you expand a little on your thoughts on the quantum annealing computations that are being developed at D-Wave and this non-stoquastic next generation? Is there a promise there compared to the more general quantum computers that are being worked on?
JOHN PRESKILL: So the question was about quantum annealers, and could I say more about what we might learn by using quantum annealers and, in particular, how that connects with the longer-term prospects for quantum information processing. Well, you asked particularly about non-stoquastic devices. Part of the reason that theorists had difficulty making progress-- well, that's not the only reason-- in showing that quantum annealers had the potential to do interesting things is that, until very recently, they had the property of being stoquastic systems, which means, in the language of computational physics, they don't have a sign problem.
And that means that they can be simulated using classical methods, which are called quantum Monte Carlo methods, which would run on a digital computer. And what made the simulations hard was the fact that the systems are very noisy. And it's hard to simulate noisy systems. But that noise is not likely to improve their computational power.
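For concreteness, a Hamiltonian is called stoquastic in a given basis when all of its off-diagonal matrix elements are real and non-positive, which is what lets quantum Monte Carlo work with non-negative path weights. The toy check below, not from the talk, contrasts a two-spin transverse-field Ising model (stoquastic) with the same model plus an extra XX coupling whose sign spoils the property; the models and coefficients are illustrative.

```python
# A small numpy illustration (not from the talk) of the stoquastic condition:
# a Hamiltonian is stoquastic in a basis if all off-diagonal matrix elements
# are real and non-positive, so quantum Monte Carlo avoids the sign problem.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def two_site(a, b):
    return np.kron(a, b)

def is_stoquastic(H, tol=1e-12):
    """Check that the off-diagonal elements are (numerically) real and non-positive."""
    off = H - np.diag(np.diag(H))
    return bool(np.all(np.abs(off.imag) < tol) and np.all(off.real < tol))

# Transverse-field Ising model on two sites: H = -J ZZ - g (XI + IX)
J, g = 1.0, 0.7
H_tfim = -J * two_site(Z, Z) - g * (two_site(X, I2) + two_site(I2, X))

# Add an XX coupling with positive coefficient, the kind of extra driver
# term that can make an annealer Hamiltonian non-stoquastic
H_nonstoq = H_tfim + 0.3 * two_site(X, X)

print("transverse-field Ising stoquastic?", is_stoquastic(H_tfim))     # True
print("with +XX coupling stoquastic?     ", is_stoquastic(H_nonstoq))  # False
```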
Now that the devices are becoming non-stoquastic, the simulations are even more difficult. So if you're an optimist, you might say that the chances that these devices can do something that surpasses what we can do classically will improve as a result. Now, part of the reason that one has a different attitude about quantum annealing and circuit-based quantum computing is that we don't have a theoretical argument that quantum annealers are scalable.
In the case of circuit-based quantum computing using quantum error correction, we can show that, if the error rate per gate is below a certain limit, then, with an overhead that scales polylogarithmically in the size of the circuit we're trying to simulate, we can push the logical error rate down as much as we need to. There's no such argument for quantum annealers. They are quite noisy.
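To give a rough feel for what that polylogarithmic overhead means in practice, here is a back-of-the-envelope sketch assuming the commonly quoted surface-code heuristic p_L ≈ A (p/p_th)^((d+1)/2); the prefactor, threshold, and physical error rate used below are illustrative assumptions, not numbers from the talk.

```python
# A rough back-of-the-envelope sketch (not from the talk) of how the code
# distance, and hence the qubit overhead, grows only logarithmically as we
# demand smaller logical error rates.  Assumes the common surface-code
# heuristic  p_L ~ A * (p/p_th)**((d+1)/2)  with illustrative constants.
import math

A = 0.1        # assumed prefactor
p_th = 1e-2    # assumed threshold error rate (~1%)
p = 1e-3       # assumed physical error rate per gate

def distance_needed(target_pL):
    """An odd code distance d large enough that A*(p/p_th)**((d+1)/2) <= target_pL."""
    rounds = math.ceil(math.log(target_pL / A) / math.log(p / p_th))
    return max(2 * rounds - 1, 3)

for target in [1e-6, 1e-9, 1e-12, 1e-15]:
    d = distance_needed(target)
    physical_per_logical = 2 * d * d   # ~d^2 data qubits plus ~d^2 measurement qubits
    print(f"target logical error {target:.0e}:  distance {d:2d},  "
          f"~{physical_per_logical} physical qubits per logical qubit")
```

The point of the sketch is just the scaling: each additional factor of a thousand in the target logical error rate costs only a modest additive increase in the code distance, so the physical-qubit overhead grows polylogarithmically in the size of the computation being protected.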
And so, you know, there are things you can do to try to error correct them, but no real argument that they're scalable. But you know, when D-Wave had a 128-qubit device, I said, well, they won't be able to go to 512 and still solve problems. And then I thought, well, they won't be able to go to 1,000 qubits, 2,000 qubits.
But they can still solve optimization problems, though not with an improvement in scaling over what can be done classically. But the fact that they work at all, I find kind of amazing. So I think they're very interesting devices. We still don't really understand what's going on with them very well. So they're interesting to experiment with.
PAUL GINSPARG: In addition to not knowing about the scalability, there was the point you made during the talk, which is that we don't know that theoretically they're in a different complexity class from classical simulated annealing. So that's the other showstopper. So that was a little longer. I guess one last question.
AUDIENCE: OK. The question is if the actual spin could be frozen by using [INAUDIBLE] magnetic field, [INAUDIBLE]?
JOHN PRESKILL: Say it again. If an [INAUDIBLE]?
AUDIENCE: If a [INAUDIBLE] spin could be frozen by using [INAUDIBLE] electromagnetic field [INAUDIBLE] spin up or spin down [INAUDIBLE]?
JOHN PRESKILL: Ah, well, so the question was if we can freeze an electron beam so that the spins are all locked together, either they're all up or all down, would that have applications to quantum computing? I'm not sure. That would be a way of making a kind of robust classical memory, which is protected against bit flips.
But as a memory, it wouldn't have any protection against phase errors between the all spin up and the all spin down state. So the alignment of the spins would not suffice for having good quantum error correction properties. If you wanted to think of it as a quantum information processing platform, then, of course, we don't want the spins to all be frozen.
We want to be able to have coherent superpositions of each of the spins and to be able to do 2-qubit gates by manipulating their spin-spin interactions. The platform in which something like that is done is electron spins in semiconductor quantum dots, where the quantum information is really carried by the electron spin. And 2-qubit quantum gates can be performed using the interactions between the spins.
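The textbook mechanism behind that last remark is that the Heisenberg exchange coupling between two spins generates the SWAP family of gates; turned on for half the SWAP time it gives the entangling √SWAP gate. The small numerical check below is a sketch of that fact, not anything from the talk, with an illustrative coupling strength.

```python
# A minimal numpy check (not from the talk) of the textbook mechanism behind
# spin-qubit gates: evolving two spins under the Heisenberg exchange coupling
#   H = (J/4) (XX + YY + ZZ)
# for a time t = pi/(2J) gives sqrt(SWAP), an entangling two-qubit gate,
# and t = pi/J gives SWAP (both up to a global phase).
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

J = 1.0  # illustrative exchange strength
H = (J / 4) * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))

def equal_up_to_phase(U, V, tol=1e-9):
    """True if U = e^{i phi} V for some global phase phi."""
    idx = np.unravel_index(np.argmax(np.abs(V)), V.shape)
    phase = U[idx] / V[idx]
    return bool(np.allclose(U, phase * V, atol=tol))

U_swap = expm(-1j * H * (np.pi / J))
U_root = expm(-1j * H * (np.pi / (2 * J)))

print("exchange for t = pi/J gives SWAP (up to phase):  ",
      equal_up_to_phase(U_swap, SWAP))
print("sqrt(SWAP) squared gives SWAP (up to phase):     ",
      equal_up_to_phase(U_root @ U_root, SWAP))
```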
Right now, that technology is behind the two leaders, superconducting circuits and ion traps. The gate fidelities just aren't nearly as good. But they're improving. And you can tell a story about how that platform could potentially leapfrog over the leading technologies and have better prospects for long-term scalability.
PAUL GINSPARG: I'm going to reluctantly call it there. The question I had for John, which I'll save for two nights from now, so he has fair warning, was, you know, this was very cautionary, as he said. But my question will be, what if he throws all caution to the wind?
Where will he project we'll be 10 years or 20 years from now? So come on Wednesday night, you'll hear the answer to that. And let's thank John again.
[APPLAUSE]
Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away --- we should regard it as a significant step toward the more powerful quantum technologies of the future.
As part of the Spring 2019 Hans Bethe Lecture Series at Cornell, physicist John Preskill gave the Physics Colloquium, "Quantum Computing in the NISQ Era and Beyond," April 8 in Schwartz Auditorium, Rockefeller Hall, suggesting that quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing.
Preskill is the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology, and director of the Institute for Quantum Information and Matter at Caltech. He received his Ph.D. in physics in 1980 from Harvard, and joined the Caltech faculty in 1983.
The Hans Bethe Lectures, established by the Department of Physics and the College of Arts and Sciences, honor Bethe, Cornell professor of physics from 1936 until his death in 2005. Bethe won the Nobel Prize in physics in 1967 for his description of the nuclear processes that power the sun.