[MUSIC PLAYING] PAUL GINSPARG: This is the third of John Preskill's Bethe Lectures. I know all of you are suffering from Bethe fatigue at this point, but I'm going to continue the ritual with the third of a few anecdotes. I just wanted to point out something that I said briefly on Wednesday night. I wasn't able to go into great detail.
But it was a funny coincidence that my son was accepted in a music program this summer on Shelter Island, and I was sitting there trying to think, why is Shelter Island familiar to me? And then realized, of course, Shelter Island Conference, 1947, which is actually in my memory because look, for all of your graduate students and postdocs, I have to say, this is what we all dream about.
You go to a conference, you hear this weird result. And on the train back from the conference, you do the definitive calculation, which turns out to be right. In this case, what was he doing? The question was, they had measured-- you can drive the hydrogen atom between its first two excited states, a transition you can excite with gigahertz microwaves. In the Dirac theory, that splitting should be 0. If you try to calculate it with the [INAUDIBLE] photon effects, it comes out to be infinite. And so the real answer has to be somewhere in between.
And what he did was, on the train, he calculated the energy of the free electron and the energy of the bound electron. They were both infinite. He subtracted one from the other and got the right number, but he wasn't sure. There's a wonderful video on Web of Stories where he describes it. He wasn't sure if he got it right or not. He might have been off by a factor of 2 because he had to work with the fundamental constants from memory. As soon as he got off the train, the first thing he did was go to a library and look it up, and he found out that it came out correctly, and QED.
[LAUGHTER]
I did want to say that that's the serious side, but he also had a spirit of fun about science. And these are the last two anecdotes I was going to give, for completeness. There's a well-known story-- but well-known might not apply to everybody here, so you can all be exposed to it. On April 1, 1948, there was a paper entitled "The Origin of Chemical Elements," which was a significant precursor to the Big Bang theory. And it was worked on by George Gamow and his student Ralph Alpher.
And Gamow couldn't resist the notion of a work about cosmology being authored by the first three letters of the Greek alphabet. And so, although Bethe was not involved in the work at all, the paper that appeared was authored by Alpher, Bethe, and Gamow. And Bethe was apparently fine with this. The graduate student involved, Alpher, may not have been quite so happy with the joke.
JOHN PRESKILL: He was not happy.
PAUL GINSPARG: He was not happy? He told you.
[LAUGHTER]
But that wasn't the first one. There was a paper that appeared in Naturwissenschaften-- and I don't have quite enough room here-- where Bethe, at the age of 25, with two collaborators, derived this wonderful equation. Let's see if I can get it right: T_0 = -(2/alpha - 1) degrees. And when he substituted in the value T_0 = -273 degrees-- that's absolute zero-- you see that 273 plus 1 is 274. And so you derive that alpha is equal to 1/137, which should strike you as an odd formula to calculate: a dimensionless number in terms of something with dimensions.
And--
[LAUGHTER]
--it caused some consternation. I read about this in his memoirs. And they were actually forced to write, three months later, a correction. And this I put down so I could get it. Oh, the paper was entitled "On the Quantum Theory of the Temperature of Absolute Zero." And the correction said they had to explain it was intended as a parody of certain other numerological articles, and that they, quote, "regretted that the formulation they gave to the idea was suited to produce misunderstanding."
[LAUGHTER]
He explained in his memoirs that his actual target in this was Eddington, who was apparently keen on this sort of numerology. Now, today's speaker, Preskill, is far too serious for any of that sort of frivolity. And so-- [LAUGHS] I was unable to find any comparable examples.
[LAUGHTER]
So I'll just repeat what I've been saying. He is a professor at Caltech. He's been there since 1983. He's been leading work on the theoretical aspects and practical applications of quantum computing in his current role as Director of the Institute for Quantum Information and Matter. And he's going to tell us about simulating quantum field theory with a quantum computer.
[APPLAUSE]
JOHN PRESKILL: Well, thank you, Paul. I've had a lot of fun at Cornell this week. You've all been great hosts. I'm especially grateful to Paul for limiting himself to just three Bethe anecdotes in the introduction--
[LAUGHTER]
--which I'm sure must have been hard. I've expressed earlier in my talks my sincere admiration for Hans Bethe. Yeah. Of course, everyone admired Hans. But of Cornell theoretical physicists, my true personal hero was Ken Wilson. And among other things in his legacy, he taught us that we can learn a lot about quantum field theory by trying to simulate it on a computer, or actually doing so. So I wanted to give this talk at Cornell because I consider the topic to be an extension of Wilson's great legacy.
Now, I spoke in my earlier talks about the current status of quantum information science. And in case you weren't there on Monday or you've forgotten, I'll just briefly remind you of some of the things that I said. I view quantum information science as a great opportunity not only to build a technology which could have a transformative effect on the world, but to explore physics in new ways. By building devices that can simulate and study very highly entangled quantum matter, we have opportunities for new discoveries and for understanding things about nature that would otherwise be inaccessible.
And in particular, with a quantum computer, we expect that we'd be able to efficiently simulate any process that occurs in nature, which will have implications through the impact on studying and designing new molecules and materials, but will also allow us to probe, in an unprecedented way, properties of strongly coupled quantum field theory and other topics of fundamental interest.
So the short summary of what I want to say is that we've already learned a lot from Lattice QCD. And we will learn more with exascale devices that we expect to have available in a few years, but there will be some challenges that remain. We don't expect, even with far more advanced digital computers, to be able to simulate the real-time behavior of particles colliding at high energy, for example, at the Large Hadron Collider.
And we don't have the tools with Lattice QCD to study the behavior of nuclear matter at finite chemical potential, which would be relevant for astrophysics or get access to all the properties we'd like to know about of the quark-gluon plasma and, in general hadronic-- [AUDIO OUT] --matter far from equilibrium.
With quantum computers, we'll be able to address these challenges. Now, I'm not sure when. That physics impact might still be a ways off, but I think we can learn a lot by thinking now about how to use quantum computers to study these problems. I've written a series of papers on the topic of simulating quantum field theory. Here are some of the references. The most recent is the talk I gave last summer at Lattice 2018, which gives kind of a high level overview of the topic.
So we are now on the verge of what we call quantum supremacy, the use of quantum devices to perform tasks that surpass what we can do with the most powerful existing digital computers. And it's useful to have a word for the era of quantum information processing, which is now underway. So it's helpful to use the word NISQ-- Noisy Intermediate-Scale Quantum.
Intermediate scale means we're talking about a number of qubits, which is too large to simulate by brute force with existing digital computers. Noisy reminds us that the gates in these devices are imperfect. That will limit the depth of the quantum circuits we can execute and still read out a result with reasonable signal to noise. But still, this is exciting. It's a new tool for exploring the properties of highly entangled matter in a regime which has never been experimentally accessible before.
I believe that quantum technology is going to have a big impact on human society eventually, but that might still be a ways off. Still, as a scientific opportunity, I think it's quite exciting now. So where is the hardware now? There are devices which have been built and are in the process of being calibrated and getting ready for experiments by several industry groups.
Google has announced that they have a 72 qubit quantum computer, though we haven't seen data from that device yet. IBM has announced that they have a 50 qubit device, but, again, no experimental papers yet. Now, Rigetti says they are going to build a 128 qubit quantum computer, but that platform is still in the planning stages. There have also been some recent experiments with analog quantum simulators, which are becoming increasingly sophisticated.
We've seen recently from the Harvard group a simulation of an Ising-like spin system using a 51 qubit device, which was used to explore dynamical phase transitions in a regime which would not be easy to simulate classically. And similar experiments were done with trapped ions by the Maryland group. And there are a lot of other interesting platforms that are being developed. And in the long run, that could be very important, since we don't know which technology has the best prospects for scalability to large devices at this stage.
Although, on the slide, I'm emphasizing the number of qubits, that's not the only thing we care about. We care a lot about the quality of the qubits and, in particular, about the accuracy of the two-qubit gates in the device that process the information, which now, under the best conditions, have an error rate of about one in 1,000. Eventually, we will overcome those limitations imposed by the noise by using the principles of quantum error correction, redundantly encoding quantum information to protect it from damage.
But quantum error correction has a very substantial overhead cost. How high that cost is depends on the quality of the qubits, the gate error rates, and also on the problem we're trying to solve-- the algorithm we want to run. But if we wanted to simulate quantum field theory-- let's say QCD-- we might need millions of protected qubits. And with the devices that we have now, that would probably require billions of physical qubits. So that may still be a ways off, but it's important to continue to advance the technology. And in particular, big improvements in gate error rates might make those overhead costs much more reasonable.
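To make the arithmetic behind those orders of magnitude explicit (the overhead factor of roughly a thousand physical qubits per logical qubit is an illustrative round number consistent with the figures just quoted, not a number stated in the talk):

```latex
N_{\text{physical}} \;\approx\; N_{\text{logical}} \times \text{(overhead per logical qubit)}
\;\sim\; 10^{6} \times 10^{3} \;=\; 10^{9}.
```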
It's useful to keep in mind the distinction between digital and analog simulators. By an analog simulator, we mean a many-qubit device with its own native, but controllable Hamiltonian that resembles the Hamiltonian of some model system that we wish to study. And by a digital simulator, I mean a general purpose quantum computer which can simulate any physical system of interest when suitably programmed. Similar experimental platforms can be used for both purposes.
And there are some very ambitious proposals about ways in which the analog systems-- which have been under development for some 20 years by now-- might be used to simulate gauge theories in the relatively near term. But so far, that hasn't been achieved except in one-dimensional systems, which I'll talk about a little bit later.
Analog systems are limited because of the imperfect control of the Hamiltonian. Eventually, we expect they'll be surpassed by digital simulators, which can be error corrected and, therefore, more precisely controlled. That might not happen for a long time though, so we can learn a lot from analog simulators in the meantime. But mostly, in this talk, I will have in mind an error-corrected digital quantum computer for the task of simulating quantum field theory.
So why do we want to simulate quantum field theory? Well, of course, we'd like to understand more deeply the properties of the fundamental interactions and-- [AUDIO OUT] --particles. And that means a deeper understanding of quantum field theory. From a computer science perspective, it's actually a very compelling question. Can a quantum computer simulate quantum field theory, or, the broader question, would a quantum computer be able to simulate any process that occurs in nature?
The computer scientists speak of the extended Church-Turing thesis, meaning the idea that a Turing machine can efficiently simulate any other type of computing device. Well, that was upended by the rise of the quantum computer. We don't think a classical Turing machine can efficiently simulate a highly entangled quantum system. And so it's been supplanted by the quantum version of the Church-Turing thesis, that a general purpose quantum computer can efficiently simulate any other device or any other natural process.
And we don't know for sure whether that's the case or not, but either a yes or no answer is quite exciting. If the answer is yes-- that a quantum computer can efficiently simulate anything-- then quantum computers will have applications to addressing questions about nature, including questions about quantum gravity. If the answer is no, that's even more exciting. It means our current notion of a quantum computer hasn't properly captured the computational power in the laws of nature, [AUDIO OUT] an even more powerful device is allowed by the laws of physics.
Now, with simulations of QCD, we would, for example, be able to simulate high energy collisions between protons like those occurring in the Large Hadron Collider. Nowadays, we can't do that ab initio from QCD, but do it with phenomenological models instead. And the reliability of those models is open to question, especially when we extrapolate them to higher energies. And we would also, with quantum computers, be able to simulate nuclear matter, which we think is a hard problem with digital computers and can't be done with the Euclidean Monte Carlo simulations that are in use now.
And simulating quantum field theory can also be used or viewed as a step towards simulating quantum gravity, particularly in the case of quantum gravity in anti-de Sitter space, which has a dual description in terms of a field theory living at the boundary of spacetime.
So what kinds of problems do we expect to solve with a quantum algorithm? Well, we should be able to simulate any kind of scattering process, which can be set up as a problem in which the input to the problem is some incoming state, and then the goal is to sample accurately from the final states that are produced.
We can also simulate general vacuum-to-vacuum probabilities in the presence of sources. Using a quantum computer, we should be able to compute general S matrix elements and real-time correlation functions, all of which are beyond the reach of what we know how to do with digital simulations of quantum field theory today.
So if you were interested in out-of-equilibrium behavior, including transport properties, the way to think about it, typically, is we can imagine doing an experiment which would probe these properties, and then, with a quantum computer, set up a simulation of such an experiment. That simulation might not be possible with a classical computer, but with a quantum computer, we expect that it should be possible.
So why is Ken Wilson my hero?
AUDIENCE: Sir, before you go on, may I ask a question?
JOHN PRESKILL: Yeah.
AUDIENCE: So we talk about qubits, which are analogous or basically analogous to quantum spin-1/2 objects. The sign problem in many cases arises from fermions at high density. Fermions have long-range entanglement by virtue of the anticommutation. How do we know that implementing the all-to-all kind of correlations necessary to reproduce the fermionic anticommutation rules with qubits can be done efficiently?
JOHN PRESKILL: Well, so the question is, how do we simulate fermions with qubits, which don't obey Fermi statistics? And so, of course, we will have to encode the fermions somehow using qubits. And that is possible in a manner which I would call efficient, meaning with a polynomial scaling in system size. The most familiar example of that is the Jordan-Wigner transformation, but there are generalizations of that that have been formulated for higher dimensions. They do slow things down, but, you know, not by a super-polynomial factor.
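As a concrete illustration of the kind of encoding being described here-- the standard one-dimensional Jordan-Wigner transformation, not the higher-dimensional generalizations mentioned-- here is a minimal numerical sketch that checks the encoded operators really do obey the fermionic anticommutation relations (the chain length and conventions are arbitrary choices for illustration):

```python
# Minimal sketch: the textbook Jordan-Wigner encoding of fermionic modes into
# qubits, checked numerically on a small chain.  Illustrative only.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(j, n):
    """Jordan-Wigner image of the fermionic annihilation operator a_j on an
    n-mode chain: a Z string on modes k < j, then (X + iY)/2 on mode j."""
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I] * (n - j - 1)
    return kron_all(ops)

n = 3
a = [annihilation(j, n) for j in range(n)]
# Verify {a_i, a_j^dag} = delta_ij and {a_i, a_j} = 0.
for i in range(n):
    for j in range(n):
        assert np.allclose(a[i] @ a[j].conj().T + a[j].conj().T @ a[i],
                           (i == j) * np.eye(2 ** n))
        assert np.allclose(a[i] @ a[j] + a[j] @ a[i], 0)
print("Jordan-Wigner operators satisfy the canonical anticommutation relations.")
```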
Yeah, Wilson taught us what quantum field theory is. He didn't give the final and complete answer to that question, but he did understand more deeply than those who preceded him what we mean by quantum field theory. And he understood in a deeper way the meaning of renormalization. It's really thanks to him that we think of quantum field theory today as the long-distance behavior of some regulated theory, like a theory which can be defined on a lattice.
And you might think that the description of the physics at long distances would be extremely complex, but thanks to the miracle of universality, which Wilson discovered and promoted, the long-distance physics can typically be described by just a small number of renormalized parameters. And that's really why we're able to do physics at all, because if we had to understand all the details of the microscopic physics, say at the Planck scale, to compute the properties of the hydrogen atom, we'd be in a lot of trouble.
Of course, it's a two-edged sword. By the same token, we can't learn a great deal about physics far in the ultraviolet, say at the Planck scale, by doing low-energy experiments. Really, Wilson had many of these insights as a result of thinking about, how would you simulate quantum field theory on a digital computer? And I feel that when we address the conceptual question of how to simulate field theory on a quantum computer, this could also lead to very helpful conceptual advances.
In quantum information science, we've become accustomed to computer scientists and physicists working together. The computer scientists tend to be sticklers for rigor, and when they analyze algorithms to make rigorous statements about the resources involved, of course, we would like to do the same. And in the case of quantum field theory, at least we have rigorous foundations. We can define what we mean by a quantum field theory in terms of axioms like the Wightman axioms.
But if I'm going to try to prove something about an algorithm that simulates QCD in the continuum to some accuracy, that's difficult to do in a fully rigorous way at this stage, because we still don't have a fully rigorous construction of QCD. In other words, we can't prove that QCD has a continuum limit that satisfies the Wightman axioms. So we have to be a little bit more modest, at least for theories like QCD that are asymptotically free in four spacetime dimensions. In lower dimensions, there have been rigorous constructions, so we're on somewhat more solid ground in the case of super-renormalizable theories, which have a milder divergence structure.
So our philosophy is to be as precise as we can in the analysis of algorithms, and non-rigorous when it's necessary. And in our case, in order to estimate the errors in our algorithms, we made some use of perturbation theory, which we can't completely rigorously justify, in particular for the purpose of finding how our errors depend on the physical value of the lattice spacing. Now, something you have to get used to is that, with quantum computers, we simulate things in real time, not in imaginary time. Although, as I mentioned in one of my earlier talks, there's been some recent progress in understanding how to do efficient simulations of evolution in imaginary time with quantum computers, but that's not what I'm talking about today.
It can be useful to simulate in imaginary time-- it provides us with an efficient means of preparing ground states and thermal states of a quantum theory. But fundamentally, being limited to real time is not really so bad, because real-time evolution is how nature works, and if our goal is to simulate nature, then simulating evolution in real time should be enough for us. And that's the problem that we think in general is a hard problem. At least, we don't know how to simulate highly entangled systems classically as they evolve according to the Schrodinger equation.
And I've already mentioned all the applications that we hope such simulations will have. We work with the Hamiltonian. So you can think of this as evolution in time in some particular frame, but if we're dealing with a Lorentz-covariant theory, we can still extract results which are frame-independent. So what's the type of task we would like our algorithm to perform? Well, it could be formulated this way. There's some initial state that we want to propose as an input to our algorithm.
So the first step is state preparation. We have to load that state into a quantum computer. And then we will let that initial state evolve forward in time for some specified amount of time. And for that, we have to simulate evolution according to the Schrodinger equation, which we would do in a kind of standard way by dividing time up into small, discrete increments-- a Trotter approximation-- and evolving forward in time in a set of discrete steps.
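Here is a toy sketch of that Trotter idea for a generic Hamiltonian split into two pieces, H = A + B; the matrices, step count, and accuracy here are arbitrary illustrations, not parameters of the field-theory algorithm:

```python
# Toy sketch of first-order Trotter evolution,
#   exp(-iHt) ~ [exp(-iA dt) exp(-iB dt)]^n   with   H = A + B.
# Illustrative only; the field-theory Hamiltonian and step sizes are not
# specified in the talk.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

d = 8
A, B = random_hermitian(d), random_hermitian(d)
H = A + B
t, n_steps = 1.0, 200
dt = t / n_steps

U_exact = expm(-1j * H * t)
step = expm(-1j * A * dt) @ expm(-1j * B * dt)
U_trotter = np.linalg.matrix_power(step, n_steps)

# For fixed t, the first-order Trotter error shrinks like O(t^2 / n_steps).
print("operator-norm error:", np.linalg.norm(U_exact - U_trotter, 2))
```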
And then at the end, we would measure something-- simulate the measurement of some observable of interest. And the goal of such an algorithm is to sample accurately from the probability distribution of outcomes of that final measurement, given the input state and the evolution time. And then if I want to learn things like scattering cross sections, I would do just what an experimentalist would do. I would sample many times and average with good statistics to determine the value of that quantity.
And what we're typically interested in, at least as theorists, is to determine as best we can how the resources needed to do that simulation will scale with various aspects of the input to the problem, like the size of the system that we're simulating, the error we're trying to achieve, the total number of particles involved, the total energy of the process, the mass gap of the theory, and so on. And what I typically mean by resources is: how many qubits are we going to need, how many gates are we going to perform in a given device, how long will it take to get the answer?
And what we hope is that we can show that the scaling of the resources with all of these parameters characterizing the input is a polynomial scaling bounded by some power of the input parameters. And if we were to do the simulation by brute force on a digital computer, it would be exponential. So that would be our quantum advantage. Of course, to get started, we need a way to efficiently prepare that initial state. And in order to simulate the quantum field theory, we will have to regularize it, for example, by putting it on the lattice. And that will introduce some error that we'll have to take into account.
Now, this problem of preparing the initial state can be a hard problem in some cases. And in fact, even finding the ground state of a classical system can be a problem which is too hard for a quantum computer to solve. It could be NP-hard. As has been known for decades, if I have a frustrated spin system, finding the lowest energy state of that system requires that I solve an NP-hard optimization problem. And that's just too hard to do.
And in the case of finding the ground state of a quantum system, we think the problem in general is even harder. It's what we call QMA-hard. NP-hard means it's as hard as any problem whose solution we can efficiently check with a classical computer. QMA-hard means as hard as any problem whose solution we can check with a quantum computer. And even for a system with geometrically local interactions in one dimension, the problem of finding the ground state can be QMA-hard in worst case instances.
But we don't have to worry about that so much if our goal is to simulate nature, because states which are NP- or QMA-hard to create are not going to arise in the world, because nature would have to solve such a hard computational problem to prepare those states. And we could make a similar remark about finite-temperature Gibbs states. In some cases, preparing a Gibbs state at sufficiently low but non-zero temperature might be a hard problem. But in those cases, we don't expect to see such systems in equilibrium.
In other words, when we look around, the state of the universe that we see is something that was prepared in early-universe cosmology, and it's not impossible, but it's a reasonable hypothesis that the early universe did not solve a computationally intractable problem to prepare that state.
So let's say we want to prepare the ground state. There are two main methods for doing so that we've studied and analyzed and are broadly applicable. One is adiabatic state preparation. Start out with some system whose ground state is relatively easy to describe and construct, prepare that state and load it into a quantum computer, and then simulate the time evolution as the Hamiltonian slowly evolves away from that initial easy Hamiltonian toward one for which it's hard to find the ground state. And we rely on the adiabatic theorem. If we do that sufficiently slowly, and the energy gap between the ground state and the first excited state does not get too small, then we can successfully do the state preparation relatively efficiently.
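A minimal sketch of adiabatic state preparation for a small spin chain, just to illustrate the ramp-and-evolve strategy; the easy and target Hamiltonians and the ramp time are arbitrary stand-ins, not the field-theory construction being described:

```python
# Sketch of adiabatic state preparation on a toy spin system (illustrative
# stand-in, not the field-theory protocol): ramp H(s) = (1-s) H_easy + s H_target
# slowly and check the overlap with the target ground state at the end.
import numpy as np
from scipy.linalg import expm, eigh

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

def op(single, site, n):
    mats = [I] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 4
H_easy = -sum(op(X, j, n) for j in range(n))              # ground state: |+...+>
H_target = (-sum(op(Z, j, n) @ op(Z, (j + 1) % n, n) for j in range(n))
            - 0.5 * sum(op(X, j, n) for j in range(n)))   # transverse-field Ising

T_total, n_steps = 50.0, 2000
dt = T_total / n_steps
psi = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)    # ground state of H_easy

for k in range(n_steps):
    s = (k + 0.5) / n_steps
    psi = expm(-1j * ((1 - s) * H_easy + s * H_target) * dt) @ psi

vals, vecs = eigh(H_target)
print("overlap with target ground state:", abs(vecs[:, 0].conj() @ psi))
```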
In some cases, particularly in one spatial dimension, we might be able to find the ground state by solving a classical problem efficiently. And in that case, once we've done so, we would then take that classical description and compile it as a quantum circuit, which we could then load into a quantum computer. So we have to decide how to regulate the theory. And there are a number of possible choices. In some ways, it might seem natural, when we're doing quantum field theory, to work in momentum space. It's by going to momentum space that we can diagonalize the free field theory and formulate perturbation theory, say, by Feynman diagrams.
And we've studied that to some degree, but it seems to be more efficient to formulate our simulation using position space instead, simply because the Hamiltonian of the system is geometrically local in space. If we put the theory on a spatial lattice, we only have interactions in the Hamiltonian between neighboring sites on the lattice, whereas in momentum space, the interactions are more complicated when we study an interacting field theory.
So we'll do the standard thing. We'll put the theory on a spatial lattice with some lattice spacing, and the physical value of that lattice spacing will be one of our sources of error. We'll define the parameters of the Hamiltonian at the scale of lattice spacing. And of course, if we can make that lattice spacing smaller in physical units, that will improve our accuracy, but it will also mean increasing the cost of the algorithm, because it will require more qubits and gates to encode the state and evolve the state.
We're going to put fields and their conjugate momenta on each one of the lattice sites. And those fields and momenta are actually unbounded operators, so we have to truncate them, somehow express them in terms of a bounded number of qubits. And how many qubits we can get away with will be determined in part by the energy of the process that we want to study. So what should we simulate? Well, we could consider, for example, a self-coupled scalar field theory in two, three, or four spacetime dimensions, that is, one, two, or three spatial dimensions. If we didn't have the phi to the fourth term, this would just be a Gaussian theory, which would be easy to solve classically. It would be a theory of non-interacting particles.
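For reference, the lattice Hamiltonian being described has the standard textbook form below (a reconstruction from the description, not a formula displayed in the talk), with lattice spacing a, d spatial dimensions, pi the momentum conjugate to phi, and bare parameters m_0 and lambda_0 defined at the lattice scale; dropping the last term leaves the free, Gaussian theory:

```latex
H \;=\; \sum_{\mathbf{x}} a^{d} \left[
  \frac{1}{2}\,\pi(\mathbf{x})^{2}
  \;+\; \frac{1}{2} \sum_{j=1}^{d} \left( \frac{\phi(\mathbf{x}+a\hat{\jmath}) - \phi(\mathbf{x})}{a} \right)^{2}
  \;+\; \frac{1}{2}\, m_{0}^{2}\, \phi(\mathbf{x})^{2}
  \;+\; \frac{\lambda_{0}}{4!}\, \phi(\mathbf{x})^{4}
\right]
```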
The interactions among the particles come from the phi-to-the-fourth term. If we're in fewer than four spacetime dimensions, then the coupling parameter lambda has the dimensions of mass to a positive power. There is a dimensionless parameter which characterizes the strength of the interactions. And the theory becomes strongly coupled, hard to simulate classically, as we approach the phase transition at which the physical mass of the particle in the theory gets very small.
In four spacetime dimensions, technically the theory becomes trivial as we go to very long distances, but it can still be interesting to simulate when we have a non-zero lattice spacing and the theory becomes strongly coupled at short distances. That would be of interest even if there is some ultraviolet completion which has a different structure beyond the scale of the lattice spacing. We might still want to simulate physics at energies which are low compared to one over the lattice spacing.
And in our work so far, we've always assumed that the theory has a mass gap, and that makes it easier to do simulations and, in particular, to do the adiabatic state preparation that we need to get started. How do we time evolve? Well, we just notice that the Hamiltonian can be split into two parts, one of which is diagonal in the phi basis and one diagonal in the pi basis. So we write the Hamiltonian as the sum of two such terms, one easy to simulate in the pi basis, one easy to simulate in the phi basis. And we Fourier transform back and forth every time we take a time step, from one basis to the other. So I'm not talking about the Fourier transform in space. I mean the Fourier transform of the field variable at each lattice site.
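A toy version of that split-step idea for a single truncated field mode (illustrative discretization and couplings, not the actual algorithm's parameters): alternate between the phi-diagonal and pi-diagonal factors, switching bases with a Fourier transform of the field variable rather than a spatial transform.

```python
# Toy split-step sketch for one truncated field mode: H = pi^2/2 + V(phi),
# with V(phi) = m^2 phi^2 / 2 + lambda phi^4 / 4!.  Each step applies the
# phi-diagonal factor in the phi basis and the pi-diagonal factor in the pi
# basis, moving between them with a Fourier transform of the field variable.
# Illustrative values only; not parameters from the talk.
import numpy as np

N = 64                                  # discretized field values at one site
phi_max = 6.0
phi = np.linspace(-phi_max, phi_max, N, endpoint=False)
dphi = phi[1] - phi[0]
pi_vals = 2 * np.pi * np.fft.fftfreq(N, d=dphi)   # conjugate-momentum grid

m2, lam, dt, n_steps = 1.0, 0.5, 0.01, 1000
V = 0.5 * m2 * phi ** 2 + lam * phi ** 4 / 24.0
T = 0.5 * pi_vals ** 2

psi = np.exp(-phi ** 2 / 2)             # a Gaussian trial state for this mode
psi = psi / np.linalg.norm(psi)

for _ in range(n_steps):
    psi = np.exp(-1j * V * dt) * psi    # phi-diagonal factor
    psi = np.fft.fft(psi)
    psi = np.exp(-1j * T * dt) * psi    # pi-diagonal factor
    psi = np.fft.ifft(psi)

print("norm after evolution:", np.linalg.norm(psi))   # stays ~1 (unitarity)
```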
So here's an example of a simulation protocol that we've analyzed. Let's say the input to the problem is some list of incoming momenta, or more precisely, wave packet states with relatively well defined momentum, which are spatially localizable. And what we want in the output is a list of outgoing particle momenta.
And so we could do that in the following way. We could start with the free theory, which we understand very well, and where the vacuum of the theory is just a Gaussian state. We have quantum algorithms for preparing Gaussian states efficiently, so we can prepare that free-field vacuum. And in the free field theory, we can also prepare wave packet states. One way of doing that is by simulating coupling a two-level atom to the field and allowing the atom to undergo spontaneous decay to create the wave packet.
And then, now we have the wave packets. We need to adiabatically turn on the coupling, because we want to study the interacting theory. And so we evolve forward in time while the coupling parameter slowly ramps up. Now as we do so, the wave packets will tend to propagate and spread. But we can avoid that by in effect tethering the wave packets so that they don't do so.
One way of doing that is we ramp up the coupling for a while, and then we evolve backward in time with the coupling fixed, and then we ramp up again, and then we evolve backward. And that prevents the wave packets from being deformed very much while the interactions are turning on. And once we have the interactions turned on to the value of the coupling we want to study, now we can let the wave packets go and simulate the time evolution with a fixed value of the coupling for a while, allow the wave packets to interact.
And then we need to do a readout. And one way of doing that is to adiabatically turn off the interaction by a similar method to the one we used to turn them on. And then we're back in the free theory, where it's relatively easy to measure the field modes to get the distribution of outgoing particles. There are alternative ways of doing these things, which we've also studied. For example, we could prepare the vacuum of the interacting theory, using the adiabatic preparation method, and then create the wave packet states, for example by turning on modulated source fields, which create single particle states. That would be advantageous if, for example, there are bound states in the theory and we want to study the scattering of these bound states, which aren't easily connected by adiabatic evolution to states of the free theory.
And in the final readout, we could, say, simulate a particle detector like a calorimeter, which measures the energy in a region in order to detect the particles in the final state and avoid the need to do the adiabatic turning off of interactions. It was already noted, of course we have to do this in some fixed frame. It's a Hamiltonian simulation. And we have the usual problem that Wilson warned us about that we have to do some tuning of parameters in the Hamiltonian in order to make the mass of the particles small compared to one over the lattice spacing. So we have to do some fine tuning to get close to a phase transition to have interesting physics to study.
So the sources of error that we need to worry about in that type of protocol are: we have a non-zero lattice spacing, we evolve in a finite spatial volume, and we have to discretize the fields and conjugate momenta, as I described. We'll do Trotter evolution, a series of discrete steps in time, and that step size being non-zero is another source of error. Now, if we're using adiabatic state preparation, we also have to worry about deviations from perfect adiabaticity. If we evolve slowly enough, then we guarantee that the evolution is adiabatic. But that might take way too long-- too many gates, too many resources. So we'd like to get by with the fastest evolution we can without a significant amount of diabatic excitation.
So here's an example of the kind of thing we've done. Let's say we want to talk about the self-coupled scalar field in two spatial dimensions, three spacetime dimensions. So the first thing we need to understand is how the error scales with the lattice spacing. And for that we can do a perturbative analysis. And that tells us how many qubits we'll need, or how many lattice sites we'll need, to encode a given physical volume. And so we can figure out how the number of qubits scales with the error that we're willing to accept coming from the non-zero value of the lattice spacing.
We have to worry about the cost of the preparation of that Gaussian state for the free theory. And doing that involves this matrix arithmetic, and so we can borrow results about the complexity of doing matrix arithmetic. Although actually in this case, because the state is translation invariant, there are tricks for speeding things up a bit. But that's kind of a standard algorithm that has been much studied. And now we'd like to know how the complexity scales with the energy. If we want to consider a more energetic process, the simulation will be more costly. And in the estimate that we did in this case, we get a rather unpleasant scaling with the energy, energy to the sixth power. And those powers of energy come from several different sources.
One is that, as we increase the energy, the Trotter errors become more significant. Or if we want to keep the error fixed, then we have to choose a smaller Trotter step size. Another is that, if we're going to study higher energy processes with a fixed error, we have to choose our lattice spacing to be smaller. And yet another problem is that, as the energy goes up, we have to worry more about the diabatic errors. If we prepare some wave packet state and then we want to dress it by turning on the interactions, when the energy is higher, the energy gap that the system has to jump across to produce unwanted particles during that ramping up of the coupling gets smaller. And so to keep that under control, we have to evolve more slowly. And that means more gates.
And so although we haven't costed this algorithm very carefully, if we were just going to study a process in two spatial dimensions, where two particles collide and produce four particles in the output, we would already need thousands of logical qubits. In fact, that would be true even in one dimension, if we wanted to get an error, say, at the 1% level. So it's expensive. This is the number of logical qubits that we need. And if we were going to do this in an error corrected device, the number of physical qubits could easily get into the millions with current state of the technology.
AUDIENCE: That 2.273 [INAUDIBLE].
JOHN PRESKILL: Yeah, it's, you know-- it's the asymptotic speed up in matrix multiplication. It requires-- yes. Yeah, so you can read that as 3 if you want to. Yeah, it's a statement about asymptotic scaling. OK.
OK, so now we can ask the question, is an algorithm like this one really solving a hard problem? Or at least, what can we say about the hardness of the problem that we're solving? And the computer scientist would phrase this as the question, are we considering an instance of a problem which is BQP-complete? BQP, that means the class of problems that can be solved efficiently with a quantum computer, BQP-complete means as hard as any problem that can be solved by a quantum computer. And it probably wouldn't surprise you to hear that quantum field theory is BQP-hard to simulate. But it's nevertheless kind of interesting to ask what are really the crucial ingredients for making it a hard problem?
So we can show that even in one spatial dimension, and even when the theory is very weakly coupled-- so to some extent, things can be analyzed using perturbation theory-- simulating real time evolution is still a BQP-hard problem.
AUDIENCE: So does this mean that you might be able to write down the answer with paper and pencil, but it would still be hard because it's hard to simulate? Or do you want to solve problems that you can't-- that are intrinsically complicated, where there's no analytic approach?
JOHN PRESKILL: I want to solve problems that are intrinsically complicated. Of course, you can always write down the answer. The question is, how hard is it to find the answer by computation? And so the claim that I'm making is that simulating a one-dimensional field theory is hard if you believe that simulating a quantum computer is hard. If quantum computers can solve problems efficiently that classical computers cannot, which is a conjecture, which nobody can prove from first principles, then there are instances of the simulation problem for a field theory in one dimension, which are hard in that same sense.
And what makes it hard is that even though the theory might be weakly coupled, we can consider a process in which, if we have many particles, the particles scatter many times. And as a result, the many-particle state becomes too highly entangled to describe classically or to simulate classically. The way we would turn that into a formal argument is we could imagine introducing, as an input to our problem, sources that couple to the field, which can be functions of both space and time. And I'll consider the case in which I have sources which both couple linearly to the field and also quadratically to the field.
And then we can formulate a decision problem. I should say that this is an input which can be expressed in terms of some finite number of bits. If I assume that the sources have their support in a finite interval of time and a finite spatial volume and also have a bounded bandwidth in frequency, then they can be represented by a finite set of real numbers, by the Nyquist-Shannon sampling theorem, and we can express those numbers to some number of bits of accuracy. And so that would be the input to the problem. And then the question we want to answer is: if we start out in the vacuum of the theory and we let these sources turn on and off again, do we stay in the vacuum or not? And in particular-- because whether you stay in the vacuum or not is probabilistic-- do you stay in the vacuum with probability greater than 2/3 or less than 1/3, given the promise that one or the other is the case?
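The counting behind that claim is just the standard sampling-theorem estimate (a paraphrase, not a formula from the talk): a source with frequency bandwidth W supported on a time interval of length T is determined by roughly

```latex
N_{\text{samples}} \;\approx\; 2\, W\, T
```

real numbers per spatial degree of freedom, and each of those numbers can then be rounded to a fixed number of bits.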
And how do you show that this problem is hard? Well, what you do is, for any quantum circuit that's polynomial sized, which you would execute to solve some problem with a quantum computer, you encode that problem into a choice of these sources. I would have preferred not to have the source which couples quadratically, since to a field theorist the linear coupling seems more natural. But we weren't able to get that to work, so we included both types of sources. The quadratic source you can think of as a mass for the particle which depends on space and time. And that creates a potential well in which a particle can be trapped, so we can confine a particle inside such a well.
And I can encode a qubit, for example, by having a pair of wells and a particle which occupies one well or the other. And we could then, by letting other sources change with time, have those qubits be manipulated. For example, I could have the height of the wells change so that there could be tunneling between the wells, which would cause the state of a single qubit to vary. Or I could have one of the wells be a little bit deeper than the other, which is another single-qubit transformation. And then in addition, because there are interactions in the theory, if I bring wells close to one another, there will be some phase shift that depends on whether those two wells are occupied or not.
And those operations suffice to give us a universal set of quantum gates. So we can simulate any quantum circuit efficiently using just such operations, which means that once we have particles in the wells, just by choosing the right J2, I can simulate any quantum computation I want. And in order to turn this into a vacuum-in, vacuum-out problem, we introduce this source which couples linearly, which can create particle excitations and remove particle excitations. And so the output can be determined just by observing at the end whether we're still in the vacuum or not.
Furthermore, this is a problem which we can solve efficiently using our quantum algorithm if these sources have bounded bandwidth. And then we can use our Trotter evolution tools to simulate the field theory problem. So that means it's what we call BQP-complete. The problem is in BQP-- it can be solved with a quantum computer. And it's BQP-hard-- it's as hard as anything we can solve with a quantum computer.
AUDIENCE: Right. [INAUDIBLE] you've got a quantum circuit. You're using it to simulate the field theory, which in turn, you've shown, can simulate a quantum circuit.
JOHN PRESKILL: Right. That's right. Thank you. Now, what would be an interesting thing to study in a one-dimensional theory, which might give us some physics insight? I want to consider one dimension because that's something that we might be able to do in the nearer term using a realistic device. And the thing is, in one dimension, we have pretty powerful classical tools for simulating a local Hamiltonian. Where those tools fail is when the state becomes highly entangled. So I could consider a process in which two particles scatter at sufficiently high energy in one dimension to produce a lot of outgoing particles. And when many particles are produced in the final state, that state is generically very highly entangled.
A crude way to think about this is two particles collide at high energy, they make kind of a hot spot. Then particles boil off for a while, and we see some thermal entropy carried by the particles moving to the left, and the same for the particles moving to the right. But really, the overall state is pure, so that entropy arises as entanglement between the left movers and the right movers. So there's a lot of entanglement in the state if many particles are produced.
That entropy scales roughly with the particle number. And once we get up to, say, 10 particles or so, the amount of entanglement-- or the dimension of the system on the left which is entangled with the system on the right-- is exponential in the entropy. And so that would become unmanageably large if we produce a particle number which is, say, large compared to 10. So at sufficiently high energy, this is a problem that we don't expect to be able to simulate with known classical tools, and it's something that we could try to study with a quantum computer.
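Stated roughly (a paraphrase of the scaling being described, not a formula from the talk): if the left-right entanglement entropy after the collision grows linearly with the number n of produced particles, then the bond dimension chi that a classical tensor-network description would need grows exponentially,

```latex
S_{\text{left--right}} \;\sim\; n
\qquad \Longrightarrow \qquad
\chi \;\gtrsim\; e^{S} \;\sim\; e^{\,n},
```

which becomes unmanageable once n is much larger than about 10.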
So in particular, we thought it would be interesting to study the theory in its broken-symmetry phase. We can consider the theory with the Z2 symmetry phi goes to minus phi; depending on the parameters in the Hamiltonian, that Z2 symmetry could be either manifest or spontaneously broken. The case where it's spontaneously broken is kind of interesting for several reasons. One is that the adiabatic state preparation method that I described won't work in an obvious way, because I can't adiabatically go across the phase transition from the free theory, in which the symmetry is manifest, to the broken-symmetry phase.
And in addition, in the theory with broken symmetry, we have a topological excitation, a kink, which interpolates between the two vacua. If I have a positive expectation value of the field on the right, a negative on the left, somewhere in between there's a kink. We can study that kink in the weak coupling limit using classical and semi-classical methods. So we know it's heavy when we're at weak coupling, heavy compared to the fundamental scalar particle in the theory. And so if we had a kink and an anti-kink and allowed them to annihilate one another, a lot of energy would be released, which semi-classically would mean producing lots of scalar particles.
That's something that we could study pretty well at the level of solving classical field equations when we're in the weakly coupled theory. But when we get close to the phase transition, when the mass gap is getting small, the theory becomes strongly coupled. And then it would be very hard to simulate classically. And so what we thought would be interesting to set up is a simulation where a kink and an anti-kink in the strongly coupled theory annihilate one another at high energy and produce a lot of outgoing stuff.
So here's where we come to another aspect of the Wilson legacy. There are good classical methods for studying systems in one dimension, which are not too highly entangled. Back in the '80s, there was a graduate student at Cornell named Steve White. He worked with Ken Wilson and John Wilkins. And then a few years after leaving Cornell, he developed what's called the density matrix renormalization group for studying one-dimensional physics. And then some years later, it was appreciated why DMRG works as well as it does in many cases of practical interest. And that is that the low energy states of a typical one-dimensional system, especially one that has a mass gap, are not so highly entangled.
So if we consider cutting the system in half and ask how much entanglement there is between the left half of the system and the right half, in a gapped one-dimensional system described by a local Hamiltonian, that's just some constant amount of entanglement which doesn't depend on the system size. So a nice way of capturing that is using matrix product states, which means that we take a system of, in principle, many lattice sites, although I've just indicated four here, and those physical indices corresponding to the variables on the lattice sites are the black lines in the picture. Each one of the blue boxes is a tensor with three indices.
And the blue indices are contracted with one another, and the black indices describe the many-site quantum state. You can represent any state on n sites this way. But what makes the matrix product state ansatz particularly useful-- and this is the key to DMRG-- is that when the states are only slightly entangled, the contracted internal index, the one shown in blue, can have a small dimension, and we can still get a very good description of the state. Now, what's useful for us is that once we have a matrix product state description of a state, we can translate that into a quantum circuit for preparing the state in a quantum computer in a series of steps. And the number of blue boxes that we need in that quantum circuit still scales only linearly with the size of the system, the number of sites. And the complexity of each box is manageable if the bond dimension-- the blue index-- isn't very large.
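As a generic illustration of the matrix-product-state idea (not the specific ansatz or truncation used in this work): the sketch below factors a small state into a chain of three-index tensors by repeated singular value decompositions, and the bond dimension that comes out reflects how entangled the state is.

```python
# Sketch: decompose an n-qubit state into a matrix product state by repeated SVD.
# Generic illustration of the tensor-network idea described above, not the
# specific ansatz used in the work being discussed.
import numpy as np

def state_to_mps(psi, n, d=2, tol=1e-10):
    """Return a list of 3-index tensors (left_bond, physical, right_bond)."""
    tensors = []
    remainder = psi.reshape(1, -1)              # (bond, rest of the chain)
    for site in range(n - 1):
        left_bond = remainder.shape[0]
        remainder = remainder.reshape(left_bond * d, -1)
        U, s, Vh = np.linalg.svd(remainder, full_matrices=False)
        keep = max(1, int(np.sum(s > tol)))     # bond dimension at this cut
        tensors.append(U[:, :keep].reshape(left_bond, d, keep))
        remainder = s[:keep, None] * Vh[:keep]
    tensors.append(remainder.reshape(remainder.shape[0], d, 1))
    return tensors

n = 6
# A product state needs bond dimension 1; a GHZ state needs bond dimension 2.
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
mps = state_to_mps(ghz, n)
print("bond dimensions:", [t.shape[2] for t in mps])    # -> [2, 2, 2, 2, 2, 1]
```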
So we'll have an efficient way of describing the system, which is not too entangled, and that translates into an efficient circuit for preparing the state. So in our case, we need several things. We would like to find, first of all, a matrix product state description of the vacuum in the theory. And there is a little issue that you have to worry about, which is that the variables at the sites are the field variables, which in principle are unbounded, but remember we were going to represent them with a finite number of qubits. So we have to worry about how many qubits we need.
And here it's helpful that we're just in one spatial dimension, where the field fluctuations have a size which scales like just a logarithm of one over the lattice spacing. So the local dimension that we need isn't too bad. And then we can actually rely on some rigorous results that tell us that DMRG, which is a variational search for the ground state using matrix product states, will actually converge in a way which can be achieved by an efficient classical computation.
So what we're trying to do here is a kind of hybrid classical quantum scheme. We want the classical computer to do a lot of the work in order to reduce the quantum resources that we need. And one thing we can do is prepare the ground state by doing DMRG. That's a classical computation. Once we know the result, we know what quantum circuit to use to prepare the ground state.
But I don't want to prepare just the ground state. I want to prepare a kink, a topological excitation. And so the way to do that is, instead of the vacuum-- where I can choose the state to be translation invariant, so that all the tensors have the same form-- I'm not sure you can see my subscripts, but I have a description of the two degenerate vacua, one with a positive expectation value of the field and one with a negative one. They're actually related by the Z2 symmetry, and either one is translation invariant.
So I have local tensors describing the field variables at each site: A plus for the vacuum with positive expectation value, and A minus for the negative one. And then in the case of a kink, I will want to have an interpolating operator between, say, the minus vacuum and the plus vacuum. And that can also be described by a matrix product state-- no longer translation invariant-- which will have A minus on one side, A plus on the other, and some other tensor in between, which describes connecting the two vacua together in the kink solution. And in the strongly coupled quantum theory, we can give a general argument that we can get a good approximation to the kink, where that tensor B doesn't act on a single site but acts on a constant number of sites.
And then we can have an error which gets exponentially small once the support of the tensor B is large enough. So if I want to find the case of a single kink-- which here I've done with open boundary conditions, but we could also do it with twisted boundary conditions, so there would be just one kink-- the idea is to variationally minimize the energy subject to this translation-invariant ansatz. We'd be finding the zero-momentum state with the lowest possible energy which has kink number 1. And that would give us a description of the kink, which we could load into a quantum computer if we wanted to initialize the state.
But what I really want is a kink and an anti-kink. There's a similar description of that, where I start out with the minus vacuum described by A minus tensors, go to the plus vacuum, and then back to the minus vacuum, with now two interpolating tensors. And the nice thing that helps us out here is that, because of the Lorentz covariance of this theory, once we know the right tensor at zero momentum, we can figure out what the right tensor is for other values of the momentum just by applying a Lorentz boost. And that means we can also prepare wave packet states.
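Schematically, the ansatz being described can be written as follows (my notation, not a formula from the talk; B and B-bar are the interpolating tensors, each supported on a constant number of neighboring sites, the s_i run over the truncated field values at each site, and the matrix products are contracted over the bond indices):

```latex
|K\bar{K}\rangle \;\approx\; \sum_{\{s\}}
\Big( A^{-}_{s_{1}} \cdots A^{-}_{s_{i}} \;\, B_{s_{i+1} \cdots\, s_{i+\ell}} \;\,
      A^{+}_{s_{i+\ell+1}} \cdots A^{+}_{s_{j}} \;\, \bar{B}_{s_{j+1} \cdots\, s_{j+\ell}} \;\,
      A^{-}_{s_{j+\ell+1}} \cdots A^{-}_{s_{N}} \Big)\, |s_{1} s_{2} \cdots s_{N}\rangle
```

The single-kink state is the same construction with only one interpolating tensor, ending on the A plus tensors.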
So what I'm describing here is a description, which would apply in the strongly coupled theory, of a kink and an anti-kink that are widely separated from one another. We have an accurate description, with a small error that we can estimate, of that initial state. As a matrix product state, that can then be translated into a quantum circuit which describes how to initialize my quantum computer with this state. And then we can evolve it forward and let the kink and anti-kink collide. Yeah.
AUDIENCE: So something swept under the rug there, if I understand: you're talking about the support of this operator B that interpolates between the minus and the plus vacuum, and how it's bounded. And so presumably the accuracy of your simulation depends on, in your discretization, how many qubit sites you've got covering that operator. So how fine does your discretization have to be in order to be able to simulate this with the accuracy you want?
JOHN PRESKILL: Well, at this point, I can't give you a very precise number. We can make use of a result that tells us that if there's a gap between the kink sector and the sector with more than one particle-- which will be the case if the kink has a gap and also the scalar particle has a gap-- then it's possible to get a good approximation to a single-particle state with a local operator. This the mathematical physicists have proved. And once the operator is big enough, but still of constant size, the error gets exponentially small. But I don't have that number.
So then we could evolve forward--
AUDIENCE: The error is exponentially small in the support of the kink matrix B?
JOHN PRESKILL: Yes. So then we can evolve, and then we have to decide what observables we want to measure. For example, it's interesting to measure the time dependence of the flux coming out of the interaction region. How long does it take this fireball that we create in the collision to evaporate? That's something that we really don't have any way to compute analytically. But in a simulation, by essentially simulating a calorimeter and running for different amounts of time, I would be able to infer the time profile of the outgoing flux.
OK, so I guess we're past 2:00. Let me just summarize some of what I've talked about. What have we done so far? Well, we have some results about the polynomial resource scaling for simulating scattering in scalar field theory. And we've done a comparable analysis for Yukawa theory, with scalars and fermions interacting. We have an argument for BQP-hardness-- that is, the problem being as hard as anything a quantum computer can solve-- in the case of a one-dimensional weakly coupled simulation. And we've been analyzing-- and this is work still in progress with my student [INAUDIBLE] and postdoc [INAUDIBLE]-- this hybrid classical-quantum algorithm where we use DMRG to help us with the preparation of the initial state, and then do the dynamical evolution or simulation using a quantum computer.
Now meanwhile, there's a lot of other work that's been done. I have a reference here that gives some of the references to recent papers not done by us. But there have been a number of studies now using these classical DMRG methods of quantum electrodynamics in one dimension, the sort of simplest gauge theory there is. It's a little bit too simple, so this isn't as hard as you might think, because in one dimension, the gauge field isn't really dynamical. There's no photon in one dimension. It's really just a constraint. You can integrate it out, and it becomes a spin system. And you can use DMRG methods to study that system.
And you can, by classical methods, find the spectrum of the low lying excitations. And you can also do classical simulations of dynamics, for example, the process in which you introduce a background electric field, which will eventually decay because of the nucleation of charged particle pairs. And that's all been analyzed by others using these tensor network tools in one-dimensional QED.
And there have also been some simulations done with real quantum devices, but just at the level of a few sites, of both static properties and dynamic properties. I mentioned one of those in my colloquium-- the recent calculations by the Innsbruck group, again for one-dimensional QED, with 20 sites, using an ion trap. And there are various proposals for how we might use analog simulators based on ultracold atoms to study gauge theories, including non-abelian ones, in one dimension. And what's still in progress is understanding how to extend such simulations to two dimensions and beyond, where we probably have a lot more to learn. So-- mhm.
[? AUDIENCE: ?] I'm a little perplexed now, because you were emphasizing your quantum circuits, and therefore you needed error correction.
JOHN PRESKILL: Yeah, those people don't have it. But what they do is kind of interesting. They need some mitigation against the noise. The circuits are small, and in the case of the dynamical simulations, in order to mitigate the noise, they do an extrapolation to the zero-noise limit. What they actually do is they perform the circuit several times, but each time they increase the gate time, which means a higher error rate. Then they look at how the results scale with the error rate in the individual gates, and they try to extrapolate to zero error rate.
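A minimal sketch of the zero-noise extrapolation idea just described, with fabricated numbers standing in for real circuit runs: measure the observable at several artificially inflated noise scales, fit a low-order curve, and evaluate the fit at zero noise.

```python
# Zero-noise extrapolation (ZNE) on made-up data.
import numpy as np

# Noise-scaling factors: 1.0 is the hardware's native error rate; larger
# values correspond to deliberately stretched (noisier) gates.
scale_factors = np.array([1.0, 1.5, 2.0, 3.0])

# Measured expectation values of some observable at each noise scale
# (fabricated numbers standing in for real circuit runs).
measured = np.array([0.72, 0.64, 0.57, 0.45])

# Fit a low-order polynomial in the noise scale and evaluate it at zero.
coeffs = np.polyfit(scale_factors, measured, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)

print(f"Zero-noise extrapolated value: {zero_noise_estimate:.3f}")
```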
And this doesn't work too badly in these very small systems with just a few qubits. In the case of studying static properties in the ion trap, which has been done with 20 sites, that's a really good device, and they have not seen any indication that decoherence is a limitation at the 20-site level. They're going to try to do it with 50 sites, and we'll see how well it works.
So there are a lot of things we could do better. I told you about some of our results on resource scaling. That could certainly be done more carefully; I think our results are conservative and most likely way too pessimistic. One thing it would be interesting to study is using renormalization group improvement of the Hamiltonian to get closer to the continuum limit at a larger value of the physical lattice spacing, which would reduce the resource cost. We're really just getting started in understanding how to simulate gauge theories. And there, if we could figure out how to encode the gauge-invariant states as economically as possible, without introducing unneeded gauge degrees of freedom, we'd be able to save a lot of qubits. I think there's an opportunity there.
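As a rough, back-of-the-envelope illustration of the potential savings in the one-dimensional case (a toy count, not a statement about any particular encoding): a naive encoding stores a truncated electric field on every link in addition to the matter, while a gauge-invariant encoding on an open chain needs only the matter, since Gauss's law fixes the links.

```python
# Toy qubit count: naive (matter + links) vs. gauge-invariant (matter only)
# encoding for a 1D U(1) lattice theory with a truncated electric field.
import math

n_sites = 10
n_links = n_sites - 1        # open chain
cutoff = 3                   # electric field truncated to E in {-3, ..., +3}

link_states = 2 * cutoff + 1
naive_link_qubits = n_links * math.ceil(math.log2(link_states))
matter_qubits = n_sites      # assume one qubit per staggered-fermion site

print("naive encoding (matter + links):", matter_qubits + naive_link_qubits, "qubits")
print("gauge-invariant encoding (matter only):", matter_qubits, "qubits")
```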
We've sort of begun the study of topological defects in these simulations, with the kink simulations that I told you about. We haven't really tried to study systems with massless particles, where these adiabatic methods run into the difficulty that you're going to produce lots of soft stuff when you change the Hamiltonian slowly if there's no mass gap. And I think the key thing there is to formulate the right observables, which are insensitive to the very soft quanta that are created by those non-adiabatic effects. We'd also like to know more about how to simulate conformal field theory, which would be the path toward simulations of quantum gravity in AdS through holographic duality.
And there may be other ways to use the power of a quantum computer to teach us things about quantum field theory. For example, there are very beautiful results that have been obtained recently using the conformal bootstrap method, which is really a way of solving a bunch of constraints by finding a solution to a semidefinite program. And that's something that quantum computers can probably speed up. So there may be an opportunity for quantum advantage there.
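For readers unfamiliar with the term, a semidefinite program is an optimization over positive semidefinite matrices subject to linear constraints. The toy below is generic and unrelated to the actual bootstrap equations; it just shows what such a problem looks like, and it assumes the cvxpy package is available.

```python
# A generic toy semidefinite program (not the conformal bootstrap itself).
import cvxpy as cp
import numpy as np

n = 3
C = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,                # X must be positive semidefinite
               cp.trace(X) == 1]      # a linear constraint on X
problem = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
problem.solve()

# For this objective and constraint set, the optimum is the smallest
# eigenvalue of C (about 0.586 here), which gives a quick sanity check.
print("optimal value:", problem.value)
```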
And it's really important to think about what are the interesting things we can do with near-term devices. One reason I like this kink-kink scattering problem is that you don't really have to be close to the continuum, I think, for it to be interesting. You can consider a collision between domain walls in a spin system at high energy, which would be hard to simulate classically but might be interesting to study, and would be easier to study on near-term devices.
We really would like to have fresh ideas about how to build some noise resilience into the simulations with near-term devices, to make them more likely to be within reach in the near term. This is the type of enterprise that I like because it's quite interdisciplinary. I think to make progress, one needs to combine the things that the quantum algorithm experts know with the things that the field theorists know.
And I don't know, I don't want to overpromise how soon we'll get some real physics input out of simulating quantum field theory on a quantum computer, though eventually I think it's going to be very informative. But even in the near term, I think thinking about simulating quantum field theory on a quantum computer gives us a different perspective, which is likely to be fruitful and lead to new insights. Thanks for listening today, and for the rest of the week.
[APPLAUSE]
AUDIENCE: I like this general story of trying to use digital quantum computers to get at interesting quantum mechanical things like field theories. But I can't help but feel that this example that you give of a kink-kink collision just feels arbitrary, right? I mean, there's an infinite number of dynamical things you could set up that are going to end up being too complicated in the end. And so I guess what I'm wanting is some organizing structure for thinking about what sorts of dynamical problems could be interesting. And maybe this one is, but your articulation of it isn't quite there.
JOHN PRESKILL: Yeah, I was looking for something that would be one-dimensional and that you could argue would be classically hard. So I wanted some process in which you generate a lot of entanglement. There's a tool for simulating dynamics in one dimension: you can use matrix product states and update the state as it evolves -- sometimes people call it time-evolving block decimation. And that grinds to a halt when you generate too much entanglement and the bond dimension blows up. So I was trying to think of a process in which that would happen.
You could also consider some kind of quench, where you start out in a low-entanglement state, and the quench gives rise to dynamics that generate a lot of entanglement. But maybe partly because we were already on this program of studying scattering, I was interested in one-dimensional scattering. Now, we didn't really have to make it kink-anti-kink scattering. I just threw that in because there was another interesting conceptual question about how to do the state preparation. And, well, a lot of the effort in this project has been to try to figure that out.
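A small exact-diagonalization sketch of the point about entanglement growth, using a toy global quench in a transverse-field Ising chain (my choice of model, not the theory from the talk): after the quench the half-chain entanglement entropy grows, and with it the number of significant Schmidt values, which is roughly the bond dimension a matrix-product-state or TEBD simulation would have to carry.

```python
# Global quench in a small transverse-field Ising chain: track half-chain
# entanglement entropy and the effective Schmidt rank over time.
import numpy as np

L = 10                        # chain length (keep small: Hilbert dim 2**L)
dim = 2 ** L

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def site_op(op, i):
    """Embed a single-site operator at site i of an L-site chain."""
    mats = [I2] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# H = -sum Z_i Z_{i+1} - g sum X_i
g = 1.5
H = np.zeros((dim, dim))
for i in range(L - 1):
    H -= site_op(Z, i) @ site_op(Z, i + 1)
for i in range(L):
    H -= g * site_op(X, i)

evals, evecs = np.linalg.eigh(H)

# Quench from the all-up product state and watch the half-chain Schmidt data.
psi0 = np.zeros(dim); psi0[0] = 1.0
coef = evecs.T @ psi0
for t in [0.0, 1.0, 2.0, 4.0]:
    psi_t = evecs @ (np.exp(-1j * evals * t) * coef)
    s = np.linalg.svd(psi_t.reshape(2 ** (L // 2), 2 ** (L // 2)),
                      compute_uv=False)
    p = s ** 2
    entropy = -np.sum(p[p > 1e-12] * np.log(p[p > 1e-12]))
    chi = int(np.sum(s > 1e-6))   # Schmidt values above a truncation threshold
    print(f"t = {t:.1f}:  S = {entropy:.2f},  required bond dimension ~ {chi}")
```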
PAUL GINSPARG: Even with that problem, notice how careful he was to qualify that he did not expect to be able to do it in this era.
[LAUGHTER]
JOHN PRESKILL: Well, I didn't want to be attacked. So I-- yeah. Yeah.
AUDIENCE: Are there any prospects for, instead of trying to simulate dynamics, simulating things like RG flow using this kind of technology?
JOHN PRESKILL: So the question, I think was, could we simulate RG flow rather than dynamics in real time? I don't know. You got a good idea about how to do that?
AUDIENCE: I don't even know what the tools are, what I'd have access to.
AUDIENCE: Are RG flows real, though? What does it mean to--
AUDIENCE: Well, you said you can simulate imaginary-time Hamiltonian evolution, which I don't know how to do either, if I only have access to unitary gates.
JOHN PRESKILL: I mean, we have something like that in AdS/CFT. You know, if you fall from the boundary into the bulk, you're going to a longer and longer scale. And that bulk dynamics is really telling you something about renormalization group flow. I can't think of anything more concrete than that.
AUDIENCE: Can you speak a little bit more about the gauge theories? Because I know that's very closely associated with the state-preparation program.
JOHN PRESKILL: So can I say more about simulating gauge theories? Was that the question? Yeah. Well, as far as the work that's been published, it's almost all in one dimension, where the way people do it is they integrate out the gauge field, which you can do in one dimension, because it's just a constraint. So I have been thinking about the following question: how do you do the state preparation in QCD? Like, what if I wanted to study hadron scattering in QCD and wanted to follow the rough outline I described, in which we adiabatically prepare the state, then evolve it, and then read something out?
But we don't want to start with the free theory for QCD, because that's a very singular limit of QCD. What's more natural is to start with the strongly coupled theory, where we can construct analytically the vacuum of the Kogut-Susskind Hamiltonian on a lattice in the limit of infinite coupling. And then we would like to study what happens as you let the lattice coupling drift down, so that the confinement scale gets bigger and bigger in lattice units. What is the resource cost of preparing a state that you could use as an initial state in QCD? I've been thinking about that a little with my student Alex Boozer, but we don't have very concrete results yet.
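A toy sketch of adiabatic state preparation from an exactly solvable starting point, in the spirit of the strong-coupling strategy just described but using a small transverse-field Ising chain instead of the Kogut-Susskind Hamiltonian: start in the exact ground state of a simple Hamiltonian, slowly interpolate to the target Hamiltonian, and check the overlap with the target's true ground state. All model choices and parameters below are illustrative assumptions.

```python
# Adiabatic ramp H(s) = (1 - s) H0 + s H1 on a small spin chain.
import numpy as np

L = 6
dim = 2 ** L
Z = np.diag([1.0, -1.0]); X = np.array([[0.0, 1.0], [1.0, 0.0]]); I2 = np.eye(2)

def site_op(op, i):
    mats = [I2] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# H0: strong transverse field, whose ground state is a simple product state.
H0 = -sum(site_op(X, i) for i in range(L))
# H1: target Ising-type Hamiltonian with a weaker field.
H1 = -sum(site_op(Z, i) @ site_op(Z, i + 1) for i in range(L - 1)) \
     - 0.5 * sum(site_op(X, i) for i in range(L))

# Start in the exact ground state of H0: all spins along +x.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi = plus
for _ in range(L - 1):
    psi = np.kron(psi, plus)
psi = psi.astype(complex)

# Slowly ramp the Hamiltonian, stepping with small exact exponentials.
T, steps = 40.0, 400
dt = T / steps
for k in range(steps):
    s = (k + 0.5) / steps
    H = (1 - s) * H0 + s * H1
    evals, evecs = np.linalg.eigh(H)
    psi = evecs @ (np.exp(-1j * evals * dt) * (evecs.conj().T @ psi))

# Compare with the exact ground state of the target Hamiltonian.
gs = np.linalg.eigh(H1)[1][:, 0]
print("overlap with target ground state:", abs(gs.conj() @ psi))
```

For a slow enough ramp (large T) the overlap approaches one, which is the sense in which the resource cost of the preparation is tied to the gap along the interpolation path.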
AUDIENCE: Could it be useful to start from some exactly solvable limit, and somehow [INAUDIBLE] the Hamiltonian away?
JOHN PRESKILL: Yeah, I mean, that's the program. The question is, what exactly solvable limit do you want? The two I've just mentioned are the free-field limit and the infinite-coupling limit. But if you have another one, I'd be happy to try it.
AUDIENCE: The problem [INAUDIBLE] but if some other gauge theory-- not [INAUDIBLE].
JOHN PRESKILL: Oh, I see. For other theories? I don't know. I'd love to hear ideas about that.
AUDIENCE: [INAUDIBLE].
JOHN PRESKILL: Yeah.
AUDIENCE: You know the state, and you apply the field, and you go away from the exactly solvable limit.
JOHN PRESKILL: Yeah, that's sort of analogous to what I call the infinite-coupling limit, because where the Kitaev model is exactly solvable, the correlation length is zero. And then if you perturb away from that limit, say, by introducing a magnetic field, then you generate a finite correlation length. And the theory's gapped, so you can do adiabatic state preparation. And then I'd have to figure out what to do with that state. I mean, maybe you have a good idea.
AUDIENCE: I don't know, I just thought of it. Maybe I'll go think about it a little more.
PAUL GINSPARG: It's a high bar, though, because you're expecting to learn as much from this as Wilson did from [INAUDIBLE].
[LAUGHTER]
JOHN PRESKILL: Hey, it is a high bar. But you know, I don't know whether I should take comfort or be daunted by this. But when you think about it -- and you know this very well, Paul -- Ken got the ball rolling on lattice QCD. But if you had talked to him in the 1980s, he would have said it's going to be decades before there will be enough computing power to do really state-of-the-art hadronic physics on a computer. He was right, but now a lot of interesting physics is coming out of lattice QCD.
But it's actually been 40 years, right, since the late '70s. And so, you know, I'm 66. And-- oh, all right. Well you-- well, you've got a chance. But I think, you know, if people-- remember how exciting it was, or at least I was excited when Creutz calculated the string tension? That was in the late '70s, and we thought, wow, you know, we're going to learn a lot about QCD. And maybe it won't be-- Ken was wiser. He said, no there's not enough computing power. You know, that's what Feynman thought, too. He said, no, it's not going to happen because there's not enough computing power. But it's a good thing people worked on it. They pushed it forward. And it took decades. And now we're learning about physics from it. And I think this will be like that.
PAUL GINSPARG: And you're planning to live until the age of 100.
JOHN PRESKILL: At least. You know, thanks to advances in medical technology, perhaps. Quantum technology may extend our lifetimes, and then I've got a shot.
PAUL GINSPARG: OK, I don't see any more questions. John will be around for the remainder of the afternoon, and he's still young and energetic, so he'll take questions.
[APPLAUSE]
Forthcoming exascale digital computers will further advance our knowledge of quantum chromodynamics, but formidable challenges will remain. In particular, Euclidean Monte Carlo methods are not well suited for studying real-time evolution in hadronic collisions, or the properties of hadronic matter at nonzero temperature and chemical potential. Digital computers may never be able to achieve accurate simulations of such phenomena in QCD and other strongly-coupled field theories; quantum computers will do so eventually, though I'm not sure when. Progress toward quantum simulation of quantum field theory will require the collaborative efforts of quantumists and field theorists, and though the physics payoff may still be far away, it's worthwhile to get started now. Today's research can hasten the arrival of a new era in which quantum simulation fuels rapid progress in fundamental physics.
As part of the Spring 2019 Hans Bethe Lecture Series at Cornell, physicist John Preskill presented the LEPP Joint Seminar, "Simulating Quantum Field Theory with a Quantum Computer," on April 12. Preskill is the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology and director of the Institute for Quantum Information and Matter at Caltech. He received his Ph.D. in physics in 1980 from Harvard and joined the Caltech faculty in 1983.
The Hans Bethe Lectures, established by the Department of Physics and the College of Arts and Sciences, honor Bethe, Cornell professor of physics from 1936 until his death in 2005. Bethe won the Nobel Prize in physics in 1967 for his description of the nuclear processes that power the sun.