PAUL GINSPARG: OK. So while people are still coming in, I'm going to provide the filler and announce the lecture. Welcome to the second of three lectures from John Preskill, director of the Institute for Quantum Information and Matter at Caltech. This is the lecture intended for the general public, which means we have the ritual slideshow.
Hans Bethe had already retired when I arrived here as a graduate student in the late 1970s, but he remained a fixture over there in Newman lab. So I did get to interact with him a bit. I'm going to give my own take on the slides that have been shown here frequently.
We start with-- he was born in 1906 in Strasbourg, Germany. And I was just pausing here. I have a note to remind everybody to pay attention, because when I ask random undergraduates at Bethe House on West Campus why it's named after him, they all think he must have been some kind of big donor, without realizing that he was a scientist.
So that's why we do this. His father's academic position had the family moving around a bit. They ultimately settled in Frankfurt, and he began his higher studies there, in 1924, in chemistry. He ultimately switched to physics at the University of Munich, where he completed his doctorate with Arnold Sommerfeld in 1928.
This timeframe, incidentally, put him among the first of the young scientists to explore applications of the then new quantum theory. This was in the late 1920s. And, indeed, at age 24 he wrote a classic article on spectroscopy, one of the first applications of group theory in quantum mechanics.
During this period of his late 20s, he went to a few places, including a few places in Germany. He spent some time in Cambridge, UK, and he also spent some time as a visitor in Rome, working with Enrico Fermi. So he got to know the major physicists at the time.
Things took a downturn when he took a position as assistant professor at the University of Tübingen-- a government-funded post. Under an edict issued shortly after he arrived, he was dismissed for having Jewish ancestry. And reading the writing on the wall, in 1933 he left Germany for good.
While briefly back in England, he was recruited to Cornell. And this is a photo of him. He arrived in 1935. There he is. And this is on the steps of this very building we're in right now. This is before either Clark, of course, or even Newman lab. We'll see the groundbreaking for that in a moment.
And he started at a salary of $3,000 per year. But it instantly put the physics department here on the map. This was before the surge of European physicists that made US physics preeminent after World War II.
Within a few years of arriving, he explained how stars burn hydrogen into helium, founding the field of nuclear astrophysics-- work for which he won the Nobel Prize somewhat later, in 1967. Now this photo-- oh, this is him calculating. That odd implement that he's holding, I think, was used for astrological purposes. I'm not sure.
[LAUGHTER]
This photo is the one I worked hard to find. It shows Bethe with Werner Heisenberg. Oh, let's [INAUDIBLE] the mic. And they are discussing the wonderful facility that young physicists have with applied technology.
[LAUGHTER]
You had to-- there you go, right? OK. He took a few years of leave from Cornell during World War II to be head of the theoretical division at Los Alamos. He reported to Oppenheimer, whom he later supported, testifying in his defense when Oppenheimer's security clearance was revoked.
Feynman-- Richard Feynman, who reported to him during this period-- and this I have to read verbatim-- described him as, quote, "a battleship surrounded by an escort of smaller vessels--" the younger theorists-- "moving majestically forward through the ocean of the unknown."
When I was an employee-- oh, there he is. When I was an employee at Los Alamos, 50 years later, it was nowhere-- no longer anywhere near the ocean, as far as I could tell.
[LAUGHTER]
Live mic. OK. Here is the photo I mentioned-- the groundbreaking for Newman Lab, over there somewhere, around 1946.
A year after this, famously, he went to a conference at Shelter Island in 1947-- physicists of my generation still have this etched in our memory, even though it was before we were born-- where he heard about an anomalous shift, now known as the Lamb shift, in the energies of the first two excited states of the hydrogen atom.
And on the train back from New York City, he did a calculation, which we would now describe as mass renormalization, and with it was able to calculate this effect. And this paved the way for the revolution in quantum electrodynamics. Oh, there he is.
Afterwards, he got-- well, he continued to science, of course. He got involved in a variety of things. Following World War II, he was one of the founders of the Federation of American Scientists. This is at nuclear test ban talks. And he played a role-- a leading role in the public debate about nuclear weapons defense policy and nuclear power.
He was-- oh, making sure we see who he is-- he was an advisor to several US presidents on national security policy. He opposed the development of the hydrogen bomb. And his work on arms control helped lead to the Limited Test Ban Treaty of 1963. That was the one that banned underwater, atmospheric, and space testing. And I think-- yeah, that's [INAUDIBLE] the left.
1967-- the Nobel Prize I mentioned. He continued, afterwards, to be a critic of defense policies, including opposing the Star Wars antiballistic missile program. He promoted peaceful applications of nuclear energy. I heard him give a number of talks about this.
And while he retired from Cornell in the mid 1970s, after 40 years on the physics faculty, the informal collegial environment that he fostered here remains one of his legacies. But he did not stop working. In 1986, at the age of 80, he helped solve the solar neutrino problem that had mystified stellar astronomers for 20 years.
And, overall, he wrote a major article-- at least one for every decade of his 70 plus year career, including an article during his 90s about supernova explosions. He passed away in 2005 at the age of 98. But I think he got the last laugh.
OK. So now, moving to today's speaker-- oh that took long-- whom I first met in 1975 when we took a group theory course together at Harvard given by Sheldon Glashow, a Cornell alum. John had just arrived following his undergraduate years at Princeton.
And he went on to complete his thesis in 1980, advised by Steven Weinberg, yet another Cornell alum. He remained at Harvard as a member of the Society of Fellows, then as junior faculty, and left for Caltech in 1983, where he's remained since. He's currently the Richard P. Feynman Professor of Theoretical Physics.
Feynman having been one of Bethe's recruits to Cornell in the late 1940s. And with all these connections, we consider John a virtual Cornellian at this point. He began his career in particle physics and cosmology, but in the 1990s he became tantalized by the possibilities of using quantum physics to solve otherwise intractable computational problems.
He's been a leader and major proponent of this ever-expanding field ever since. And then, finally, I do have to make one apology to the speaker. We were unable to use his proposed abstract in its entirety. Apparently his quantum supremacist reputation having preceded him here.
The total abstract actually had these two additional words [INAUDIBLE] minds. And the powers that be asked me if they could excise the word feeble. I actually have no idea what the issue was, but-- I still don't. But I looked at my thesaurus and made a few suggestions. I tried--
[LAUGHTER]
I tried--
[LAUGHTER]
And a few other inspired possibilities, but they were all nixed. And then I said, look, it's not as though he wrote what I would have written.
[LAUGHTER]
Still, no go. So after agonizing over this for months, I just left it, and I said, OK, fine, you win. Excise both words. So there. So, sorry about that, John. On the other hand, they did permit the final sentence here, verbatim. Although, you know, frankly, I'd say the jury is still out on those two points. So let us all welcome John to make the case.
[APPLAUSE]
JOHN PRESKILL: Well, thank you very much, Paul. It's a pleasure to be welcomed as a Cornellian by a scientist I much admire and have known for a long time. I think it would be uncontroversial to say that Paul Ginsparg is one of a kind.
And like many physicists, I very much admired Hans Bethe. In the years after he officially retired from Cornell, he was a frequent visitor to Caltech. And he never lost his zest for science or, as Paul said, his ability to rack up scientific accomplishments, even quite late in life.
He was a remarkable man and scientist. And I'm honored to be this year's Bethe lecturer. The topic of my lecture is quantum physics, a topic that Hans Bethe knew well. And it's also about information.
Now, we have information technology today, which is essential in our daily lives. But we all recognize that today's impressive technologies are to be replaced in the future by new technologies we can scarcely imagine today. It's fun, just the same, to speculate about future information technology.
I may not be the best person to engage in that type of speculation. I'm not an engineer. I'm a theoretical physicist. And perhaps I'm not extremely knowledgeable about how computers really work.
But, as a physicist, I know that the crowning intellectual achievement of the 20th century was the development of quantum theory. And it's natural to wonder how the development of quantum theory in the 20th century will impact the technology of the 21st century.
Quantum physics is a rather old topic by now. But some of the deep ways in which quantum systems-- systems obeying the rules of quantum mechanics-- are different from classical systems, we've only come to appreciate relatively recently. And those differences have a lot to do with the properties of information carried by physical systems.
To a physicist, information is something that we can encode and store in the state of some physical system, like the pages of a book. But, fundamentally, all physical systems are really quantum systems. They obey the rules of quantum mechanics. So information is something that we can encode and store in a quantum state.
And information carried by quantum systems has some notoriously counter-intuitive properties. That's why physicists like to speak of the weirdness of quantum theory, and physicists relish that weirdness.
But we're beginning to take more seriously the idea that we could put that weirdness to work to exploit the unusual properties of quantum information to perform tasks that would be impossible if this were a less weird classical world. That desire to put the weirdness to work has driven the emergence of a new field, what we call quantum information science.
And that subject derives much of its intellectual vitality from three central ideas-- quantum entanglement, quantum computation, and quantum error correction. And the goal of this talk is to explain those ideas.
So let's start at the very beginning. As you know, ordinary information can be expressed in terms of indivisible units-- bits of information. And you might think of a bit as an object-- let's say, a ball which can be either one of two colors. Let's say red or green.
I can put a bit inside a box. And then, later on, when I open the box, the colored ball that I've put in comes out again. So you can recover a bit and read it. Now, information stored in a quantum system-- what we call quantum information-- can likewise be expressed in terms of indivisible units-- what we call quantum bits, or qubits for short.
And for many purposes, we can envision a qubit as an object stored inside a box. But where now we can open the box through either one of two possible doors, where those doors represent two complementary ways we can prepare the state or observe the state of a qubit.
And I can put a ball inside the quantum box through either door number one or door number two. And then if, later on, I open that same door again, the colored ball that I put in comes out again, just as for classical information.
But if I put information through door number one of the qubit and then, later on, open door number two, then what comes out is completely unpredictable. The color is chosen uniformly at random. So to read the information in the quantum box, you have to know what you're doing. If you do it the wrong way, you'll damage the information and it can't be recovered.
So one consequence of that we can appreciate if we think about copying a qubit. Suppose I had a qubit copy machine. What would that mean? It would mean that I could put information in the qubit through door number one and then make the copy. And then, when I open door number one on the original and the duplicate, the color that I put in would come out of both boxes.
And likewise, if I put information through door number two-- put the colored ball through door number two and made a copy-- and then opened door number two of the original and the duplicate, the color I put in would come out of both boxes. But, in fact, there's no such machine-- no device that copies qubits is allowed by the principles of quantum mechanics.
And the reason is that in order to make the copy, the copier has to probe inside the box. And if it happens to guess right and open the same door that I use, then it could copy the information just as though it were classical.
But if it guesses wrong and opens the wrong door, then it will irrevocably damage the information, and there will be no way to produce a high-fidelity copy. So we might be able to clone a sheep, but we can't clone a qubit.
Now, as you see, I like to think of qubits in sort of an abstract way. But a qubit always has some physical realization. And that physical realization can be chosen in many ways. Just so you have something concrete to think about, I'll give one example now and mention some others later in the talk.
The qubit could be carried by a single photon, a particle of light, which has an electric field which oscillates. And if that oscillating electric field is either horizontal or vertical in orientation, those correspond to the two possible states we could observe through door number one of the box.
But if I consider polarizations rotated by 45 degrees, the electric field along those directions corresponds to what I would see if I looked through door number two of the box. So I could prepare a horizontally oriented photon and then observe along the rotated axis. And, in doing so, I would just observe a random bit.
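The two-door box can be made concrete with a few lines of linear algebra. This is my own sketch, not from the lecture: door number one is the horizontal/vertical photon basis, door number two the 45-degree rotated basis, and the Born rule gives the outcome probabilities.

```python
import numpy as np

H = np.array([1.0, 0.0])       # horizontal polarization ("door one, green")
V = np.array([0.0, 1.0])       # vertical polarization   ("door one, red")
D = (H + V) / np.sqrt(2)       # +45 degrees             ("door two, green")
A = (H - V) / np.sqrt(2)       # -45 degrees             ("door two, red")

def probabilities(state, basis):
    """Born rule: probability of each outcome is |<basis vector | state>|^2."""
    return [abs(np.dot(b, state)) ** 2 for b in basis]

# Prepare H and read through the same door: the result is certain.
print(probabilities(H, [H, V]))   # [1.0, 0.0]
# Prepare H and read through the other door: a uniformly random bit.
print(probabilities(H, [D, A]))   # ≈ [0.5, 0.5]
```

The same arithmetic applies to any two complementary bases, not just photon polarizations.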
Now, the really interesting ways in which quantum information is different from classical information we can appreciate only if we consider states of more than one qubit. So let's suppose we have two qubits. And they could be far apart from one another. Let's say one is in my lab at Caltech in Pasadena. And the other is in the custody of my friend in the Andromeda Galaxy.
But these two qubits, a long time ago, were both on earth. And they interacted in a certain way to establish a correlation between their states, which has unusual properties. Namely, I can open my box in Pasadena through either door number one or door number two.
And, either way, what I find is just a random color-- could be red, could be green, with equal chance. And the same thing is true for my friend in Andromeda. He can open either door number one or door number two and just finds a random bit. So it seems that neither one of us, by opening a box, can acquire any information about what's inside the boxes.
And that's kind of peculiar because with two qubits we should be able to store two bits of information. Where is the information hidden, in this case? And the answer is, for this particular state of the two qubits, all the information is in the correlations between what happens when you open the box in Pasadena and you open the box in Andromeda.
For this particular state, if I open door number one and my friend opened door number one, we'll always see the same color. It could be red or it could be green, but it's guaranteed to be the same. And that's true, as well, if we both open door number two. I see a random color-- could be red or green. But if my friend opens the same door, he's guaranteed to see the same color as I do.
Now, there are actually four perfectly distinguishable ways in which the qubit in Pasadena could be correlated with the qubit in Andromeda. We could see either the same color or opposite colors when we both open door number one or door number two. And by choosing one of those four ways, we put two bits of information into the boxes.
But what's unusual is that neither one of us, locally, in Pasadena or Andromeda, can acquire that information. It's shared non-locally between these two distantly separated boxes.
And that property of quantum information, that it can be stored in this non-local fashion shared between two distantly separated objects, is what we call quantum entanglement. And it's the really important way in which quantum information is different from information in ordinary classical objects.
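As a sketch (my illustration, not the speaker's), the two boxes can be modeled as the Bell state (|00> + |11>)/√2: each box alone gives a random color, but opening the same door on both sides always gives matching colors.

```python
import numpy as np

# Amplitude ordering: |00>, |01>, |10>, |11>.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def joint_probs(state, rotate=False):
    """Outcome probabilities when both parties open the same door.
    Door one = standard basis; door two = apply a Hadamard to each qubit first."""
    if rotate:
        Hd = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
        state = np.kron(Hd, Hd) @ state
    return {f"{i:02b}": abs(a) ** 2 for i, a in enumerate(state)}

# Door one: half the time 00, half the time 11, never mismatched --
# so each party alone sees a random color, yet the colors always agree.
print(joint_probs(bell))
# Door two: the very same state is again perfectly correlated.
print(joint_probs(bell, rotate=True))
```

The Bell state happens to be invariant under the Hadamard-on-both-qubits rotation, which is why the door-two correlations come out identical.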
Now, correlations themselves are not unusual. We encounter those all the time in daily life. My socks are ordinarily the same color. So if you look at my left foot and observe its color, you know what color to expect before looking at my right foot. And it's kind of similar with these quantum boxes.
If I want to know what my friend will see if he opens door number one in Andromeda, I can open door number one in Pasadena to find out. And if I want to know what my friend will see if he opens door number two in Andromeda, I can open door number two in Pasadena to find out.
So you might think, really, the boxes are just like the soxes. But they're not. The boxes are fundamentally different than the soxes. And the essence of the difference is, there's just one way to look at a sock. But because we have these different complementary ways of observing the qubit, the correlations among qubits are richer and more interesting than ordinary correlations among bits.
And this phenomenon of quantum entanglement was first explicitly discussed over 80 years ago by Einstein and collaborators. And to Einstein, quantum entanglement was so unsettling as to indicate that something is missing from our current understanding of the quantum description of nature.
Now, that paper elicited some interesting responses, including an especially thoughtful one by Schrodinger. The way Schrodinger put it is, the best possible knowledge of a whole does not necessarily include the best possible knowledge of its parts.
So what Schrodinger meant was that even if we know exactly how that pair of qubits was prepared-- we know as much about the pair of qubits as the laws of physics will allow us to know-- we are still powerless to predict what will be seen if I open the box in Pasadena or the box in Andromeda.
And it was Schrodinger who suggested using the word entanglement for these unusual correlations. And he also said it's rather discomforting that the theory should allow a system to be steered or piloted into one or the other type of state at the experimenter's mercy, in spite of his having no access to it.
And what Schrodinger meant is that it seems odd that it's up to me to decide, in Pasadena, whether to open door number one and so know what my friend will find when he opens door number one in Andromeda, or to open door number two and then know what my friend will find if he opens door number two in Andromeda.
But Schrodinger understood that these correlations, although different from ordinary classical ones, do not allow instantaneous communication between Pasadena and Andromeda. Because when my friend in Andromeda opens his box, he just sees a random bit. And that doesn't convey any information about whatever action I may have performed on my box in Pasadena.
Now, the theory of quantum entanglement did not advance very much for the next 30 years, until the work of John Bell in the mid 1960s. And beginning with Bell, we started to think about quantum entanglement in a different way-- not just as something very strange, but as a resource that we might exploit to do useful things.
I won't go into the details, but what Bell described is a kind of game that two players can play-- Alice and Bob. It's a cooperative game. That means Alice and Bob are both on the same side. They're trying to help each other win.
And the way this game works is that Alice and Bob both receive inputs and the object is for them to produce outputs, which are correlated in some way that depends on the inputs that they receive. But under the rules of the game, Alice and Bob are not allowed to communicate with one another between when they receive those inputs and when they produce their outputs.
They are allowed to make use of correlated pairs of bits that might have been distributed to them before the game began. And for this particular version of the game, if Alice and Bob play the best possible strategy, they can win with a success probability of 75%, if we average uniformly over the inputs they could receive.
But there's also a quantum version of this game. In the quantum version, the rules are exactly the same, except Alice and Bob are allowed to make use of entangled pairs of qubits that might have been distributed to them before the game began.
And by exploiting that shared quantum entanglement, they can play a better quantum strategy and win the game with a higher probability of success-- about 85% instead of 75%. So they can use the entanglement as a resource to do something-- win the game-- with higher success than they could if they didn't have quantum entanglement.
And experimental physicists have been playing this game for decades now, and winning with the higher probability of success which, as Bell pointed out, the rules of quantum physics allow. So these unusual, stronger-than-classical correlations really are part of nature's design.
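The two success probabilities quoted here can be checked directly. A sketch, using the standard CHSH formulation of the game (not spelled out in the talk): inputs x and y are random bits, and the players win when a XOR b equals x AND y.

```python
import itertools
import math

# Brute-force every deterministic classical strategy: each player's output
# is a function of their own input only, so there are 4 choices per player.
best = 0.0
for a_fn in itertools.product([0, 1], repeat=2):      # Alice outputs a_fn[x]
    for b_fn in itertools.product([0, 1], repeat=2):  # Bob outputs b_fn[y]
        wins = sum((a_fn[x] ^ b_fn[y]) == (x & y)
                   for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)

# The best quantum strategy with one shared entangled pair reaches
# cos^2(pi/8), the Tsirelson bound for this game.
p_quantum = math.cos(math.pi / 8) ** 2
print(best, round(p_quantum, 3))   # 0.75 0.854
```

Shared randomness doesn't help beyond the best deterministic strategy (it's just an average over them), which is why the brute force over the 16 deterministic strategies suffices for the classical bound.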
Einstein had derided quantum entanglement. He called it spooky action at a distance, which sounds even more derisive when you say it in German. But it doesn't really matter what Einstein thought. Nature is as experiments reveal her to be. And we should learn to respect and love her as she truly is.
So boxes are not like soxes. Quantum correlations are different than classical ones. You can win a game with a success probability of 85% instead of 75% if you have quantum entanglement. Is that really such a big deal? Yeah, it's really a big deal.
And we begin to appreciate why it's a big deal if we think about systems with many parts. We can imagine a book. Let's say it's 100 pages long. And if this were an ordinary book written in bits, every time I read one of the 100 pages, I would learn another 1% of the content of the book.
And after I had read the pages one by one, and I'd read all 100 pages, I would know everything that's in the book. But suppose, instead, it's a quantum book written in qubits instead of bits, with pages that are highly entangled with one another.
Then, when I look at any one page, I just see random gibberish which reveals, essentially, no information that distinguishes one highly entangled book from another.
And if I read all of the pages, one by one, after I'm done, I know almost nothing that tells me about the content of the book. And that's because in the quantum book, the information isn't written on individual pages. It's written almost entirely in the correlations among the pages.
To read the book, you have to make a collective observation on many pages at once. That's quantum entanglement. And it's very different from any notion of correlation that we normally encounter.
Now, what's really interesting is that these correlations are extremely complex if we try to describe them using classical language. So if I wanted to write down a complete description of all the correlations among just a few hundred qubits in a typical generic state, I would have to write down more numbers than the number of atoms in the visible universe.
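The counting behind this claim is simple arithmetic-- a generic n-qubit state takes about 2^n complex amplitudes to write down classically:

```python
n = 300
amplitudes = 2 ** n
atoms_in_visible_universe = 10 ** 80   # common order-of-magnitude estimate

print(len(str(amplitudes)))                    # 2**300 is a 91-digit number
print(amplitudes > atoms_in_visible_universe)  # True
```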
It'll never be possible, even in principle, to write that description down. And that property-- that we can't hope to describe quantum information using classical information-- was very intriguing to the physicist and former Cornellian, Richard Feynman.
It led him to make the suggestion in the early 1980s that if we could build a quantum computer, which processes qubits instead of ordinary bits, we might be able to perform tasks which surpass what we can do with any digital computer.
Feynman's idea was that if we can't even write down, can't even express using ordinary bits, the information content of a few hundred qubits, then, perhaps, by processing the qubits we'd be able to perform tasks which an ordinary digital computer would never be able to emulate.
And that vision was rather spectacularly supported some years later by the computer scientist Peter Shor. Shor studied the problem of finding prime factors of large composite integers.
And he discovered that if you had a quantum computer, the problem of factoring, which we believed to be a hard problem for digital computers-- with a quantum computer, it becomes an easy problem, not much harder than multiplying two numbers together to find their product.
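To sketch why this works (this is only the classical skeleton of Shor's algorithm, not the algorithm itself): factoring N reduces to finding the period r of f(x) = a^x mod N. The quantum computer's sole job is that period-finding step, which is done here by brute force on a toy case.

```python
import math

def find_period(a, N):
    """Smallest r > 0 with a**r = 1 (mod N), found by brute force.
    This is the step a quantum computer would do exponentially faster."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
r = find_period(a, N)
assert r % 2 == 0                 # needed for the gcd trick below
half = pow(a, r // 2, N)          # a**(r/2) mod N
factors = sorted((math.gcd(half - 1, N), math.gcd(half + 1, N)))
print(r, factors)                 # 4 [3, 5]
```

For some choices of a the trick fails (odd period, or trivial gcds), in which case one retries with a different a; the toy values above are chosen so it succeeds.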
And when I heard about that discovery in 1994, I was really awestruck because I realized what this means is that the difference between hard problems and easy problems-- the problems that we'll never be able to solve and the problems that we can solve with sufficiently advanced technologies-- that difference between hard and easy is different than it otherwise would be because we live in a quantum world, not a classical world.
And I thought that was one of the most interesting ideas I'd ever heard in my life. And thinking about it eventually led me to change the direction of my own research from elementary particle physics to quantum information.
Now, does anybody care whether factoring is a hard problem? Actually, a lot of people do. Because the difficulty of factoring and other number theoretic problems is the foundation of the public key cryptosystems that we all routinely use when we want to protect our privacy when communicating over the internet.
Some decades from now, when quantum computers are in widespread use, we won't be able to protect our privacy using those same cryptosystems. Alternatives do exist, but it's still not very clear what will be the best way to protect our privacy in the coming post-quantum world.
The important thing that we learned from Feynman and Shor is that there are problems which are classically hard and quantumly easy-- problems we can't solve with classical computers but can solve with quantum computers.
And it's important to understand what are those problems that are quantumly easy and classically hard. And we've learned some things about that, but I think we have much more to learn about it.
From a physicist's point of view-- and I think this would have been Feynman's point of view-- what seems most significant about quantum computing is that we expect that with a quantum computer we'd be able to efficiently simulate any process that occurs in nature, which isn't true for digital computers, which are unable to simulate very highly entangled quantum systems.
So with a quantum computer we'd be able to probe more deeply into the properties of complex molecules or novel materials. But we'd also be able to explore fundamental physics in new ways.
For example, by simulating the high energy collisions of strongly interacting elementary particles, or the quantum behavior of a black hole, or the conditions in the universe right after the Big Bang.
So a lot of work has been done on understanding what problems quantum computers will be able to solve. A colleague of mine from graduate school, who Paul also knows, Eddie Farhi, has worked for decades on developing new algorithms for quantum computers.
And after he wrote one of his characteristically brilliant papers on the subject, I was moved to send him this poem. We're very sorry, Eddie Farhi. Your algorithm's quantum. Can't run it on those mean machines until we've actually got them.
And the poem goes on, but you get the point that smart people have been working very hard to find algorithms that we can run on a machine that doesn't exist. At least up until now, quantum computers haven't been able to actually run the algorithms we've been thinking about.
Now, why is that? What's taking so long? Feynman called for the development of quantum computers in the early 1980s. It's been almost 40 years. Well, the reason we don't have quantum computers yet is that it's really, really, really hard to build them. And part of the reason that it's hard is because of a phenomenon that we call decoherence.
Physicists like to imagine a cat which is both dead and alive at the same time. But we never see that type of coherent superposition of macroscopically distinguishable objects in real life. And we understand why that's the case. It's because a real cat will unavoidably, and very quickly, interact with its environment.
And those interactions with the surroundings will, in effect, measure the cat and project it onto a state which is completely dead or completely alive. That's decoherence. And it helps us understand why, although quantum mechanics holds sway in the microscopic world, classical physics nevertheless provides a perfectly adequate description of ordinary, everyday phenomena.
A quantum computer, although otherwise it might not be much like a cat, would also, even though we might try hard to prevent it, interact with the environment. And that could cause decoherence, which would mean the delicate quantum state in the quantum computer would be damaged, and our computation would fail.
So we don't expect to be able to run a complex quantum computation unless we have a way of protecting a quantum computer from decoherence and other potential sources of error.
Well, errors are a problem even in the classical world. We all have bits that we cherish. But all around, there are dragons lurking who take fiendish pleasure in flipping our bits. In the case of classical information, we've learned some ways of protecting ourselves against those dragons.
If I have a very valuable classical bit and I don't want it to be damaged, I can store backup copies of the bit. The dragon might come along and change the color of one of those balls, but I can ask a busy beaver to frequently check the balls and, whenever one is a different color from the others, repaint it so that all three match again.
And then, if the dragon hasn't had a chance to flip the color of two of the balls, the information will be protected. Because it's been redundantly stored, it has some resistance to errors. And we'd like to use that same principle-- that redundant storage of information provides protection against errors-- in the case of quantum information.
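The classical scheme just described-- three copies plus a majority-vote beaver-- can be sketched in a few lines:

```python
import random
from collections import Counter

def encode(bit):
    """Store a bit redundantly as three copies."""
    return [bit, bit, bit]

def correct(copies):
    """The busy beaver: repaint the odd one out so all three match again."""
    majority, _ = Counter(copies).most_common(1)[0]
    return [majority] * 3

stored = encode(1)
stored[random.randrange(3)] ^= 1   # the dragon flips one of the copies
repaired = correct(stored)
print(repaired)                    # [1, 1, 1]: one flip is always repaired
```

Two simultaneous flips would outvote the original, which is why the scheme only protects against errors that are rare compared to how often the beaver checks.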
But, at first, there seemed to be difficulties. Because, as already noted, we can't copy unknown quantum states. So I can't, for example, store a backup copy of the state of a quantum computer in the middle of computation in case the original gets damaged.
And in the case of quantum information, there are more things that can go wrong than with classical information. It might be that a dragon will come along, open door number one, change the color of the ball, and then re-close the box. And that would be like a bit flip that occurs in classical data.
But, alternatively, the dragon might decide to open door number two and change the color of the ball and re-close the box. That's what we call a phase error in quantum information, which really has no classical analog, but we need to be able to protect against that type of error, as well.
There's another way of thinking about these phase errors: the dragon might open door number one of the box and, instead of flipping the bit, observe the bit and remember it. By keeping a record of the color of the ball, the dragon ensures that if I then look through door number two, there's likely to be an error.
And in many experimental situations, it's easier for the environment to remember a bit than to flip a bit. So these phase errors are especially pervasive in a number of hardware platforms.
So, really, the problem is that when you observe a quantum system, you unavoidably disturb it and disturb it in some uncontrollable way. So to prevent decoherence we have to prevent any information about the state of our quantum computer from leaking to the outside world.
We need to keep it almost perfectly isolated from the outside, which sounds impossible because our hardware can never be perfect. But we've understood, in principle, how to do it. And that's the idea of quantum error correction. And the essential idea is that if I want to protect a qubit, I can redundantly encode it but in a different way than what I described classically.
I encode it in the form of an entangled state of many qubits-- five qubits, in this case-- in such a way that if you look at those five qubits one at a time, the encoded information can't be acquired.
So the dragon might come along and manipulate one of the five boxes, make any observation he wants, but that won't allow the dragon to access or irrevocably damage the encoded information, because it doesn't reside in that individual box. Just like that 100-page book I described earlier, the information is encoded in a collective form among the five boxes.
And then the beaver can come along and make a clever collective observation on the five boxes, which will allow the beaver to determine what type of damage has occurred and reverse that damage-- again, without acquiring any information about what the encoded state is, and therefore without damaging it.
So in the quantum world, too, it is possible to find redundant encodings of information that protect the information from damage when it interacts with the environment.
The essence of the idea is that if I want to protect quantum information, I encode it in this highly entangled form so when the environment interacts with the parts of the system one at a time, as when we tried to read the 100 page entangled book one page at a time, the information can't be accessed and is not damaged.
And, furthermore, we've understood how to process information which is encoded in this highly entangled form so we can operate a quantum computer protected against the damaging effects of decoherence.
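As a toy version of these ideas, here is a sketch in Python, using a simulated state vector, of the simplest quantum code: three qubits protecting against a single bit flip. The real codes described above use five or more qubits and also handle phase errors; the point of the sketch is that the collective parity checks reveal where the error is without revealing anything about the encoded amplitudes a and b:

```python
import numpy as np

# Pauli matrices and a helper for a 3-qubit state vector.
I = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])

def op(single, pos):
    """Embed a single-qubit operator at position pos in a 3-qubit system."""
    mats = [I, I, I]; mats[pos] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Encode a qubit a|0> + b|1> as a|000> + b|111> (bit-flip repetition code).
a, b = 0.6, 0.8
state = np.zeros(8); state[0] = a; state[7] = b

state = op(X, 1) @ state      # the "dragon" flips the middle qubit

# Syndrome: the parity checks Z0*Z1 and Z1*Z2. They locate the error
# but carry no information about the encoded amplitudes a and b.
s1 = state @ (op(Z, 0) @ op(Z, 1)) @ state
s2 = state @ (op(Z, 1) @ op(Z, 2)) @ state
flipped = {(-1, -1): 1, (-1, 1): 0, (1, -1): 2}.get((int(np.rint(s1)), int(np.rint(s2))))
if flipped is not None:
    state = op(X, flipped) @ state   # reverse the damage

assert abs(state[0] - a) < 1e-9 and abs(state[7] - b) < 1e-9
```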
So we developed this theory of quantum error correction in the mid 1990s and it was very exciting. We imagined that, although we might not have a real cat which is alive and dead at the same time, we should be able, in the not too distant future, to encode a cat in that type of delicate superposition state and store it in a quantum memory for as long as we please.
So my student at the time, Daniel Gottesman, wrote a quantum error correction sonnet which read, in part: we cannot clone, perforce; instead, we split coherence to protect it from that wrong that would destroy our valued quantum bit and make our computation take too long. And, of course, it's a sonnet, so it goes on. But you get the idea. We were very excited.
Of course, that was over 20 years ago. And we're just now getting to the point where quantum hardware is capable of testing, developing, and improving these quantum error correction ideas under real laboratory conditions. Now, another hero of this subject is my Caltech colleague Alexei Kitaev.
We first met in 1997, and the day we met was one of the most exciting of my scientific life. When I heard his seminar and made these notes, I realized I was hearing from Kitaev ideas about quantum error correction that were potentially transformative.
The key thing I learned from him was the connection between quantum error correction and topology. Topology is a word the mathematicians use for properties of an object that remain unchanged when we smoothly deform that object without ripping or tearing it.
And if we want a quantum computer to correctly process information, we would like the way the quantum computer acts on the protected information to remain invariant when we deform the computation by introducing some noise. So it's natural to think about processing information using physical interactions that have some topological character.
And physicists have known about such interactions for some time. We know, for example, that if a charged particle like an electron is transported around a tube which encloses magnetic flux, the quantum state of the electron changes as a result of the electron circumnavigating the flux tube, in a way that depends on the magnetic flux inside the tube-- even though the electron never directly visits the region where the magnetic field is non-zero.
And furthermore, that change is topologically invariant. If I deform the path that the electron takes, the effect of winding around the flux tube is the same. The only thing that matters is the winding number-- the number of times that the electron wound around the flux tube. And this type of topological interaction can occur in other situations.
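The winding-number statement can be checked numerically. A small sketch (the helper name is just for illustration): two closed paths of very different shapes, each circling the flux tube at the origin once, have the same winding number, so they pick up the same topological phase:

```python
import numpy as np

def winding_number(path):
    """Signed number of times a closed path (an array of (x, y) points)
    winds around the origin, where the flux tube sits."""
    angles = np.unwrap(np.arctan2(path[:, 1], path[:, 0]))
    return int(np.rint((angles[-1] - angles[0]) / (2 * np.pi)))

t = np.linspace(0, 2 * np.pi, 400)

# A circle, and a wildly deformed loop -- both enclose the flux tube once.
circle = np.column_stack([np.cos(t), np.sin(t)])
wobbly = np.column_stack([(2 + np.cos(7 * t)) * np.cos(t),
                          (2 + np.cos(7 * t)) * np.sin(t)])

# Smoothly deforming the path leaves the winding number unchanged:
assert winding_number(circle) == winding_number(wobbly) == 1
```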
And, in particular, such interactions have a very rich potential structure in two-dimensional systems-- if, for example, I confine a system of electrons between two slabs of semiconductor so that the electrons effectively live in a two-dimensional world.
Then it's possible, if the electrons find some very highly entangled state, for that system to support particles which we call anyons-- particles in this two-dimensional medium.
And they have the property that if I have a system with many of these particles, many anyons, there are a great many different quantum states which can be obtained by stitching these anyons together. But all those states look identical when you visit the anyons one at a time.
And this is just the kind of encoding of quantum information that we would like if we wanted to be protected against decoherence. The environment interacts locally with the anyons one at a time. But in doing so, it can't acquire any information about what's encoded and, therefore, can't damage it.
And, furthermore, we can process that information in a simple way just by having the particles change places-- by swapping their positions. And so we can imagine operating a topological quantum computer, as Kitaev proposed. We would initialize it by producing pairs of anyons in some two-dimensional medium that we would engineer.
And then we would process the information by performing successive exchanges of neighboring particles so that the world lines followed by these particles in two plus one dimensional space-time would form a braid.
And then, at the end, I would read out our result by bringing the particles together in pairs and observing whether those particles can annihilate one another and disappear or not.
Now, the beauty of this idea is that as long as we keep the temperature very low so there are no unwanted anyons diffusing around, and as long as we keep the anyons far apart from one another except at the very beginning and the very end of the computation so they don't interact with one another in any undesired way-- then, as long as we do the right braid, we're guaranteed to get the right result.
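To give a flavor of how braids act as quantum gates, here is a tiny numerical sketch. The two matrices are, up to overall phases, the standard braid generators for so-called Ising anyons (the type relevant to the Majorana modes discussed later in the lecture): exchanging different pairs of anyons gives operations that do not commute, which is exactly why the order of the braid matters:

```python
import numpy as np

# Braid generators for Ising anyons (up to overall phase): B1 exchanges
# anyons 1 and 2, B2 exchanges anyons 2 and 3, acting on one encoded qubit.
B1 = np.diag([1, 1j])
B2 = np.array([[1, -1j], [-1j, 1]]) / np.sqrt(2)

# The order of exchanges matters -- braiding genuinely computes:
assert not np.allclose(B1 @ B2, B2 @ B1)

# Each exchange is unitary, so any braid is a reversible quantum gate:
for B in (B1, B2):
    assert np.allclose(B @ B.conj().T, np.eye(2))
```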
So I was very excited when Alexei explained this idea to me. And, therefore, wrote a poem which read, in part: Alexei exhibits a knack for persuading that someday we'll crunch quantum data by braiding. With quantum states hidden where no one can see, protected from damage through topology. Anyon, anyon, where do you roam? Braid for a while before you go home.
And the poem goes on, but you get the idea. It's a very exciting, beautiful idea-- a theorist's dream of how a quantum computer might someday work. But can you really make it work in the lab?
Well, here, too, Alexei had an important idea, which involves splitting an electron into two pieces. Now, that sounds ridiculous, because everybody knows an electron is an indivisible elementary particle which can't be divided. But in a highly entangled world, amazing things can happen.
Now, we can imagine a wire at very low temperature, which is superconducting. That means it can conduct electricity without any resistance at all. But there are actually two kinds of superconducting wire-- the ordinary kind and what we call a topological superconductor.
And I can have a segment of topological superconductor with ordinary superconductor on either side. And where the two types meet, there resides something we call a Majorana mode. Now, I can introduce an extra electron into the topological superconductor. And if I do so, that electron dissolves and disappears.
And, at the same time, the state of this pair of Majorana modes at the two ends changes. But that change in the Majorana modes is not locally detectable if you look at one end of the segment or the other. It's really a collective property of the two ends.
So if you can only observe the wire locally, you can't tell whether I added an extra electron or not. You can't tell whether the total number of electrons is even or odd. So that's just the type of encoding we would like for quantum information to be robust against decoherence because the state is not locally detectable.
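Here is a minimal algebraic sketch of "splitting an electron in two," using a single fermion mode represented by 2x2 matrices (in the physical wire, the two Majorana operators sit at opposite ends of the segment): each Majorana piece is its own conjugate and squares to one, while the occupation-- the even-or-odd electron number-- is a joint property of the pair:

```python
import numpy as np

# Pauli matrices give a faithful representation of one fermion mode.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

gamma1, gamma2 = X, Y                    # the two Majorana "halves"
c = (gamma1 + 1j * gamma2) / 2           # the ordinary electron operator

# Each Majorana is its own antiparticle: gamma^dagger = gamma, gamma^2 = 1.
assert np.allclose(gamma1, gamma1.conj().T)
assert np.allclose(gamma1 @ gamma1, np.eye(2))

# Whether the mode is occupied (even vs odd parity) is a JOINT property
# of the pair: parity = -i * gamma1 * gamma2 = 1 - 2 c^dagger c.
parity = -1j * gamma1 @ gamma2
number = c.conj().T @ c
assert np.allclose(parity, np.eye(2) - 2 * number)
```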
Now, this type of topologically protected storage of quantum information has been studied experimentally, now, for seven years. And there've been a series of increasingly impressive experiments indicating that it really works the way I described. But those experiments are still not completely conclusive. So further experiments are needed.
Of course, we would like to do more than just store quantum information. We'd like to be able to process it in a reliable way. And that might be done by configuring a network of wires with T-junctions, as shown here. And in order to process the information, I would like these Majorana modes, which behave like anyons, to be able to change places.
I could do that by adjusting electrical voltages under the two-dimensional system, which move the Majoranas around. So the one on the left could be parked around the corner, the one on the right moved over to the left, and then the first one unparked so that the two swap places. And that would be one step in a protected quantum computation, which proceeds topologically.
Now, this type of experiment hasn't yet been conducted, but we hope that it can be in the next couple of years. And if and when that happens, it'll be not just a step towards a potentially powerful new technology, but really a remarkable milestone in physics.
Now, I don't want to give the impression that the only effort currently underway to develop quantum computing hardware is based on this topological approach I just described. No. That's far from the case. There are a number of different ways of physically realizing qubits, which are currently being developed.
I mentioned already the possibility of storing a qubit in a single photon. We can also use as our qubit a single atom which could be either in its lowest energy state or some long lived excited state in order to store the information. Or a single electron, which has a magnetic field which could be oriented either up or down, providing the two possible states of the qubit.
Now, these are all remarkable encodings in that they involve storing the information at the level of a single particle. But there are also more complex encodings that are being studied. For example, I could store the information in a superconducting circuit.
Not the type of exotic topological superconductor I just described, but an ordinary superconductor where, for example, we might imagine that the persistent circulating current in the loop is either oriented clockwise or counterclockwise, corresponding to the two states of the qubit.
And that's also a remarkable encoding because it now involves the collective motion of billions of electrons and yet, for information processing purposes, it behaves as though it were a single atom.
So where's the technology now? We are on the brink of what has been called quantum supremacy. That means quantum devices performing some task that surpasses what we can do with the most powerful existing digital computers.
And I thought it would be useful to have a word for this new era of information processing, which is just opening up. So I suggested a word, NISQ-- N-I-S-Q. It stands for noisy intermediate-scale quantum.
Intermediate scale means that we're talking about devices that are big enough-- with a number of qubits, let's say, greater than 50-- that we can't, by brute force, simulate what the quantum device is doing with a powerful supercomputer.
But noisy emphasizes that these devices will not be error corrected, and therefore the noise-- that is, the errors in the gates processing the information-- will place limitations on how large a computation we can do and what problems we can solve.
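The "greater than 50 qubits" threshold comes from simple arithmetic: a brute-force classical simulation must store one complex amplitude per basis state, so its memory grows as 2 to the number of qubits. A quick sketch:

```python
def simulation_memory_bytes(n_qubits):
    """Memory to store a full state vector: 2^n amplitudes at
    16 bytes each (double-precision complex numbers)."""
    return (2 ** n_qubits) * 16

# 30 qubits: 16 GiB (a laptop). 40 qubits: 16 TiB (a large server).
# 50 qubits: 16 PiB -- beyond the memory of any existing supercomputer.
for n in (30, 40, 50):
    print(n, "qubits:", simulation_memory_bytes(n) / 2**30, "GiB")
```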
Now, for physicists, NISQ is quite exciting. It means that we will have new tools for exploring the behavior of highly entangled states of many particles in a regime which was experimentally inaccessible up until now. And this technology might also have applications that other people care about-- perhaps commercially viable applications, but we're not sure about that.
We shouldn't think of NISQ as something that's going to change the world by itself. Rather, it's a step towards more powerful quantum technologies of the future. I feel confident that quantum technology will have a transformative impact on human society, eventually.
But we really don't know how long that's going to take. There's a kind of emerging paradigm of how near-term quantum devices might be used to solve problems-- a kind of hybrid quantum-classical strategy. It makes sense to use our powerful classical computers to the extent that we can, and then try to boost that power using a quantum device.
One way we might do that is run a small quantum computation in a NISQ device, measure all the qubits, then feed the outcomes of all those measurements to a classical computer which would then return instructions about how to slightly modify the quantum computation, which is then repeated.
And that process is iterated until convergence, with the goal of finding an approximate solution to some optimization problem of interest. Now, we don't actually expect that quantum computers will be able to efficiently and exactly solve very hard optimization problems, but it's possible that they'll be able to find better approximate solutions to such problems and to find those solutions faster.
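Here is a toy version of that hybrid loop, sketched in Python with the "quantum device" simulated by a one-qubit circuit (the names and the tiny problem are illustrative, not from the lecture): the quantum step prepares a parameterized state and measures an energy, and the classical step nudges the parameter downhill, iterating until the energy converges:

```python
import numpy as np

def quantum_step(theta):
    """Run the (simulated) circuit Ry(theta)|0> and return the energy <Z>."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return state[0] ** 2 - state[1] ** 2      # <Z> = cos(theta)

# Classical outer loop: finite-difference gradient descent on theta.
theta, lr, eps = 0.3, 0.4, 1e-4
for _ in range(200):
    grad = (quantum_step(theta + eps) - quantum_step(theta - eps)) / (2 * eps)
    theta -= lr * grad                        # classical update, then repeat

# The loop converges to the minimum energy <Z> = -1 at theta = pi.
assert abs(quantum_step(theta) - (-1.0)) < 1e-3
```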
Now, should we expect that NISQ devices will be able to outperform the best classical computers for some optimization tasks? I don't know. But it's a lot to ask because the classical methods that we use are well honed after decades of development, and these NISQ devices are just becoming available for the first time.
But we're going to try it and see how well it works. And as we experiment with the devices, we hope to find more powerful applications.
Now, as I've emphasized, these NISQ devices will not be error corrected. And that will place limitations on their computational power. As I emphasized earlier, eventually, the solution to that will be quantum error correction, which can make a quantum computer reliable, even though the physical gates are noisy.
But quantum error correction has a heavy cost in overhead, in terms of the additional qubits and gates that we need. How high that cost is depends on the quality of the hardware that we use and also the problem that we want to solve.
But, for example, if we were trying to solve some problem in chemistry which surpasses what we can do today with digital supercomputers, we'd probably need over 100 protected logical qubits-- and, altogether, to have good error correction, millions of physical qubits.
So there's a big chasm to cross between where we expect to be with quantum technology in the next few years, with [INAUDIBLE] 100 qubits, and the millions of physical qubits we would need for a really scalable, error corrected technology. And we don't know how long that's going to take.
But, meanwhile, continuing to advance the quantum gate technology, the systems engineering, designing better algorithms and error correction methods will bring us closer to that day when we have truly scalable quantum computers which can solve hard problems.
So I've emphasized three questions about quantum computers. Why would we build one? Well, the best answer we have to that is that we think that with a quantum computer we'd be able to efficiently simulate any process that occurs in nature. And we don't think that's true of digital computers which are unable to simulate very highly entangled quantum matter.
Can we really build large scale quantum computers? We don't know of any obstacle that will prevent us from doing so, now that we understand the principles of quantum error correction. And how will we build one? What will the hardware be? Well, we don't really know for sure.
As I've emphasized, there are a number of different approaches to realizing qubits experimentally and building scalable hardware platforms that are under development. And it's important to pursue these different technologies in parallel for now because we still don't really know what technology is going to have the best long-term prospects for scalability.
Now, these three questions, I think, make up a compelling research agenda. And a lot of my research over 20 years has been aimed at addressing these questions. But I'm not an engineer. I'm a theoretical physicist.
And what I really find most exciting are the ways in which our deepening understanding of quantum information gives us leverage and new ideas about attacking problems of interest in physics.
The way I like to look at our field of quantum information is that we're really in the early stages of exploring a new frontier of the physical sciences. What we could call the complexity frontier or entanglement frontier.
We are now developing and perfecting the tools to create and precisely control very complex states of many particles-- many qubits-- highly entangled states which are so complex that we can't simulate them with the most powerful digital computers that we have or describe their behavior very well with existing theoretical tools. And that opens new opportunities for discovery.
One notable trend is that there's been a surge in interest over the last few years in quantum information and quantum computing concepts among the community of physicists who work on problems in elementary particle physics and gravitational physics.
People in this community have been working hard for decades to understand at a deep level the properties of black holes and, in particular, the quantum aspects of black holes. And that effort has given rise to a remarkable idea that we call the holographic principle.
What the holographic principle asserts is that contrary to naive expectations, all the information in a room like this auditorium-- all the information in our brains, and smartphones, and so on is actually encoded, but in a very scrambled form which is exceedingly hard to read on the boundary of the room, on the floor, and the ceiling, and the walls.
And what we're starting to understand is that information about geometry-- about who in the auditorium is sitting close to who else-- is encoded in the quantum entanglement of that highly scrambled boundary description. So, in a sense, it's really quantum entanglement that's holding space together.
Recently, at a Caltech event, we had a talk by the director of the Institute for Advanced Study, Robbert Dijkgraaf. And I was struck by this slide that he showed at the end of his talk, which was intended to illustrate how different principles of theoretical physics are related to one another.
I thought it was very interesting that he put quantum information right at the center of things. Dijkgraaf might not have done that a few years earlier because this idea of quantum information as a unifying principle of physics is really just starting to gain traction.
But where I would differ from Dijkgraaf is, I would cross out the word theoretical because quantum information is really an experimental subject.
And if it's really true, as we increasingly believe, that the geometry of space is an emergent property arising from quantum entanglement in an underlying quantum system, then we should be able to gain deep insights into the quantum structure of space time by doing laboratory experiments with highly entangled quantum systems.
By doing experiments on a tabletop, in a lab, in a place like Cornell, we should be able to create and study space-time geometries that never existed before. But whether that prophecy comes true or not, I think we can say, with high confidence, that many rousing surprises await us as we start to explore the entanglement frontier. Thanks a lot for listening to me tonight.
[APPLAUSE]
As part of the Spring 2019 Hans Bethe Lecture Series at Cornell, physicist John Preskill explained quantum entanglement, and why it makes quantum information fundamentally different from information in the macroscopic world, in his talk "Quantum Computing and the Entanglement Frontier," given April 10 in Schwartz Auditorium, Rockefeller Hall.
Preskill is the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology, and director of the Institute for Quantum Information and Matter at Caltech. He received his Ph.D. in physics in 1980 from Harvard, and joined the Caltech faculty in 1983.
The Hans Bethe Lectures, established by the Department of Physics and the College of Arts and Sciences, honor Bethe, Cornell professor of physics from 1936 until his death in 2005. Bethe won the Nobel Prize in physics in 1967 for his description of the nuclear processes that power the sun.