DAN RALPH: So welcome to this semester's Bethe Lectures. The Bethe Lecture series honors and celebrates Professor Hans Bethe, who was one of the foremost physicists of the 20th century and an eminent member of the Cornell physics department for 70 years. And I can't really do any better in introducing such a series than what Ira Wasserman did a few years ago, so I've adapted some of his remarks.
Hans Bethe began his career in the 1920s when he was among the first physicists to explore applications of the quantum theory, including understanding the properties of electrons in crystals at the beginning of solid-state physics. Upon leaving Germany in 1933 to escape persecution for his Jewish heritage, Hans moved first to Great Britain, where he began his lifelong work in nuclear physics. Hans moved to Cornell in 1935 as an assistant professor. About four years later, he explained how stars burn hydrogen to helium, and he received the Nobel Prize in Physics in 1967 for this work. For the assistant professors in the audience, no pressure.
During World War II, Hans first contributed to work at the MIT Radiation Lab for the development of radar. And I'm amused that, despite all his groundbreaking work in nuclear physics and astrophysics, his most cited paper emerged from this period, "Theory of Diffraction by Small Holes." This laid the foundation for much later development of near-field optical microscopy. He then moved to Los Alamos where he was director of the theory division for the development of the atomic bomb.
Over his career at Cornell, Hans worked on many subjects, famously explaining the Lamb shift in the hydrogenic spectrum in terms of quantum fluctuations of the electromagnetic field, developing theories for properties of nuclear matter, and understanding properties of neutron stars, black holes, and other problems in theoretical astrophysics. He also helped build our physics department to the world-class status it continues to enjoy today. He fostered an informal collegial environment that remains one of the defining characteristics of the Cornell physics department.
Hans retired from Cornell in the mid-1970s after 40 years on the faculty but continued to research actively, almost right up until his death in 2005. During this period, he led the worldwide effort of astrophysicists to understand supernova explosions, and he solved the solar neutrino problem that had bedeviled stellar astronomers for 20 years.
Throughout his career, Hans also exemplified personal integrity and courage. During the anti-communist hysteria of the McCarthy era, he was an early opponent of the development of the hydrogen bomb, he helped protect Cornell physics colleague Philip Morrison from being dismissed as a result of his vocal opposition to the Korean War and his purported communist sympathies, and he defended J. Robert Oppenheimer, former head of Los Alamos, in his notorious security clearance hearing. He was a forceful and effective advocate for the Limited Test Ban Treaty, which forbade testing nuclear weapons underwater, in the atmosphere, and in space. He was a formidable opponent of the Star Wars anti-ballistic missile program, and he was a relentless proponent of peaceful applications of nuclear energy.
How did Hans sustain such a high level of achievement for so long? He loved solving problems. He was brilliant, of course, but physics was fun for him. For his entire life, he was as enthusiastic and forward-looking as a new graduate student. So this week, let us celebrate Hans Bethe. We are lucky he gave so much to Cornell and left us with such a wonderful legacy.
So I'll introduce our speaker, John Martinis, in just a moment. But first, let me remind you of this week's schedule of lectures. Today we'll hear about "Quantum Error Correction for Mortals." And after the talk today, we'll have our usual meeting of the speaker with students and post-docs in room 403 PSB. And I want to emphasize that undergraduates as well as graduate students are welcome at them.
Tomorrow at 4:00 PM in Clark 700 and also on Zoom, we'll hear "My Trek from Fundamental to Industrial Research-- Quantum Systems Engineering," and this is in addition to our regular last [? AEP ?] talk, which will have a different speaker during lunchtime tomorrow. And then, Wednesday evening will be a public lecture back here at 7:30 PM and also on CornellCast livestream, "Building a Quantum Computer."
OK, now for our speaker, John Martinis. John is a leader in one of the most exciting recent developments in physics, the use of superconducting devices for quantum computation. He did pioneering experiments in superconducting qubits starting in the mid-1980s for his PhD thesis at University of California, Berkeley. During his career, he has worked on a variety of topics on low-temperature device physics, including applications of SQUID amplifiers and the development of superconducting transition-edge sensors, now an important tool for astronomy, including some of the research efforts here.
Since the late 1990s, he has focused on quantum computation. He was awarded the London Prize in low-temperature physics in 2014 for his work in this field. And from 2014 to 2020, he worked at Google to build a useful quantum computer, culminating in a quantum supremacy experiment in 2019. He was awarded the John Stewart Bell Prize in 2021. So let us welcome John and hear about "Quantum Error Correction for Mortals."
JOHN MARTINIS: Thank you for the kind invitation to speak to you today. We've been trying for a couple of years to get me here, and now, hopefully, COVID has died down enough that we can do that. I especially appreciate being asked to lecture in the Hans Bethe series. I enjoyed hearing that very amazing resume, and I feel greatly honored to try to emulate a little bit what he did.
Let's talk about quantum computers. The idea behind a quantum computer has been around since the 1980s or so. And a wide variety of physicists, mathematicians, and computer scientists have been looking at this and trying to figure out: can we build a quantum computer, and what would it take? And it's taken many decades to get to the point right now where it's pretty interesting, where people have fielded quantum computers. They're working and you can run them on the cloud. And it's really an exciting time, because this kind of dream has become a reality.
So this is all very nice science and technology at this point, but the quantum computers that we have right now are a little bit less than what we want for doing practical, useful calculations, and so we need to improve them. And today, what I want to do is talk about not just how great quantum computers are-- I'm not going to emphasize that too much-- but really emphasize, what do we have to do to make them better, and what do we have to do to make them useful? And I'll give some examples of what's going on in the field to make that clear.
And then, in the end, we're going to want to do error correction with a quantum computer, and I'll explain where that's going, at least as we understand it right now, and hopefully do it in a way where we can understand just the basics of the physics here, and then give you a sense of what a more complex system will be. It's kind of hard to go down into the details too much. In particular, we're going to look at something called the surface code, which I think is one of the leading candidates for what we're going to do and not too hard to understand. OK?
So let's just first start with, why are we interested in quantum computation? And that's because we can store and manipulate information in a really new, powerful way, taking advantage of the laws of quantum mechanics. Now, you know, when you use a classical computer like your cell phone, that's based on storing and manipulating individual bits that are 0 or 1. However, we know that nature allows us to do something more sophisticated, and that is you can build a quantum state that's 0 and 1 at the same time.
And what you can imagine doing with such a state-- which I give here, 0 plus 1-- is you can imagine running that one state through a quantum computer. And you can evaluate the answer to 0 and the answer to 1 at the same time. And it's kind of like a parallel processor, but OK, it's a factor of 2. You're looking at two states at once. That's nice, conceptually nice, but OK, how useful is that going to be? But the way it gets useful is to build bigger quantum computers.
And for example, if you have 2 qubits and you put each of them in the state 0 plus 1, now you're running through all four possible input states to the system and you double the computational power from 2 to 4. If you have 3 qubits, there are 8 states; 4 qubits, 16; 5 qubits, 32. And you see that the computational parallelism in the quantum computer is growing exponentially. So this can get really powerful if you get enough qubits to work properly.
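To make that scaling concrete, here is a minimal sketch in plain numpy (an illustration, not anything from the talk's slides): the state of n qubits is a vector of 2 to the n complex amplitudes, so preparing every qubit in 0 plus 1 weights all 2 to the n basis states at once.

```python
# Minimal sketch of the exponential state space, assuming only numpy.
import numpy as np

def uniform_superposition(n):
    """State vector for n qubits each prepared in (|0> + |1>)/sqrt(2):
    2**n equal amplitudes, one per classical input state."""
    dim = 2 ** n
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

print(uniform_superposition(3))          # 8 equal amplitudes for 3 qubits
for n in (2, 3, 50, 300):
    print(n, "qubits ->", 2 ** n, "basis states evaluated in parallel")
# 2**50 is roughly a supercomputer's memory; 2**300 exceeds the
# number of atoms in the universe, as noted in the talk.
```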
AUDIENCE: Everything you've said also applies to a classical computer.
JOHN MARTINIS: You can't do that-- you're using 3 bits, a few bits, to run an exponential number of states in parallel. That does not happen in a classical computer; there, you have to try one at a time. So what gets interesting is, by the time you get up to around 50 qubits-- and 2 to the 50 is a big number, more or less the size of a classical supercomputer's memory-- you start taxing or exceeding what you can do with supercomputers. And at 300, you have 2 to the 300 parallel computations. And that's a number that's bigger than the number of atoms in the universe. So clearly, you're doing something you can't do classically.
Now, there is a problem with the quantum computer in that, as you know, in quantum mechanics, you have to measure. And when you measure, you kind of collapse the information and you get n bits of information, which is much smaller. So it means that you have to build algorithms so, when you measure, you're obtaining some information from the system that's going to be useful for you. And it means that not every algorithm will be useful quantum mechanically, but there are some. But for a co-processor for certain applications, this would be very interesting and very powerful.
Now, that's all very nice at a conceptual level. I'm now going to talk a little bit about an example of qubits and what they look like so you get an idea. And then we'll go abstract again and talk about why errors are so important. So I'm going to talk about superconducting qubits, which is something that I worked on for my thesis, which is building artificial quantum systems.
Now, we're very used to, in the real world around us, having electrons and atoms and nuclei and photons obeying quantum mechanics. But in the 1980s, there was this conjecture out there, actually by Anthony Leggett: could you build a quantum system out of a macroscopic object? And in this case, the macroscopic object is an electrical circuit. So this is our superconducting qubit. It's a fraction of a millimeter. It's something you can see with your bare eyes, at least when you're young. And you have a collection, a huge number, of electrons flowing around here. And the question is, can the currents and voltages that you'd use to describe how this system works behave quantum mechanically?
And at the time, people didn't know whether these macroscopic systems would obey quantum mechanics. You thought they would, but there was some conjecture that maybe they wouldn't, so we wanted to prove that. And of course, once you can build something very large like this, then you can engineer complex quantum circuits. You can use electrical engineering concepts to figure out how to couple them together and get it to work. So it's a really nice new kind of artificial quantum system. And yeah, so it's macroscopic, easy to control, with engineered properties, built like a computer chip. So once we build one chip, we can build hundreds or thousands of them.
So this is a picture of this qubit. In the dark, this region here is aluminum metal. And over here is where the aluminum metal has been removed, and you're forming kind of a capacitor structure here with these Josephson junctions. And note there are these tiny little wires coming in here, which is how we're going to excite them with microwaves. So it's an example where the device is way, way bigger than the photons that drive it, and that means you can engineer very complex circuits out of that.
To understand how it works, very simply, this is just a nonlinear inductor-capacitor oscillator. You have a capacitance from this cross to ground. OK? And then there are Josephson junctions right here, which are just a strip of metal, oxidized, with another strip put on top. And then there are electrons tunneling through that. And in the superconducting case, that tunneling of the electrons looks like an inductor. It's almost like a kinetic inductance.
But what's very important about that is it's nonlinear, so you build a nonlinear oscillator, which in fact maps exactly to a pendulum. And if you remember, for a pendulum, the oscillation frequency gets slower as the amplitude goes higher and higher. Of course, all the way at the top, the oscillation frequency goes to 0. Being a nonlinear oscillator is very important because then, when you quantize the energy levels in this oscillator, the 0 to 1 transition might be designed to be around 5 gigahertz or so.
But because of the nonlinearity and the softening of the oscillations, the 1 to 2 transition is going to be at a lower frequency, like 4.8, and the 2 to 3 at 4.6, so each one is lower. And that means, if you use your microwave controls to push this system back and forth to excite it, if you use a long pulse, you'll excite the 0 to 1 transition, but there will be no spectral components on the 1 to 2 transition. So you can stay in that qubit manifold of 0 and 1 because of the nonlinearity. And then, with superconductivity, there's no dissipation, so this is a long-lived state.
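As a tiny numerical illustration of that level structure (the 5, 4.8, and 4.6 gigahertz values are from the talk; the constant anharmonicity per level is my simplifying assumption):

```python
# Sketch of a weakly anharmonic level ladder for a transmon-like qubit.
f01 = 5.0             # 0 -> 1 transition frequency in GHz (talk's design value)
anharmonicity = -0.2  # assumed constant shift per level, GHz

for n in range(3):
    print(f"{n} -> {n+1} transition: {f01 + n * anharmonicity:.1f} GHz")
# 5.0, 4.8, 4.6 GHz: a long microwave pulse at 5.0 GHz has no spectral
# weight at 4.8 GHz, so the qubit stays in the {0, 1} manifold.
```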
So you can think about the encoding of the qubit like we talked about previously. The 0 is the ground state, no energy. Of course, there's a wave function associated with this. And then the 1 state has one quantum of energy, thought of semi-classically this way, and you would have some kind of almost harmonic-oscillator-looking wave function. And then you can see quantum mechanics with this system. OK?
So one of the experiments that you do right away is to create what are called Rabi oscillations, and it's very simple. You just start in the ground state. You drive it with a microwave so you get a 0 to 1 transition. And because you're off resonance from the 1 to 2, you then start doing 1 to 0 transitions and you oscillate back and forth between those. And so that's what you do in the experiment. You just let it relax to the ground state by waiting maybe a millisecond or less. You then put on microwaves for a variable amount of time, which is here.
And then, at the end, you do a measurement of whether you're in the 0 state or the 1 state. And of course, when you do a quantum measurement, you get a 0 or 1, and you repeat that a thousand or 10,000 times to get the probability of the 0 state. And then you sweep the time, and you get the oscillation of this curve, a nice oscillation here. This oscillation probably took us 10, 15, 20 years to get to look that nice, but we eventually figured out how to do that.
So let's talk about what's going on here. It's initially in the 0 state. And then, as you put on the microwaves, you do a transition from 0 to 1. And as you keep driving, the 1 state goes back to 0, and it oscillates back and forth between the two states. Now, if you pulse this for 20 nanoseconds, from here to here, that takes you from 0 to 1. And if you pulse another 20 nanoseconds, that takes you from 1 back to 0. Classically, you would treat that as a NOT gate: 0 to 1, 1 to 0. And that's kind of a classical operation with a quantum computer. We wouldn't use it that way, but you can think of it like that.
However, if you pulse with half the length, then it's a pulse from 0 to what is now the 0 plus 1 state, so you measure it 50% 0, 50% 1. And then another 10 nanoseconds, you get to a NOT. So you can think of this kind of algebraically as a square root of NOT operation, because a square root of NOT followed by another square root of NOT equals a NOT. In fact, you see that it's a continuous operation, a NOT to some power where that power is defined by this time. OK?
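This gate algebra is easy to check numerically. Here is a short sketch (standard Pauli matrices, not code from the experiment) verifying that two square-root-of-NOT pulses make a NOT, and that a single one gives 50/50 measurement statistics:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # the NOT (Pauli X) gate
sqrt_X = 0.5 * np.array([[1 + 1j, 1 - 1j],
                         [1 - 1j, 1 + 1j]])     # square root of NOT

assert np.allclose(sqrt_X @ sqrt_X, X)          # half pulse applied twice = NOT

ket0 = np.array([1, 0], dtype=complex)
half = sqrt_X @ ket0                            # the 10 ns "half" pulse
print(np.abs(half) ** 2)                        # [0.5 0.5]: 50% 0, 50% 1
```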
Now, classically, you don't know what a square root of NOT operation is, but quantum mechanically, it makes sense. You talk about how it's going into the superposition state. And because it has a very definite effect on the state, you can do computation with it. And in fact, what I talked about previously showed you how you can create this very complex state with which to do some computation. So it's kind of like you have an expanded instruction set compared to what you get from classical logic, and thus, you can do these complex quantum operations.
Now, one of the things you might want to note here is that it oscillates back and forth, but right here, this is slightly higher than where you were here, and this is slightly lower. And that's because, over time, there are errors in the system. There's quantum bit decay. There can be some dephasing noise. Whatever it is, quantum circuits are not perfect. And the imperfections you see here are due to a coherence time of maybe 10-20 microseconds. So you can maybe do a thousand oscillations, but it's not going to be perfect. And this is the dominant error source that we're going to have to talk about to get a quantum computer to work properly. And that's what this talk is about: understanding how to at least think about it, and eventually, how we're going to fix it.
Before I go on too much, let me just say that, when you have individual qubits, you can operate them, but you have to connect them together and have them do a logic operation. And the basic way you do that is you just take two qubits and you put a capacitor between them. And you can think of this as a coupled-spring, coupled-swing problem where you have, let's say, one photon in one qubit, and it's oscillating. And because of the capacitance, that oscillation swaps to the other one and then goes back. So it's just a standard coupled oscillating mode. You can understand this classically.
So what we do here is we take the two qubits and we move them off resonance, to different frequencies, so that they won't talk to each other very strongly. We put a photon, a NOT gate, on one of them. Then we bring them onto resonance and you start getting this coupled oscillation. And if you look right here, you can see it's bright where the second qubit initially has 0 population, and then it goes blue at the transfer. It goes white and blue, and you see the oscillations there. And then we build this more complicated circuit with another qubit, which is off resonance from this qubit.
And this coupling from here to here to here is off resonance, so this is virtually excited and then drives the other one. And as we change the frequency of this qubit, the amount of coupling here changes. And we can arrange everything so, at a certain frequency of this qubit, you turn off this effective capacitance coupling here. And then you don't get coupling between the two, which is one of the operations you want to do. You want to turn it off. And then, if you change the current through here and change this frequency, you can then turn it back on again and turn it on very strongly.
So just like, with a single qubit, we drive it with microwaves, here, we drive it with a current into here that changes the qubit frequency. And we can turn on and off the coupling between qubits. And that allows us to couple two qubits together and then to make logic gates. And from that, you can build an arbitrary quantum computation out of it. OK? I'm not going into the details here, but that's the basic idea. It's actually pretty simple physics to understand how that's working.
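The swap physics just described is ordinary two-mode dynamics. Here is a sketch under simple assumptions (a single excitation, coupling strength g, detuning between the qubits; the numbers are made up for illustration) showing why detuning effectively turns the coupling off:

```python
# Rabi-style swap formula for one photon shared by two coupled qubits.
import numpy as np

def swap_probability(t_ns, g_mhz, detuning_mhz=0.0):
    """Probability the excitation has moved to the second qubit at time t."""
    g = 2 * np.pi * g_mhz * 1e-3          # coupling, rad/ns
    d = 2 * np.pi * detuning_mhz * 1e-3   # qubit-qubit detuning, rad/ns
    omega = np.sqrt(g**2 + (d / 2) ** 2)
    return (g / omega) ** 2 * np.sin(omega * t_ns) ** 2

print(swap_probability(25, g_mhz=10))                    # on resonance: ~1, full swap
print(swap_probability(25, g_mhz=10, detuning_mhz=200))  # detuned: ~0.01, coupling "off"
```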
So with that in mind-- how to do single and two-qubit operations-- you can then understand conceptually how to build a quantum computer. And I like to start first with classical circuits, let's say based on CMOS or some logic you're going to do. And this is what's called a full adder, where you have two input bits and a carry-in, and then there's a sum of those two bits and a carry-out. So if you want to build an adder in classical logic, you build it out of NAND gates and exclusive-OR gates and OR gates, logic operations between two bits. And for example, an OR gate can be made from a NAND gate and NOT gates. So you can build up the logic from very simple operations to do more and more complex things.
Now, what happens here is this is laid out as devices in space. They're actually-- yeah, the transistors are in space. But then you kind of think of it as: you change the input here, and those inputs flow through all these devices in time. There are small nanosecond delays. And then you get the answer out a few nanoseconds later, say. OK.
Now, in quantum, there's a similar kind of language to build algorithms, and you build it up from the basic gates of the NOT and square root of NOT gates that I just kind of talked about, single-qubit gates. And then the two-qubit gates where two qubits are interacting in the way that I just showed you, maybe a little bit more complicated, but that's the basic idea. And just like you can build up an arbitrary computation here with a NOT gate and an AND gate, for example, you can build arbitrary computation here with an arbitrary NOT or square root of NOT, some variation of that, and then coupling them together.
So as hardware designers, we're just trying to build that up and figure out how to do it. There is a difference, though. Here, notice what we have is individual qubits. And then, versus time-- not in space anymore-- we're running a program. So think of one vertical line here as an instruction: we did H's on all the qubits, then we did a single-qubit gate here and a two-qubit gate here. So these are kind of instructions, kind of like a regular computer, but now we're operating at the fundamental level.
The other interesting thing about qubits, and why this geometry is used, is that here you see you have a wire and it breaks up into two wires. You split, you copy classical information. In quantum mechanics, with quantum qubits, you can't copy information. Because, if you did, from one copy you could measure the bit part of it, and from the other copy you could measure the phase part. But you know that you can't measure those two simultaneously and accurately.
So the Heisenberg uncertainty principle forbids you from splitting a quantum state. So what we do is we have a quantum state here and then we just operate it with neighboring qubits and measure at the end. And it turns out that any calculation you can do classically you can do quantum mechanically in this way. It's just going to be a little bit more complex because you can't copy information, and that makes it harder.
So OK, that's the basic idea on how to build quantum algorithms. And you have to take your idea and put it into this language. Now let's start talking about errors. That's the introduction. Let's talk about errors. In the 1960s, you could build these out of transistors, a few hundred, a thousand transistors or so. But the error of fabricating these devices was maybe 1 in 100 or 1 in a thousand, so you couldn't build very complex devices.
So when I was growing up, you had these TTL devices and you put them together and you can do something complicated. Of course, now, what is it, 60 years later, the error of building these devices is maybe a part in 100 billion. So your M1 processor in your Apple phone is really complicated and it just got really good. So the size of it is determined by what's the error rate of building things. And the same thing happens with qubits.
These qubits fundamentally have errors. And let's just take an example, kind of modern. You have 100 qubits, OK? And let's say the error rate of one of these particular boxes here is 1%, which is a reasonable number, what people are doing now. So if you have a width of 100 but each operation has a 1% error, that means, in one depth of one instruction, the total error is going to approach 1. And that means your quantum program can be one instruction long. OK? That doesn't sound very good, right? Maybe some of the time it'll be 2, but that's not so good.
Now, if you want to have 100 qubits and you want to have 100 instructions or so, then you have to get the error down to about 10 to the minus 4, and you have to work really hard at that. So you see, there's this fundamental trade-off between the complexity of your circuit and the error, just like classically. Here, you could at least test them and build it, but here, this is intrinsic to it. So this is a problem. And this is actually a problem on all the systems that are out now. I'm talking about 100 qubits and a 1% error being a problem.
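The arithmetic behind those numbers is worth writing out. A sketch, assuming independent errors at rate p per qubit per instruction layer:

```python
# Back-of-the-envelope error budget from the talk.
n = 100                            # circuit width in qubits
for p in (1e-2, 1e-4):
    p_layer = 1 - (1 - p) ** n     # chance of at least one error per layer
    depth = 1 / (n * p)            # rough usable number of layers
    print(f"p = {p:g}: error per layer ~ {p_layer:.2f}, usable depth ~ {depth:.0f}")
# p = 0.01   -> ~0.63 error per layer, depth ~ 1 instruction
# p = 0.0001 -> depth ~ 100 instructions, as stated in the talk
```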
If you look at what's going on in the field right now-- so this is from Rigetti. It's called a SPAC: something where you put together a slide deck for investors, and you don't want to hype it because people will sue you if you get the information wrong. So this is the closest we have to truth in advertising, because people can sue you. And if you look at the various computers that are out there-- Rigetti, IBM, Google, IonQ-- they show that the fidelity is good, but I'm talking about the error here. The errors are 2 and 1/2%, 1.3%, 2.5%. They're really quite large, so it's worse than what I told you. So maybe, for 50 qubits, you can do one or two operations or something like that. OK? So this is a problem.
Now, in the Google group, because I was really emphasizing that we had to get errors down, we actually had pretty decent error rates as of a few years ago. And that's why we could do the quantum supremacy experiment. But notice, even though this is a SPAC, there is still some hype here, which you might find interesting. They talk about having the fastest rate of progress, and that's because they went from 7% to 1 and 1/2%. The difference in these two bars on these other ones is smaller, especially tiny at Google. So you see what the problem is. If you start really crappy, then OK, you grow fast.
So be very careful when you read press releases. There's a lot of fake news out there. First of all, they'll talk about the absolute best result and not the average, like what's realistic here. And they might only talk about two qubits instead of a 50 or 100-qubit system. So you have to be very careful here, because people are trying to be optimistic about what they're doing. But this is really an issue you have to deal with. We have to make our qubits better. OK?
So this kind of summarizes the fact that both the number of qubits matters-- more qubits is better-- and the limiting error rate that I was talking about also matters. Most of the current qubits are out here. The quantum supremacy experiment was here. And if you want to run multiple instructions-- more than one-- as you grow the number of qubits, you want the limiting error rate to scale down with that. So this is called a noisy intermediate-scale quantum computer, NISQ. And below 50 qubits or so, you can simulate it on a supercomputer; beyond that is where it gets computationally interesting.
So the quantum supremacy experiment I'll talk about Wednesday just dipped its toe into this NISQ region, and that was the design of it, but what we really want to do is go down here. Now, OK, if we want a bigger and bigger machine, it means our error rate has to go down. That's really hard. So that's where error correction comes in, and I'll be talking about that next. Basically, you encode your quantum information, what would be one qubit, in many qubits-- a hundred or a thousand or more qubits. And then you do some measurements on that and do error correction. And then you can get very small logical error rates.
So this is a line for a 10 to the minus 10 logical error rate. It says that, if you have a 0.1% error on your qubits, which is lower than what we're doing now, you need maybe about a million qubits to start running an algorithm with 10 to the 10 operations-- that's a lot of operations-- and start doing something really powerful. So that's kind of the roadmap of what people are doing.
So at Google, we started out in this direction and wanted to go here. I'm going to say other people are content to scale out right now at these high error rates, but I don't understand what people are going to do in the long term. But they're building systems and that's good. But you see, you really want to worry about errors, both to do something right now and then do something long term. OK? So that's what we're getting at here.
So for example, when we did the quantum supremacy experiment, we built this 2D array of qubits. I'll talk about this on Wednesday. It's forward compatible to doing this error correction, so we wanted to build something where we can do that in the long run. Clearly, there's a lot of improvements that have to be made, but that was the idea. So at this point, let me switch to a second talk or second set of slides, and I'll start talking about how this error correction work. OK?
Let's talk about classical information. Why are the errors for quantum bits so lousy? Let's first talk about classical information, which you can think of as, let's say, a coin on a table. I'm sure no one has any coins in their pocket, so I'll do the demonstration with my cell phone. And you can imagine the cell phone in the up position representing the 0 and the down position representing 1.
And in classical information, these are very stable states, because if you were to vibrate this a little bit, you might have the edges come up a little bit, but it takes a lot of energy to come all the way up, and then there's dissipation, so this gets clamped back down to the table right away. So the fact that there are large energy changes involved in flipping your bits makes them intrinsically stable. And one can build electronics like this where you don't have many errors at all.
Now, in quantum mechanics, it's a little bit different because you have this amplitude and phase. And you can talk about a single qubit at least in terms of a Bloch vector. And I hope people have seen that in their quantum mechanics classes. But basically, you can think of the Bloch vector pointing up as being a 0 and pointing down as being a 1, but pointing at 90 degrees is 0 plus 1. And of course, it can be any angle in between. And then, in this direction, you have an angle which is the phase between the 0 and 1.
So there's an amplitude and a phase of a qubit. And in that sense, you can think of this as your coin now with a Bloch vector sticking out of it and rotating freely. And you can see, instead of being pinned to the table at a 0 or 1 by gravity and dissipation, you can think of this now as kind of free-floating in space. And any kind of force put on this thing will cause it to start to rotate and give you an error. So this kind of coin in vacuum gives you an idea that you're very sensitive to small rotation errors.
Now, the other difficulty here is, you could say, well, could I measure this and stabilize it? So when it's 0 or 1, it's easy to measure that. I look at it, I [INAUDIBLE] it, I might jiggle the table, but it stays 0 or 1. But quantum mechanically, if you go to measure whether it's up or down, you're going to disturb the phase going around this direction. Or if you want to know which direction it's pointing in this direction, you'll change the amplitude. So for a qubit, if you measure the amplitude, it randomizes the phase. If you measure the phase, it randomizes the amplitude. And this is from the Heisenberg uncertainty principle.
So OK, that sounds like a real disaster, doesn't it? But fortunately, there's something that helps you. Now, to understand what's going on, let's talk about error digitization. And this is actually a very important concept. And we're going to talk about this Bloch vector. Let's say it's in the 0 state and there's some kind of small microwave tone or noise or something that causes this to rotate a little bit by an angle epsilon. So you're now not exactly in this up state and it's an error.
Now, what you can do is you can measure that state, (1 + epsilon X) times the 0 state, which is a rotated state. You have two possible outcomes. Either it goes back to the perfect 0 state with probability 1 minus epsilon squared, or it goes into the 1 state with probability epsilon squared, which of course is small. And in this sense, you have this error. But after doing this measurement, you know that you're either in the 0 or 1 state. It's just probabilistically going to those two states.
And what you've done is you've taken this kind of continuous rotation of the Bloch sphere, which is kind of a Schrodinger picture, and re-expressed it in a kind of Heisenberg picture where it's now 0 or 1, but with a random probability of being in either of the two. And although you can see this works well for a measurement, it turns out it works perfectly well if you're doing some kind of operation in the middle of a big algorithm. OK? And there are theoretical reasons that say you can do that if the errors are uncorrelated. And there's also good experimental evidence that this works well.
So instead of worrying about the Schrodinger picture, you just worry about having probabilistic bit flips or phase flips of your device. And that makes it a lot easier to understand what's going on with the system. It's a conceptual simplification. It's a digitization of errors. And I think it's really remarkable that quantum mechanics does that, but that's what we see. It happens-- here's a simple example-- and this makes it all possible.
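A small Monte Carlo sketch of this digitization (my illustration, assuming a pure rotation error and an ideal measurement):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1                                            # small rotation angle
state = np.array([1.0, eps]) / np.sqrt(1 + eps**2)   # ~ (1 + eps*X)|0>

shots = 100_000
p1 = np.abs(state[1]) ** 2                 # Born rule for outcome 1
flips = (rng.random(shots) < p1).mean()
print(flips, "~ eps**2 =", eps**2)         # ~0.0099: a rare bit flip
# After each measurement the qubit is exactly |0> or |1> again: the
# continuous analog error has been digitized into a probabilistic flip.
```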
OK, so how would you go about using this? How do you think about this some more? So first of all, we said that, if you measure the amplitude, you're going to mess up the phase. OK? Let's talk about that. That is basically a statement about the commutation of the bit flip and the phase flip-- the bit flip takes a 0 to 1, and the phase flip takes the phase from this direction all the way to the other direction-- so that's X and Z. And the Heisenberg uncertainty relationship says that these two operations don't commute, and thus, you can't treat them as classical independent variables.
So the bit flip is 0 to 1 and vice versa. The Z operation is 1 to minus 1 and vice versa. So let's check this, OK? Let's take the 0 plus 1 state, and then we do a bit flip, and that'll just take us to the 1 plus 0 state. And then we do a Z flip, and that'll take the 1 to minus 1, so we get 0 minus 1. And let's do the opposite, where you first do a Z flip, so that's 0 minus 1, and then you do a bit flip, and that's 1 minus 0.
So in both cases you end up in the same superposition, but you can see that this one is the minus of that one. They don't commute. In fact, they anti-commute, if you remember from your quantum mechanics class. And this anti-commutation is really important. This is how you do error correction, at least in this way. So how does that work? Well, error correction is very simple. What you do is, instead of encoding quantum information in one qubit, you encode it in two and you measure parities. OK? And I'll talk about that in a second.
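That algebra takes a few lines to verify with standard Pauli matrices; a quick sketch:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])          # bit flip
Z = np.array([[1, 0], [0, -1]])         # phase flip
plus = np.array([1, 1]) / np.sqrt(2)    # the 0 + 1 state

print(Z @ (X @ plus))                   # [ 0.707 -0.707]
print(X @ (Z @ plus))                   # [-0.707  0.707]: minus the other order
assert np.allclose(X @ Z, -(Z @ X))     # X and Z anticommute
```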
So we have two qubits, and now we have these parity operators: X12, which is X1 times X2, and Z12, which is Z1 times Z2. And you can just think of that as measuring each of the two qubits and multiplying the two results. And now, when you take your commutation relationships and write them out, you get one minus sign from flipping Z1 and X1 and another from flipping Z2 and X2. And by the amazing mathematics of a minus sign times a minus sign being a plus sign, they cancel out, and now the parities commute. OK?
And what it means is that, if you have two quantum bits and you measure their parities in bit or phase, they're going to commute, which means you can treat them simultaneously like classical variables. And they won't change in time if you're not interacting with it, and everything will work out properly when you do that. So for those of you who know a little bit about quantum information, the eigenstates of these parity operations are called the Bell states. And that's why quantum information people are always talking about how great Bell states are, because they have this really interesting property.
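And the two-qubit parity operators really do commute, with a Bell state as a joint eigenstate; here is the same kind of numeric check:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
XX = np.kron(X, X)                      # X1 X2 bit-parity operator
ZZ = np.kron(Z, Z)                      # Z1 Z2 phase-parity operator

assert np.allclose(XX @ ZZ, ZZ @ XX)    # the two minus signs cancel

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
print(bell @ (XX @ bell), bell @ (ZZ @ bell))  # 1.0 1.0: both parities are +1
```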
So how does error correction work when you're measuring parity? What you do is you take two qubits and you measure the parity in Z and the parity in X. And let's say they're plus 1 and minus 1, OK? Since these two parity operators commute, when you go measure the Z after the X, you're going to get the same value of Z. It's not like the amplitude is messing up the phase anymore. OK? And you measure the X again, you'll get a minus 1, and then you'll keep going like that.
Now, let's say you had an error in one of the qubits. It flipped. OK? Then you'll measure one of the parities to be different. And you'll say, aha, something changed and I know there was an error. So it does error detection for you in a simple-minded way. OK, now, of course, you say, well, that's great. You know there was an error, but which qubit was an error? You don't know that. So there's a way to encode things to know which one happened. And this is just talking now about bit flips just to be simple. We're just going to talk about bit flip errors.
And we're going to have three qubits encoded, and we're going to measure the parity of 1 and 2 and the parity of 2 and 3. And let's say we started out with 0, 0, 0. Then you measure the parities, and they're 0 and 0. OK? Now let's say the first qubit flipped; then the parities of 12 and 23 become 1 and 0. And for the three single-qubit flips, there's a unique decoding that says, OK, there was a single-qubit error in this particular place. So now that you have three qubits and two measurements, you can uniquely decode it-- except if you have two errors. If you have 0, 1, 1, that's the complement of this, and you'll get the exact same parity.
So if there's more than one error, you're going to misdecode it. You're going to say it's this, but it really was that, and you're going to make a complete error. Now, if your error rates are small, then most of the time it's going to be a single error and you're going to be OK. But a small fraction of the time, there will be two errors, and then you're going to have an error in decoding.
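Here is the three-qubit decoding table as a little classical simulation (a sketch of the logic just described, not hardware code), including the two-error case where decoding fails:

```python
def syndrome(bits):
    """Parities of neighboring pairs (1,2) and (2,3)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Each single flip (or no error) has a unique syndrome:
decode = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

bits = [0, 0, 0]
bits[0] ^= 1                           # one bit flip on the first qubit
s = syndrome(bits)
print(s, "-> flip qubit", decode[s])   # (1, 0) -> qubit 0: decoded correctly

bits = [0, 1, 1]                       # two flips: the complement case
print(syndrome(bits))                  # (1, 0) again, so decoding picks the
                                       # wrong correction -- a logical error
```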
So how do we deal with the fact that there's going to be some small error from two flips? The basic idea is we're just going to encode with more and more qubits and do more parities. As you do more parities-- let's say you add another qubit and another parity-- now single-qubit or two-qubit errors you'll decode properly, but three you will not. And then you add another qubit.
And it keeps decoding properly until you get to so many errors that the probability of that many errors is exponentially small and you don't worry about it. So it's a way to get exponentially small errors just by spreading the encoding of the state over more and more bits. And this is kind of what classical error-correction coding does. It's not quite as constrained as this, but it's the basic idea. OK?
So this is fine. It tells you how to handle bit flips. And what I'm now going to do is just go on to, what does the full surface code look like? Now, remember we have two variables. You have to worry about bit flips and you have to worry about phase flips. Those are kind of the two fundamental variables that describe what's going on with the qubits. And it's maybe not a surprise that you have to go to a two-dimensional array to deal with the bit flips in one direction and the phase flips in the other. OK? So that's essentially what you do, but we'll have to describe a little bit more what's going on here. It's more complex. You'll have to see the topology of this, but it's basically based on just what I told you. OK?
And what you do here is, the open circles are called the data qubits-- that's where you're storing the data. And the black ones are called the measure qubits, which do the parity measurements I told you about before. And the parity measurements are on the four nearest-neighbor qubits. Here are the Z parities and here are the X parities. And you measure the parities using some kind of sequence in time of these two-qubit operations. But basically, they're just giving you these parity operations, now with four qubits instead of two. OK? And then you just run this over and over again.
So that's what it is. Now let me explain a little bit how it works. And please, if you have more questions about this, we have a nice paper written in 2012 that tries to go through this step-by-step and you can really see how it works. I'm just giving you kind of a flavor for how it works, but these are the essentials.
So you're running this thing. And the first thing I want to say is that all these measurements of the X's and the Z's commute. Remember we said that's what was magic about the parity. Clearly, the Z from over here and the X from over here are going to commute. It's only when they're next to each other that you have to check. And if you have this X and this Z, you see there are two qubits that are shared, which is what I put in red dots. But because there are two, you're going to get minus sign times minus sign giving a plus, and they're going to commute again.
So it's constructed so that all these parities commute with each other. And that means, when you do the measurement of all of these states and there are no errors, your parities will not change in time. That's what we saw on the previous two slides. If nothing is changing, it's just the same number over and over again. OK? But then, if there's an error, you're going to see that a measurement changed somehow. And what will happen is-- let's say this qubit flipped. Then you're going to see a change in this one here, because this X measures it, and in this one here, because of this X measurement. And if you see one here and one here, you say, oh, it's the one in between that flipped, so it's easy to decode.
Same thing here with a Z: if we see one of these three changed and then one of these four changed, clearly it's the one in the center that changed. So you can decode those errors just like we were talking about before. There's the first one, there's the second one, and you can match up these errors like we did before and decode what's going on. OK?
The only problem happens when you get a bunch of errors, like given here, and you see they're really dense. When the density is large, you're not quite sure exactly how to back out the errors, and then you can have a problem: the backing out of the errors will not be unique, and you can have a logical error. And just like I said before, the logical error goes down exponentially with the size. So you can make very small logical errors, but you do have to worry about that. OK? So that's the basic idea, and you just have to learn how to match these up.
OK, so that's how decoding works. You may want to say, well, you know, where's the qubit? Again, this is getting to the point where you probably want to look through the paper and get all the words right, but you can kind of understand it this way. If you look at this array here, we have 41 data qubits and 40 measure qubits. So you can imagine you have all this information encoded in 41 degrees of freedom. Forty of them are used up doing the error correction, but you have one degree of freedom left over to encode your qubit. And in fact, that's what happens. The qubit is actually encoded, and you can measure it by measuring the X operators across in this direction and the Z operators down in this direction.
So going from left to right and top to bottom, those are operators. And you see that these particular operators, they only cross once. So if you do XL ZL, you're going to find that it anti-commutes because it only crossed once and you get one minus sign, and that gives you a logical state. And then it turns out that XLs and ZLs commute with all the measurements-- I won't show that-- so it acts as an extra degree of freedom. So in this encoding, of course it's spread over the whole array, but there's this extra degree of freedom that acts like a logical qubit. I know this is getting a little bit complicated. There's the paper, but I just want to give you an idea of what's going on.
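The key property of those logical operators is easy to check on a toy register: two Pauli strings anti-commute exactly when they overlap on an odd number of qubits, so XL and ZL crossing once behave like a single qubit's X and Z. A sketch (a 5-qubit toy model, not the actual surface code layout):

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I = np.eye(2)

def pauli_string(ops):
    """Tensor product of single-qubit operators."""
    return reduce(np.kron, ops)

XL = pauli_string([X, X, X, I, I])   # X string along a "row", qubits 1-3
ZL = pauli_string([I, I, Z, Z, Z])   # Z string down a "column", qubits 3-5

assert np.allclose(XL @ ZL, -(ZL @ XL))   # cross once -> anticommute, like X and Z
```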
Since we're running out of time, we can skip this. You can calculate what these error rates are-- and they're exponentially small-- using high school statistics and high school [INAUDIBLE], and that's fairly accurate. And then you can use those formulas to say, look, if you want a 10 to the minus 10 error rate-- that's basically one error a day-- and your error is a factor of 10 below some threshold, so around 0.1% error, you're going to need to encode that in about a thousand qubits. So it tells you how big the array has to be in order to work properly.
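The "high school statistics" estimate can be sketched in a few lines. Here I assume the standard surface-code rule of thumb that the logical error per round falls off as Lambda to the power of minus (d+1)/2, where Lambda is the factor by which the physical error sits below threshold and d is the code distance; the numbers reproduce the "about a thousand qubits" of the talk:

```python
Lam = 10            # threshold (~1%) divided by physical error (~0.1%)
target = 1e-10      # desired logical error rate, ~ one error a day

d = 3
while Lam ** (-(d + 1) / 2) > target:
    d += 2          # surface-code distances are odd
qubits = 2 * d * d  # ~d^2 data qubits plus ~d^2 measure qubits

print(f"code distance {d}, about {qubits} physical qubits per logical qubit")
# -> code distance 19, about 722 physical qubits: order a thousand
```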
And then, finally, let me just say you have to do logical operations. And the way you do logical operations is, you have a whole array where you're measuring these four-qubit parities. And all of a sudden, you turn off the measuring, and then you've created a qubit in that space there, with a value initialized to the value you measured before you turned it off. Two of these holes create a qubit; two more holes create another qubit. You turn these on and off and stretch out the qubits so that you braid the holes around each other. You then form a logical CNOT to do these logical operations.
Again, it's in the paper. But what's nice is, once you build this big array of qubits and can do these operations, you're now forward compatible to doing logical operations and calculations just by turning on and off things in a careful manner. And that's one of the nice things about this is you just know what to build as you get it more and more complicated. I'm kind of running out of time, so I'm going through this quickly.
There have been a variety of experiments doing that. Here's a picture of the device we made at UC Santa Barbara before going to Google. This is a linear array of qubits that are capacitively coupled to each other. And we just did the experiment for the bit-flip error correction that I talked about. So here's the line of qubits. We're doing these parity measurements between this qubit and the next to find out the bit parity-- whether it's 0,0 or 1,1, or the opposite parity, 1,0 or 0,1. And we're measuring the parity and doing that a bunch of times. And then we're measuring it at the end and seeing how that works.
This just gives you an idea of the experiment where these are the waveforms that are going into the experiment. You see there's these oscillations where we have to do the NOT operations and various operations where we're [INAUDIBLE] and measuring it. And for all the qubits, we do this many, many times to do the experiment. So there's a lot of calibration involved in this.
And the experiment-- this is an example run of one out of a million runs that we did. You start out with all 0s. You then run this over time. And in between all the qubits, you're measuring the parity. This is 9 qubits, so for 5 data qubits you're measuring the parity, which is [INAUDIBLE] done here. And then finally, we measure the results. From those parity changes, as you saw the errors, we decode what was an error. And that's given in this column. So you see, normally, the parity between these two is 0, but the parity in here changed to a 1. So we decoded an error, an error. Same thing down here, same thing here. Here, this is a measurement error.
And then what we can do is say, if you pair these up, you had a bit flip on this qubit right here. And these two are paired up, and you had a bit flip right here, a bit flip here. And here, you had a measurement error at this particular cycle. So when you look at this: nothing happened here, two bit flips, nothing, one bit flip, nothing here, one here. And this says that it changed all the 0 states to 0, 0, 1, 0, 0, and that's what we measured at the end. So that means that we corrected the bit flips properly and we decoded properly. Of course, there are some examples where we didn't decode properly, and that would be a logical error.
OK, so this is an experiment we did at UCSB. You can show that the qubits themselves decay in about 30 microseconds or so. And then, if we just have five qubits with two error-correction measurements in between, we get the blue line, and you see that the logical error rate is getting smaller with the number of cycles. And then, if we go to the full nine-qubit array, we error correct better, and we see the red line here. So it basically says that this bit-flip error correction is working properly. As you have more qubits, it works better.
And then there's a recent result. This was on the quantum supremacy chip with 21 qubits. We did the same experiment with different amounts of error correction-- now it's one, two, three, four, five orders of this. And you see that the logical errors are going down exponentially with the code distance, which means, if we just continue making it bigger and bigger, we can get the logical error rate as exponentially small as you want-- at least if you believe the extrapolation will work, which is something you have to test. So it shows that things were getting a little bit better and we were able to do bigger experiments over time.
AUDIENCE: Was that two or four-qubit stabilizers here?
JOHN MARTINIS: Yes?
AUDIENCE: Was that two-qubit stabilizers or four?
JOHN MARTINIS: These were only two-qubit stabilizers just to-- they're working on the full X and Z right now at Google and the experiment's coming along. It's a hard experiment, but at least you see that the errors are going down exponentially. And all you have to do is just make a lot of qubits and it should work great, but that's hard too. What you really want to do is make sure this is steeper by making your qubits better. That's the really important thing to do.
OK, so it looks like I've run out of time, so let me summarize. Quantum computers are really exciting. We might be able to do some great calculations with them that we can't presently do with supercomputers, and so it's very exciting that people are building these systems. But I'm going to really try to emphasize here that understanding errors, minimizing them, and building these complex systems is really what we have to do to get better. We are now building good machines, but we really have to make them better and bring down their error rates to be able to do something.
And what's interesting, though, is, as you saw here, that you can conceptualize and simplify the effects of errors by just thinking of them as bit flips or phase flips that are randomly put into your circuit. You might think that what's going on with errors on a quantum computer is very complicated, but this particular assumption-- that they're random bit and phase flips-- actually seems to work very well. You expect it theoretically, but experimentally, we see it too. And that means you can really engineer and understand it. So it's interesting.
Quantum mechanics has to have amplitudes and phases to describe what's going on. But if you're talking about errors, and you're talking about probabilistic bit and phase flips, the error calculation is classical. It's high school statistics. So it's really interesting that, even though you have this powerful amplitude-and-phase quantum calculation going on, the errors are kind of classical and simple, at least in certain limits, if you build a quantum computer properly. And I find that really fascinating and interesting, and it always bothered me whether this was really true.
And this kind of theory and assumption is basically something we checked in the quantum supremacy experiment. This is a general audience, so I'll kind of sneak in that data to show you on Wednesday, because that's actually one of the important things that we discovered in that experiment: that all this error correction language and thinking is really what's going on in the system. It's a good description of the system. And with that, thank you very much for your attention.
[APPLAUSE]
DAN RALPH: Want to start with questions or comments from the room? Please speak up, please.
AUDIENCE: [INAUDIBLE]
JOHN MARTINIS: Right.
AUDIENCE: But [INAUDIBLE]
JOHN MARTINIS: So the question was, can you use light as a computing resource? You can't use classical light as a computing resource because you don't get this large Hilbert space that you get in quantum computing. But people are working on building quantum computers out of photonic systems where you're generating either a 0 or 1 state or you're generating non-classical squeezed states, and there's a big effort to do that too.
Now, the nice thing about photonics is that the photons can travel a long way, but it's very hard to get two photons to interact with each other. Whereas, you see here, that's a relatively straightforward thing. So each kind of experimental system has its pros and cons, and you just kind of have to let things play out and let people build things. But there are serious efforts to build a photonic quantum computer based on entangled, non-classical states.
AUDIENCE: You showed a very nice oscillation at the beginning of the talk for the probability of the 0 and 1 state. And to me, it seemed like the error for the 0 state was smaller than the error for the 1 state. And I would want to know if there's a physical explanation.
JOHN MARTINIS: Thank you. I like when people take our data very seriously. That makes me very happy, and I think you're talking about this one.
AUDIENCE: Yeah, exactly.
JOHN MARTINIS: Yeah, OK. So yeah, this is a little bit higher. Of course, this is raw data. It's not corrected at all. So you can expect, for example, that maybe the 0 state you measure a little bit more accurately than the 1 state, because it's not decaying. The 1 state can decay. So there are small little errors like that that happen and give you these little asymmetries. This is actually pretty old data now; I think it's a little bit better today. But yeah, the errors on the 0 measurement and the 1 measurement are a little bit different. And you have to, of course, take that into account when you do everything.
AUDIENCE: First, just a comment on that question. We can't split [INAUDIBLE]. You still can't copy the quantum state of a photon. [INAUDIBLE] John's answer. The question that I have is, you emphasized how important it was to get down the error rates. Even before you did the error correction, you showed just how critical the [INAUDIBLE] number of qubits [INAUDIBLE]. And in recent years, I've noticed that, [INAUDIBLE] asymptote. And what I'm curious about is if we're getting some fundamental [INAUDIBLE]. I mean, not a fundamental science from it, but some technological limit that is going to impede progress in the next few years, and what are you anticipating in that regard?
JOHN MARTINIS: So the question was whether we're reaching some asymptotic limit. And yeah, this could be a problem. And I think it's going to be more technical than fundamental, but yeah, this could be a problem. And you just have to give people time to work out their problems and make it better. Of course, everyone says that they know how to make it better and you can do plots, but yeah, it's a big problem.
The thing that concerns me is that, when I go give talks at conferences and the like, people are always talking about what's working great, but there's not much discussion on how to make it better. And if you know anything about a big industrial project and system engineering, you have to talk about your problems in order to fix it. And if your whole team doesn't know that they're going to lose their job unless they fix this, then they're going to go off and do other things. Right? I mean, this is human nature, to be a little extreme, but you understand. So that's one of the reasons I talk about it, because I think, if we want this to be a viable great field, we have to fix these problems.
Now, I think, for the superconducting qubits, that Google's doing pretty good. And there are some recent results that makes me think we can do maybe 10 times better. So we have a lot of optimism, but we have to prove that. It's hard. And unless you fix that, yeah-- it's interesting now. And at some point, these will stop being so interesting instruments.
AUDIENCE: Can you say anything about the materials and the device design that [INAUDIBLE]?
JOHN MARTINIS: Well, it's algorithm waveform design, material design, qubit design. You build a qubit, it doesn't work right, and then you have to figure out what went wrong, which is what experimentalists do, and it just takes a while to do that. So what we're doing right now, I'm trying to figure out how to make it better. We're looking at materials, we're looking at design, especially fab. We're looking at cleaning up the fab process and a whole suite of things. And if you've been thinking about something for 15-20 years, you have a lot of ideas on what to try. That's why I feel optimistic. There's some good results out there. But yeah, it takes a lot of work to figure that out.
AUDIENCE: So once we start to have these [INAUDIBLE] qubits, physical qubits as logical qubits--
JOHN MARTINIS: I can't hear. Yeah, thank you. Can you pull down for a second just so I can hear you?
AUDIENCE: Yeah. Once you start to have these physical qubits [INAUDIBLE] qubits as logical qubits and we start to do logic operations on them, do these global sizes matter [INAUDIBLE]?
JOHN MARTINIS: Yeah, OK. So the question is, as you go to logical qubits and you have more qubits, do you still have to have good physical qubits? And yes, you really want to keep your physical errors low as you build it bigger and bigger. OK? And then, for each isolated patch, which is your logical qubit, you know it has low physical error. And then when you go to do the logical operations, you have to do them carefully and at a big enough size so that the errors are low. But yeah, this is a big challenge. As you make it bigger and bigger, if your qubits get worse, you're in trouble, and that's the problem.
What I would say, though, is-- you do one qubit, that's hard. And then you get two, and that's harder yet. And then you do five or 10, which we did at UCSB, and you have to figure out problems. And then, maybe at 50, you get new problems. But by the time you're getting up to 20 or 50 or 100, you kind of know what the problems are and you have to solve them. And then we think that, as you grow bigger and bigger, it'll be OK. Of course, something's always going to go wrong with experimental life, and you just have to figure that out. But we think we know a lot about what we have to fix right now, and these are good prototype systems.
And I would say, if someone were to make even a 20-qubit array that had 10 to the minus 3 or 10 to the minus 4 errors across the whole array, that would be enormous progress-- the biggest progress that I've seen in a long time. So you don't have to make a lot of qubits. In terms of an academic group, making 20 is a lot, but you could make a big impact by showing you know how to do that well. So I greatly encourage people to think about that if they have ideas.
AUDIENCE: So for superconducting qubits, do you have a focus, or do you have an idea of what the main source of noise is? Is the decoherence related to environmental noise, or is it from putting the entire system together?
JOHN MARTINIS: OK, so what's kind of the problem with superconducting qubits at this time? If it were one thing, someone would have figured it out and it would be better. OK? And people have been working on that. The problem is I think it's two or three things that are maybe subtly interacting with each other or something like that.
And it takes a long time to figure out when it's a couple things-- materials, it could be interfaces, the way you do the fab, maybe your control isn't good enough, there's some crosstalk issues you have to deal with. And people just slowly have to work through that and figure it out. But given the progress of the field, it's kind of slow to figure it out, but people are making steady progress. And we think we have some ideas on how to make it good really fast, but we have to try them.
DAN RALPH: It's late enough I think we should [INAUDIBLE] more questions, feel free [INAUDIBLE].
JOHN MARTINIS: Yeah, please come up.
DAN RALPH: [INAUDIBLE] please [INAUDIBLE]. You'll have plenty of time to discuss it. Let's thank John.
JOHN MARTINIS: Thank you.
[APPLAUSE]
Quantum bits are intrinsically error prone, and thus a powerful quantum computer will eventually need error correction to perform complex computations. Here John Martinis explains the basic concepts behind error correction, showing that it fundamentally arises from the anti-commutation relation of the qubit measurement operators and the commuting nature of parity measurements on pairs of qubits. John Martinis then shows how error correction actually works for the surface code, which presently is the most practical architecture for building a quantum computer.