SPEAKER 1: All right. Good afternoon, everyone. So it's my great pleasure to introduce John Martinis for the second of three talks. John was introduced at the colloquium yesterday. My dad, who gave somewhat of a more formal introduction of John's career, as he really [INAUDIBLE] of really good.
He's won a number of prestigious prizes, including the London Prize, the Nobel Prize, and has been involved in studying the quantum properties of superconducting circuit devices since the 1980s. And he's done many pioneering works in this field over the past several decades, including, since the early 2000s, work focused on the use of superconducting devices for the construction of a quantum computer. And as we heard in the talk yesterday, or knew from general knowledge previously, John was the head of Google's quantum computing team, where his team developed the first system that was capable of reaching quantum computational supremacy.
It was a major milestone in the field of quantum information processing. But I'll give [INAUDIBLE] one more slightly more personal introduction of John. It's that the first time I came across John in person was at a summer school in [INAUDIBLE] in 2011.
And it was a month-long summer school. And very few professors stayed for the whole month. But I think John was one of them. And John really stuck out to me as someone who was sitting in every lecture, and always asking deep and probing questions, really trying to get to the bottom of how things work.
And that was really an impression that stuck very, very strongly. The other thing that came out of that workshop as an impression of mine of John was from the other people in the superconducting circuits community. I learned that John had a very strong reputation as someone who was deeply involved in the technical detail of all the experiments that his group did, and as an expert in everything from the development of the cryogenic systems through to the microwave engineering, the control electronics, and all the way to the middle, and of course, the qubit design of the circuits and the control protocols, as well as the deep understanding and analysis of the experimental data.
So a full stack quantum scientist and engineer, if you will. And if I may editorialize a little bit [INAUDIBLE], John's full stack nature there-- part of it is the story of how his team that he led at Google, which was so successful, was the first to demonstrate quantum supremacy despite the strong competition from many other teams. So I came up with that introduction without actually seeing John's talk title.
But I think it's a nice [INAUDIBLE] to the fact that John is going to tell us about quantum systems engineering. And he's really-- I think it's fair to say-- perhaps the best person on the planet to tell us about this. Well, I'm very much looking forward to the talk. And thank you very much.
JOHN MARTINIS: Yeah, thank you.
[APPLAUSE]
So I walked into the door here a half hour ago. And I suddenly recognized this room. Because in 1987, I was invited to give a talk here. At the time, I had just come from the famous March meeting where we had the high temperature superconductivity.
And I gave a talk at 4:00 in the morning on the first observation of the Josephson effect. So that was very exciting. And I was able to come here and talk about what was maybe much more esoteric physics at the time on macroscopic quantum tunneling in Josephson junctions, OK? And that was some basic experiment we were doing here.
But of course, over the years, this has turned into, from a variety of researchers around the world, a quantum computer, and a superconducting quantum computer. And it really pleases me very much to span the history in this room, where it was a very esoteric thesis experiment thing to something now that's a big, industrial program. There's lots of companies.
There's a lot of startups. All these people around the world-- international competition to build a quantum computer. And I think it's really neat. Because there's a lot of job opportunities for, let's say, the students here, if you want to do that.
And these kind of job opportunities to do basic physics just weren't really around back then in quite the same way. But of course, these job opportunities are to build a quantum computer. And I wanted to talk a little bit today about, what is it like in a more industrial situation to do, in the end, what is still some basic physics?
And there's different thought processes that you have to do to think about doing it this way. And I'm going to call-- systems engineering is the official name for this, and how to do that in quantum systems. And like I said, it's nice that quantum computing is in the systems era. We're building very complex systems.
And the purpose of this talk-- it's a little bit of experiment. There's a lot of bullet points. And I hope it goes out-- OK, please raise your hand if you have questions. It's a somewhat short talk.
But I want to motivate scientists to think differently about doing science when your goal is to really build something. And I hope this is interesting, to think about that. And maybe it's just a skill.
You might not be in quantum computing-- but a skill that's useful. What I want to point out is that you have to understand principles of system engineering to think differently about what you want to do. And also you actually have to do better physics. You have to think about better and simpler physics models in order to build these big systems.
And I'll explain what that means. Finally, I love this book by Peter Thiel, Zero to One, which describes a startup culture, which is what we're doing here, although you might be in a big company. And also in terms of systems engineering, here's a book by Wasson that I happened to pick up and read. And I really resonated with a lot of ideas that he talked about system engineering.
It made sense for me. And I've used that a lot to explain what the ideas are. So here's the outline of the talk.
The first thing-- I hope I'm going to grab your attention by saying that the scientific method, which we all know, love, and learned-- that really fails for system engineering. So OK, well, what does that mean? I'll have to explain that.
I'll talk about the mission and focus. I'll talk about the stack. This is alluded to.
It's actually a three-dimensional stack. It's pretty interesting. Inductive versus deductive reasoning, technical readiness levels-- let's learn about that. Software, leadership-- testing is really important, reliability.
And then, I'll give some examples along the way-- how something's unusual about these quantum systems that you have to take into account. OK, so why does the scientific method fail? So when I read Wasson, the first thing he said is, the scientific method for organizing a big project fails.
And it's like, well, what the heck? What does that mean? And he kept on repeating it over and over, just to get it into the reader's mind that you have to think a little bit differently. And I was trying to understand what exactly that means. And it's a little bit subtle.
But the scientific method, he calls-- this is the standard way you do that. You, let's say, have an experiment you have in mind. You specify it.
Then, you design it. And then, you build. And then, you test.
And then, given what you do, you either feedback to do it better-- or of course, in science, you write a paper, OK? So why does this thing fail? So let me just read this.
"The traditional, ad hoc, endless loop Plug & Chug"-- you see he doesn't really like this way of doing things. So OK. This paradigm here "wanders around inefficiently and ineffectively 'performing activities.'"
[LAUGHTER]
OK? That's good. We get employed, OK? "Then, when the system goes into System Integration, Test, & Evaluation, incompatibility and interoperability issues emerge due to poor design integration."
That's the whole building, the whole system, OK? "Additionally, the quantity of latent defects is greater, which results in significant costs and schedule overruns." OK.
And really, he's telling you some of the things that goes wrong. But he's really basically saying that when companies use the scientific method to organize their system engineering, it's not very efficient. And things go wrong, OK?
So my job here is to explain what goes wrong. It's unintuitive. And the basic idea-- it's not that this is a bad idea to do. And you certainly use this.
It's more of a workflow. It's what you do. But it's not the big organizing principle of what you're going-- and basically, what you need is a better decision-making process than just looping around doing experiments and taking data. It has to be organized. It has to be a little bit more systematic.
And we'll talk about what that means. That's what this talk is about, in fact. And you can imagine that you have a technology. You can imagine an infinite number of research projects to do, OK?
That's good. We get all employed for that. But you need to focus on the end goal and product, and really know what you're doing, and organize yourself to get to the right goal. And then, when you have to make decisions to get there, you have to have all the data to do that.
And you have to do the experiments to get the data, OK? So I hope this is pretty reasonable. But mostly, it's just the statement that the engineering knows that you have to organize a little bit differently, just practically.
OK so, what does this mean in detail? So the first thing I want to do is talk about mission and focus to make sure that everyone understands what they're doing. Now, in academia, our mission and focus is to write papers, get citations, write your thesis, graduate, get onto a job so that you can do more research, and write papers, and get publications, and blah, blah, blah, OK?
And it's nice. The academic world is great. Because we get to research. We get to do new things, discover what's new and interesting.
And then some set of those are useful. And then, you go ahead. And you make systems. And you try to do something with that.
Important-- but the scientific academic is more about exploring. And that's a good thing. But you really want to have a mission and focus. And if you're doing scientific research, that's always good to have, anyway.
So at Google, the first thing we did is come up with a really simple mission statement, which was to build a useful quantum computer. So that's five words. That's pretty clear. And that's a big organizing principle.
Now once, of course, you do that, you have to decide how in the heck you're going to go about doing that. And you have to start narrowing down your focus, and understanding what's going on. And in fact, one of the big things that happened in my career with superconducting qubits is, 2009 or '10, we got together with some theorists and started understanding the surface code, which I talked about yesterday.
And then, we were able to put together a paper and then a vision of what a quantum computer would look like. It was very daunting and big. But at least for the first time I knew what was the end goal, and what was a possible useful quantum computer.
And then, by doing that, we would say, OK, given that, we knew what to build. It was a two-dimensional nearest neighbor architecture with a grid of qubits on a chip. So we knew that we could build that. We knew what to build.
And then, we could talk about the specifications. And again, I talked a little bit about that. And there's more details we'll get into.
But it made it very clear. We had an abstract goal to make something useful. And then, what do we have to focus on in hardware? And once I knew what the focus on in hardware, it was very clear what was the series of experiments to do. So that helps.
And of course, although you have this, this could be wrong. And you might have to rethink everything. But we knew what we were doing was more or less in the right direction. And we could pivot if we had to.
And then, specifically when you talk about the critical issues, one of the first things we understood from this was qubit errors. And I gave a talk on that yesterday. It's so critical.
I will spend an hour trying to explain to people why that's so important. And there are some interesting things that we learned along the way that tells you what's going on. And for example, if you make qubits with an error of about 2% per operation, instead of operating a quantum computer, what you can do is use a regular computer, and use an approximation technique called matrix product states. And that gives you some errors; that's around 1% or 2%.
And instead of building a quantum computer, you can just program it on your regular computer. So there's no reason to have a quantum computer if it's more than 2%. So it's like, well, if we want to do anything useful at all, at least we have to be that good, OK?
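The rough arithmetic behind this threshold argument can be sketched numerically. This is an illustrative back-of-the-envelope calculation, not from the talk: total circuit fidelity decays roughly as (1 − p)^N over N operations, and the circuit size and error rates below are made-up example numbers.

```python
# Illustrative sketch (example numbers, not from the talk): total circuit
# fidelity decays roughly as (1 - p)^N for N operations at per-operation
# error p. If a matrix-product-state (MPS) simulation on a classical
# computer introduces an effective error of ~2% per step, a quantum
# device with p > 2% gains nothing over the classical simulation.

def circuit_fidelity(p_per_op: float, n_ops: int) -> float:
    """Rough total fidelity after n_ops operations, each with error p."""
    return (1.0 - p_per_op) ** n_ops

n = 100  # total operation count of some hypothetical small circuit
f_quantum = circuit_fidelity(0.002, n)  # a good device: 0.2% per op
f_mps = circuit_fidelity(0.02, n)       # classical MPS at ~2% per step

print(f"quantum device fidelity: {f_quantum:.3f}")
print(f"classical MPS fidelity:  {f_mps:.3f}")
```

Whichever side has the lower per-operation error wins as circuits get deeper, which is why the ~2% figure acts as a floor for a quantum computer to be worth building at all.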
And of course, it's probably better than that. But OK. Doing that. And then, we thought about the quantum supremacy experiment, which gave us some goals in what we had to do.
If we want to do near term quantum computers, we maybe need an even smaller error. However, we can do error correction, which is a larger error, but then a lot of qubits. But what happened is, very quickly, we had specifications of what we had to do.
And we knew what to focus on, OK? So that's the big thing about mission and focus. It gets you to really understand what you have to build and in a realistic way.
So we have to get low errors. But we have to scale up at the same time. And OK, so we have to figure out how to do that, OK?
So that's the big thing about mission and focus is, it narrows down all the possible things you can do in the world, and narrows down to, OK, we have to do something with some basic requirements here. And then, you have to figure out, OK, what are you going to do next to get there? OK?
So now, let's talk about the intellectual structure of this. And it's very-- usually, people talk about the intellectual structure as the stack. And I'm going to show you there's three dimensions to the stack.
So the normal stack is to think about what you're building. And you say, OK, at the lowest level, we have clean room processing. And then, we build a chip.
And then, we might need some readout measurement system with circulators and quantum-limited amplifiers, or whatever. Then, we need wiring. And then, we need the control system. And then, we need the software drivers.
And then, we need to have the logical control, and blah, blah, blah, and then the user interface at the top. So you think about all those stacks, OK? And what I find very interesting is, I often go to talks.
And quantum computing people talk about the stack. And they spend a long time on that. And what's funny about that is, for systems engineers, the stack is the most trivial thing. It's like saying to physicists, f equals ma is an important equation that you all have to know about.
Of course there's a stack, OK? And you always break it up in there. But what's interesting is-- and when you think about the stack, it's really the interfaces between one level to the other that's the interesting thing, not the fact that you needed it, OK?
You have to figure it out. But that's pretty-- but exactly how you're going to do the interfaces. And in fact, when you think about that stack, you spend a long time defining those interfaces so that people working on one subsection of this know what to build. They know how to connect lower. And they know how to connect upper.
And that gives you constraints that you have to work on. So it's the interfaces that are important. However, there are two other degrees of freedom, or dimensions here, that the people-- well, one of them that the system engineer will tell you. But I've added another one, because it's so important.
And one is a description of complexity, or levels of abstraction, OK? So that's a very high level concept. But basically, you have a subsystem. And you're going to have people who are working on that subsystem, who are really experts on this.
And they know it. And they've written 50-page papers about what's going on here, OK? And you need those people, OK?
But the problem is, everyone else on the stack, especially people above and below you, need to understand what's going on. And you don't tell them, here's my 50-page paper. Go read it. And you don't tell the whole group, hey, here's my 50-page paper. Go read it.
That's too much information. It's too detailed. And the people want to know what's going on, and know how their system interacts with another system, and figure out the interfaces and the like. What you have to do is go a layer above in abstraction. You have to make it simpler.
So you have to come up with simpler models that are correct enough so everyone knows what's going on. But you don't get lost in the details. And then, people can understand it. And you might actually have to have one level above for people in your subsystem, one level above for the people next door, and even one level above for the whole group.
And this is really important if you're building a quantum computer. Because it's not just physicists you're talking about, you're working with. You're working with software engineers, and hardware engineers, and technicians who are building it.
And you need to communicate what's going on, to some degree. So you have to do it. And abstracting this and making it simpler-- simple, but not too simple-- is a hard job, OK?
So I think about Richard Feynman as someone who knew how to do that. And you have to work hard to make that happen. OK, and let's see.
And then, the third one is, in fact, this is a project that changes in time, OK? So you have to figure out, what are the simple things we're going to work on? We get that done, what are the next things, the next things?
Of course, people are working in parallel. And then, you have to merge them. So there's a time dimension to this as you have to organize it, OK? And that actually is very important. Because you have to make sure that you're doing something that's going to advance the technology, but not too far, so that there's a chance it'll fail.
You want it to work. So you have to organize that very well. So yes, I talked about why the abstractions were so important.
You have to make good and timely decisions. And everyone has to understand what's going on. It also describes the team structure, not job classification. One subteam, you might have software engineers, and hardware engineers, and physicists. And they belong together not according to their job.
So basically, for doing research, the clear communication-- maybe you need a one- or two-page slide to describe what's going on, or a page so everyone can know what's going on. And you have to spend time doing this, OK? So that's the idea.
In terms of the abstraction, I want to give an example that I give when I try to explain what a qubit is-- a superconducting qubit. And it's a way-- physicists might say, that's not too hard of a problem. I can solve that with quantum mechanics.
The problem is, you have a lot of hardware engineers and software engineers who haven't taken quantum mechanics. And you have to describe what's going on that's close enough to the physics. And I'm just going to give you an example of what I did here.
And it requires some physics knowledge. And we start with the energy of the system in terms of the charging energy, and in terms of the current flowing through the Josephson junction. And you can derive this easily from the Josephson equations. But basically, you have a cosine potential, which actually is equivalent to a pendulum energy, OK?
And you can go to dimensionless coordinates. And then, you have the number of electrons and this phase difference, or dimensionless flux, if you would like, OK? So you want to understand, how do you think about quantum mechanics of this?
So what I've done here is just classical calculation. You put some energy in the system. And you numerically calculate, what's the time it takes to oscillate back and forth?
And what you find, for example with this cosine potential, because it flattens out here, is that as you increase the energy, this oscillation frequency drops from the initial oscillation here. And then, when you get the energy all the way up so that the pendulum's at the top, the oscillation frequency goes to 0. Because it goes to the unstable equilibrium.
OK, so basically, you introduce the idea of a nonlinear oscillator, which if you remember, I told you about yesterday. And then, you can do the poor man's quantum mechanics by saying, if you look at this energy, you know that harmonic oscillators basically have quantum states that are separated by h-bar omega-0.
So if this was a pure parabola, it would be a space by this. But if you look at these particular energies, the oscillation frequency is dropping because of the nonlinearity. And it drops according to this, to here, to here, to here.
And it basically says, as an approximation, that as you go up in energy levels in the qubit, those energy level spacings are getting smaller, and smaller, and smaller. And it's a rough approximation of that. In fact, the exact answer, calculated quantum mechanically, is this line here. And it more or less captures the physics properly, OK?
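The classical calculation described above can be reproduced in a few lines. This is a sketch in dimensionless units (potential U(θ) = −cos θ, small-oscillation frequency ω₀ = 1), using the standard pendulum result that the oscillation frequency at swing amplitude θ₀ is ω/ω₀ = π / (2K(k)) with k = sin(θ₀/2), where K is the complete elliptic integral of the first kind:

```python
import math

# Classical pendulum sketch of the qubit nonlinearity (dimensionless
# units: U(theta) = -cos(theta), omega_0 = 1). The frequency at swing
# amplitude theta0 is omega/omega_0 = pi / (2 K(k)), k = sin(theta0/2),
# with K computed via the arithmetic-geometric mean (AGM) iteration.

def ellip_K(k: float) -> float:
    """Complete elliptic integral of the first kind, K(k), via the AGM."""
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def freq_ratio(theta0: float) -> float:
    """omega / omega_0 for a pendulum swinging to amplitude theta0."""
    return math.pi / (2.0 * ellip_K(math.sin(theta0 / 2.0)))

# The frequency softens as the amplitude (energy) grows, and heads
# toward zero as the pendulum approaches the unstable top (theta0 -> pi):
for theta0 in (0.1, 1.0, 2.0, 3.0, 3.14):
    print(f"theta0 = {theta0:4.2f}  omega/omega0 = {freq_ratio(theta0):.4f}")
```

Reading off ℏω at these successive classical frequencies gives exactly the shrinking level spacings described in the talk: the nonlinear oscillator picture of the qubit.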
And of course, you have to do the quantum mechanics to understand everything. But if you explain things that the qubit is nothing but a nonlinear oscillator with quantized energy levels, there's lots of things you can imagine in your brain in terms of how this thing operates. And I'm going to say, in the end, it's about 70% classical E/M of coupled nonlinear oscillators.
And then, you have to throw quantum mechanics on that. And because of that, you can understand things. And for example, I talked about the adjustable coupler yesterday.
That was not invented by writing down a Hamiltonian. That was invented because we understood this was a nonlinear oscillator. And if you linearize that circuit we talked about, you can turn things on and off. And that's how you invent things, in fact, from simpler descriptions that you intuitively understand.
And then you check it by doing the full quantum calculation, of course. OK. So that's an example. OK, here's another example.
How do you talk about error correction of your qubit to a hardware CMOS engineer? They build systems. You have to explain what's going on in there in some way that they can understand.
And so what you want to do is use the intuition they already have. So this is the way I explain it when I give a talk to hardware engineers who know about digital logic. And if you look at a classical computation system, it basically consists of D flip-flops so that whenever the clock pulse goes high, on the edge-- whatever is on your D input here then gets latched into the Q output, and then holds there until the next clock goes high again.
And then, this has a big bus here with many, many bits. And then, this is basically the logic that takes one state here of many bits, and then converts it to the next state. And this is called a state machine.
And all digital computers are built this way. It's a way to abstract it, OK? So if you want to explain a quantum machine, you can use this analogy. And basically, what you have is a D flip-flop, which are your qubits, and the big qubit array that I told you.
And then, what's happening is, you get some state here. And then for the next state, you put in some logic into here, which then manipulates the qubit states and goes to the next state. So when I gave that diagram, these are slices in time where you change from one state to the other.
But what's different here is here, there is a particular clock. But here, the clock is actually the qubit states. They're resonating at a few gigahertz.
And you have a self-clock of the qubits. And of course, all those clocks can be a little bit different. They can drift over time. The clocks are going to drift. And that's going to give you phase noise.
And that's why you have to correct for both amplitude and phase, because there can be drifts in the clock. And like here, this being imposed from the outside to get everything synchronized-- here, the clocks are running internally. Well, that sounds like a big mess.
But what you do is, you use the error correction parity checks to decode the phase errors. And then, those phase errors go back in the clock, into the logic to resynchronize the clock, OK? And then, that's what the error correction does.
So it corrects the amplitude errors, and resynchronizes the clock to get it so that you can think about it in this logic. And then, that's a nice description. Because then, they can understand what's going on, OK?
So that's another example of how you would try to treat something in a more simple way, in a different way-- very different than what I talked about yesterday. But I think digital engineers-- they understand that better. That's a good way to explain to them.
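The classical state-machine picture used in this analogy can be sketched in a few lines. This is a toy example (a 3-bit counter, nothing quantum): a state register standing in for the D flip-flops, plus combinational next-state logic, advanced once per clock tick.

```python
# Toy sketch of the classical state machine from the analogy above:
# a state register (the D flip-flops) plus combinational logic that
# computes the next state, latched once per clock edge.

def next_state(state: int) -> int:
    """Combinational logic: maps the current state to the next one."""
    return (state + 1) % 8  # a 3-bit counter, wrapping at 8

def run(clock_ticks: int, state: int = 0) -> list[int]:
    """Latch a new state into the register on every clock edge."""
    history = [state]
    for _ in range(clock_ticks):
        state = next_state(state)  # logic settles...
        history.append(state)      # ...and the flip-flops latch it
    return history

print(run(10))  # the register steps through 0,1,...,7 and wraps around
```

In the quantum version, the "register" is the qubit array and the "clock" is the few-gigahertz precession of the qubits themselves, which is why drift and phase errors enter the picture at all.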
OK, so with that, now, let's get in onto the next topic. And that is inductive versus deductive reasoning, OK? And this is something I think we take for granted.
But when you're thinking about system engineering, you want to understand this concept. So as physicists, we're really familiar with deductive reasoning, OK? So we have a Hamiltonian.
And then, from the Hamiltonian, we can then deduce what the system is going to do. We reduce things to F equals ma, and then we know what the dynamics of a mechanical system are doing. So we're always trying to reduce things to first principles.
And for example, for superconducting qubits, we have the transmon Hamiltonian. And from that, you know everything that's happening, OK? And then, all the way down at the bottom is something you would call inductive.
And you would use inductive reasoning when, let's say, you do a first experiment. And you observe some physical phenomenon. And you say, OK, this physical phenomenon-- it's how it works.
You may not know how it works yet. But at least you can repeat the experiment, or try different things, and get it to work. And for example, in superconducting qubit, right now, everyone's talking about tantalum qubits being really good. And that's great.
And everyone's making it, having good results from that. But I would say, people don't know why tantalum is better than aluminum. So this is an inductive piece of information, not a deductive one. Now, people are hypothesizing why it's better.
But we have to prove it. And basically, to prove this, we have to do experiments. And to do that, whatever our inductive reasoning, it just gets more and more deductive over time.
And then at some point, we can put it near the top here. And that's what we're doing in physics. We're trying to get that.
Now, one has to always be reminded, although we're physicists and we're always saying we understand something, we may not. And for example, I don't know how many people here have done clean room processing. This is a recipe, right?
And you do that recipe. And it works. And you get a result.
And maybe for some of us, that's what cooking is like, right? You have the recipe, and something works. Materials are a little bit like that. You might know a little bit more about materials.
And then over time, of course, we're trying to make it better. Algorithms may be higher. The most interesting thing is the quantum mechanics of this system, OK?
The quantum mechanics-- we understand quantum mechanics. We know the Hamiltonian. We pretty much know how it's supposed to work. And so that's pretty high in deductive.
And of course, it gets better over time, as we do it. Let me just say something very curious about this. If you go ahead and you do an experiment, where you run some algorithm, and you find a time crystal, or this, or that, you get a Nature or Science paper.
But in the end, all you've done is integrate the Schrödinger equation on a machine that you could have run on your laptop, OK? So if you're up here, and you're very deductive, and you do an experiment, it's great. But it's so deductive, we knew what was going to happen.
If you're down here, and you invent a new qubit type, or a new process, or new materials, or whatever-- it's much harder to get into a good academic journal, or a good publication. A lot of times, you have to fight for it. And maybe people don't really appreciate what's going on there.
And yet this is the hard thing to do. Because you have to measure nature, and figure it out. So when I run programs, I always remember this, and remember that these are nice. And you do that.
But a lot of the development you're doing is going down here and trying to understand better what's happening as you build your system, or operate it, OK? So you really have to focus on this. So in terms of science going from inductive to deductive, it's basically, you can think of going up as increasing your knowledge.
And here, you might have an idea and do a demo. And it's inductive. And then, you reproduce it, and make a recipe, and have theories, and models.
And it just gets better and better as you go up to deductive. It's more predictive, OK? And there's actually a mechanism, or a thing that people do that describes this. And this is called the technology readiness level.
And there's various ways people talk about this. I like this one, because this is in NASA so that you start with basic research. But at the end, you need to launch, OK? And at NASA, the highest level of technical readiness is that you launch something, and it's been in space, and you know that it works, which is great.
And you have to move up from here to here. And that's what we're doing. In this case, we're doing better and better science.
But here, these upper technology things are whether you can build systems, modules, and systems, and test it, and whether it's working right. And just because you have something working doesn't mean it's going to launch OK. So you have to test it well.
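The NASA scale mentioned above has nine standard levels. As a reference (these are the standard NASA definitions, paraphrased; they are not spelled out in the talk itself), they can be written down as a simple lookup table:

```python
# NASA's nine technology readiness levels (TRLs), paraphrased from the
# standard NASA definitions -- an assumption of this sketch, since the
# talk only describes the scale qualitatively.
TRL = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Analytical and experimental proof of concept",
    4: "Component validation in a laboratory environment",
    5: "Component validation in a relevant environment",
    6: "System/subsystem prototype demonstrated in a relevant environment",
    7: "System prototype demonstrated in an operational environment",
    8: "Actual system completed and qualified through test",
    9: "Actual system flight-proven through successful mission operations",
}

for level, description in TRL.items():
    print(f"TRL {level}: {description}")
```

On this scale, the talk places superconducting qubits somewhere in the middle: solid science at the bottom levels, but system-level integration and testing at the top still to be earned.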
So this is a good practical tool to understand where you are. And I would say in superconducting qubits, we're somewhere in the middle here. We have a lot of good science behind that.
But in terms of building a big system that will launch and work properly, we have to get better and do a lot more testing. But it's good. At least in superconducting qubits, we're in this mid section for conventional qubits.
So the point here is, you need to be able to predict. That's what happens when you launch, right? You can predict that if you launch again, it should work, OK?
And this does not really happen with the scientific method methodology. That's one of the reasons. And you optimize and scale. And I'll also say, you tend to overestimate how much deductive knowledge you have.
You know something works, but you don't know if it's going to work when you make it bigger, or scale it, or make a million qubits, or whatever. And so you have to be very careful to really understand where you are here. And I once went to a talk at the March meeting where someone was talking about the blueprint of Majorana qubits.
And the talk was-- he was saying, oh, we can build this. We're up here. And it was just an idea, OK? And then, you found out a few years later that maybe Majorana qubits aren't so great.
So you really have to be very careful about your knowledge. OK. So one of the things that was really interesting working for Google is, it's a software company. And they really understand that you use software to scale.
If you want to build a million-qubit system, or even a 50-qubit or 100-qubit system, you have to scale with software. There's just no other way to do that, because that's an automated technology. And you really want to turn as much as possible into software to scale.
So my interesting story here is, we went to this hardware symposium at Google to see what other people were doing. And we went to some of these talks about digital circuits. And all it was about was coding your digital circuits in software so that the computer could automatically place them, OK?
So it's just, really, digital circuits now are software. Now of course, in some hardware, you can't do that. You actually have to build something. And to do that, you need a step-by-step recipe.
And really specify it very, very well so that you can do it, repeat it over and over again. And this is actually something that's hard when you're in graduate school. Because you want to get your thing done, and get your PhD, and go onto the next thing.
And I know I was the same way. And then, what I learned over time as a scientist is, I tend to build things over and over again. So I started really documenting carefully so that next time I wanted to build something slightly different, I'd go back to my documentation. And I could build it quickly.
Or if someone else had to build 10 of them, they could do it. So that's actually a skill that takes a while to learn. Because you aren't really thinking about this in grad school. But it was very important. Did you have a quick question?
AUDIENCE: Yeah. What do you mean by scale? So in this case, are you talking about going from 50 to a million qubits?
JOHN MARTINIS: Or going from 2 to 50 qubits, or 50 to a million.
AUDIENCE: --where how qubits-- I guess I'm missing how that's not--
JOHN MARTINIS: So if you want to build a big chip of qubits, what you do is you learn how to make a cell. And then, you use a step and repeat function of the software to do that. If you want to have 100 circuit boards, you go to a printed circuit board manufacturing house, where they have machines that will do that for you.
And you try to automate as much as you can, which in the end, those PCB stuffers are operating with software. And people all throughout the industry have done that. And it's interesting.
Because you have to think in a new way. Well, if I want to build something, how can I automate building that? And the problem is, many of the things you do in graduate school to build something is one-off, and rightfully so.
But you have to rethink all of that. So yeah. And some of the things, you can't. And some, you can.
And you try to get as much as you can. Yeah, and then, even if you have your software to define the qubits, when you go to testing, you want that definition of your qubits to go to your test program so you don't have to re-enter all those parameters for your test. That's another example.
And it takes time to do that. But in the end, it keeps you from going crazy, OK? I'm going to talk about-- this is an interface example.
A little bit different here. What we do is, we build waveform digital-to-analog converters that put out a waveform-- let's say a microwave pulse-- that goes down into our qubit. And you want to define that waveform so that your qubit does a NOT operation, OK?
Now, there's two ways to build this hardware that generates the waveform. One way is to send in a signal that says, I want a NOT gate. And then inside that FPGA, whatever, it says, OK, I want to have this waveform and do it.
So you calibrate that once. You set a NOT, and you do it. And that's very efficient, right?
Because you just send a very simple command: I want this NOT gate in there. There's another way to do it, in that you do this.
And this is just playing the waveform. In another computer, you say you have a NOT gate, which gives you the waveform. And maybe you program that in Python. And then, that sends that long waveform over to here.
And then, it sends it down. So you can either compile the waveform here in the FPGA, or here in your Python, in your programming computer, OK? And you could say you can do it either way.
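The two options can be sketched side by side. Everything here -- the function names, the Gaussian pulse shape, the 20-sample length -- is illustrative, not the actual Google or FPGA interface:

```python
# Hypothetical sketch of the two interface choices for the waveform hardware.
# Option A: the FPGA stores a calibrated pulse per gate; the host sends only
# a short gate command. Option B: the host (Python) computes the full sample
# list and streams it down.
import math

GATE_LEN = 20  # assumed samples per NOT pulse

def gaussian_pulse(amp: float, n: int = GATE_LEN) -> list[float]:
    """Microwave envelope for a NOT gate, computed on the host."""
    mid, sigma = (n - 1) / 2, n / 6
    return [amp * math.exp(-((t - mid) ** 2) / (2 * sigma ** 2)) for t in range(n)]

# Option A: compact command -- efficient, but the pulse shape is frozen in the FPGA.
def option_a_command(qubit: str) -> dict:
    return {"qubit": qubit, "gate": "NOT"}  # a few bytes on the wire

# Option B: general waveform -- bulkier, but the host can reshape the pulse
# later to compensate for cross-talk you didn't anticipate.
def option_b_command(qubit: str, amp: float) -> dict:
    return {"qubit": qubit, "samples": gaussian_pulse(amp)}

a = option_a_command("q0")
b = option_b_command("q0", amp=0.5)
print(len(b["samples"]))  # 20 samples versus one symbolic gate name
```

The trade-off in the talk is visible directly in the two return values: option A is a tiny, clean interface, option B is a big generic one you can still change after the hardware is built.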
OK. So which way is better? OK? You have two options. And this really has to do with system design.
The way where you're programming in the NOT gate is nice, because you have a simple interface here. But the problem is, things can go wrong in your experiment. And the NOT gate for this computer can then have some cross-talk.
This qubit can have some cross-talk to go to another qubit. And you may have to make this waveform more complicated to compensate for that. Because you can design a system, but you have no idea what your experiment's going to do, right?
And then, if you design it one way, and then something goes wrong, then you're in trouble. So you want to think about that. Both these ways are done in the field.
Some, you program the waveforms to the FPGA. Some, you program the gates to the FPGA. But then, this is more general.
So this is the way I did it. And yeah, we have cross-talk. And we have to fix it. And we don't know what's going wrong.
So it's a more general way to build the interface. But that decision in systems engineering is very important in terms of the long-term health and speed that you can do anything, OK? And it has something to do with reliability.
Things are going to go wrong. And can you fix it? And you don't want to have to build new hardware. Yes?
AUDIENCE: You said slower to send the waveform to the FPGA. Is this lag something that's going to lead to errors on your device because of--
JOHN MARTINIS: Yeah. And you want to-- there's a bunch of-- there's cross-talk between the microwave lines. And that'll lead to an error if this microwave pulse went to another qubit, too. You can get errors.
And you have to correct for that. So there's a lot of things you have to worry about, because it's not a perfect system. So my statement, the way I always design a thing is, try to overdesign it when you start. Because you don't know what's going to go wrong.
And then over time, you might find, OK, there's not a lot of cross-talk. So then, I can simplify it. And then, you go ahead and do that. And there's a variety of ways you can organize your electronics.
But yeah. You don't know what's going to go wrong. And if you build some hardware and it doesn't do what you want, it could take you a year to redesign that hardware. Or you buy a piece of hardware from a company, and it doesn't do what you want.
And then, you're going to have to spend another $100,000 to buy it from someone else. So you have to worry about it. And when you design the stack, you have to be very careful what you're doing, what the interfaces are, not to get too elegant. You want to make it general.
And also reliability is key to scaling. And that's why you try to do it in software. You need good specifications and documentation testing for software. You need to have code review. Someone else looks over your code who's a much better software engineer than you are.
[LAUGHTER]
And then, this just doesn't happen in grad school, OK? I understand. That's OK. But when you go to a company-- and what's interesting about code review is, once you learn how to make better code, it more or less takes you the same amount of time to code.
Because someone's looking at whatever. But you write much better code. And everything is more reliable. And you spend less time figuring out some bug that is going to delay you lots of time.
So you just get better at it, OK? And the software engineers will help you get better, and learn how to code better. That's one of the things I learned at Google. I really appreciated that.
And in the end, there's something called continuous integration. So you develop software. You might have a hardware emulator.
Whenever someone makes a change, they do a full suite of tests to make sure nothing obvious breaks. Now, something subtle may break. But hopefully, everything is tested, and it'll be OK.
And that's how modern big computer systems work. And this is why, for example, like with Google, your Chrome-- they're updating it every few days, or a week. Because they're doing continuous upgrades of it.
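A minimal sketch of that loop -- a toy emulator plus a suite that must pass before any change is merged. The class and test names are invented for illustration:

```python
# Hypothetical sketch of continuous integration for lab software: every
# change runs a full test suite against a hardware emulator, so nothing
# obvious breaks before it reaches the real fridge.

class QubitEmulator:
    """Stand-in for hardware: applies gates to a classical bit (NOT flips it)."""
    def __init__(self):
        self.state = 0

    def apply(self, gate: str):
        if gate == "NOT":
            self.state ^= 1
        else:
            raise ValueError(f"unknown gate {gate!r}")

def test_suite() -> list[str]:
    """Run on every commit; returns the names of failing tests."""
    failures = []
    emu = QubitEmulator()
    emu.apply("NOT")
    if emu.state != 1:
        failures.append("NOT flips 0 -> 1")
    emu.apply("NOT")
    if emu.state != 0:
        failures.append("two NOTs are identity")
    return failures

# A change is only merged when the suite is green.
print("OK" if not test_suite() else test_suite())
```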
And it's just the way software is built these days. OK. Does anyone here fabricate in the clean room? Yes, yes? OK, so I'm going to speak to you.
[LAUGHTER]
Clean room fab is critical and difficult. And you tend not to get credit for the amazing things you do, OK? And there are good historical examples.
And you're never appreciated enough, and we must cherish you and reward you, OK? I really feel strongly. And part of this goes back to my inductive/deductive distinction.
Because you're doing something that's inductive, and it's really hard. And it takes a long time. And you have to know what's going on, have a lot of experience.
So you just have to appreciate them. And then, you have to figure out how to integrate very carefully with product goals. And especially challenging because it's inductive-- it's hard to predict if something will happen. You change one part of your process, and it can all go bad.
And then, you have to spend a long time fixing it. So fortunately, the fab people are used to it, OK? But it's really hard.
And then, the other thing to realize-- this was really funny to realize this at Google. There's two different activities. One is research and development. And physicists are really good for changing the parameters, and trying to optimize and get something good.
And that's really wonderful. But the problem is, physicists always want to change and optimize. So at one point of time, if you have a process, and you give five junior students a process, over a year, that process will diverge in five different ways.
And you'll have five different processes that hopefully work. And if you're trying to stabilize a process and know what you're doing, it just becomes a mess, OK? And the problem is, physicists can't help but to try to optimize, and tweak, and whatever. And that's really good, OK?
But at some point, it goes too much. So what happens-- and this is true in a regular production environment-- is, basically, you have to use technicians.
At some point, once you know your process, you give it to a technician. And the technician does not want to change it over time. And the fact that it's not defined-- they're going to complain to you, which is good.
And you're going to define it better. And they're going to do it the same way. And then, of course, what you do is, you talk to the technicians. And you might carefully say, change this step to this step, and do A/B testing, and figure out what's better.
But it has to be organized. And they have to know exactly what to do. So this really gets your reliability up. And that's what you want to do.
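That kind of organized A/B test can be sketched with a simple permutation test. The T1 numbers below are made up for illustration:

```python
# Hypothetical sketch of organized A/B testing of one fab step: compare T1
# from wafers run with recipe A versus recipe B, and ask how likely the
# difference is to arise by chance. The data are invented for the sketch.
import random

t1_a = [14.8, 15.3, 15.1, 14.6, 15.0]  # microseconds, recipe A wafers
t1_b = [16.2, 15.9, 16.5, 16.1, 15.8]  # microseconds, recipe B wafers

def mean(xs):
    return sum(xs) / len(xs)

def perm_p_value(a, b, trials=10_000, seed=0):
    """Chance a mean-difference this large arises if A and B are the same."""
    rng = random.Random(seed)
    observed = abs(mean(b) - mean(a))
    pooled, n = a + b, len(a)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        if abs(mean(pooled[n:]) - mean(pooled[:n])) >= observed:
            hits += 1
    return hits / trials

p = perm_p_value(t1_a, t1_b)
print(f"B - A = {mean(t1_b) - mean(t1_a):.2f} us, p = {p:.4f}")
```

The organizational point survives the toy statistics: the technicians run exactly two defined recipes, and the decision to switch is based on the comparison, not on ad hoc tweaking.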
Now, the problem here is that if you tell them to stick to some recipe that's not the best recipe here, what you've done is maximize reliability to make a lousy device, OK? So you have to be careful not to do that. And in fact, I've talked to people where this has happened in industry.
And that's not good. But you do have to be smart about that. But you do need to use technicians. Physicists just can't do this; just admit it, OK? You can't not change anything. It's the nature of our being, OK?
OK, so system engineering leadership-- this is the management of this. So the project engineer, system engineers maintain intellectual control of the problem solution, like what you're trying to do. Otherwise, this paradigm prevails and leads to confusion and chaos. And people are doing it.
This is the best quote. "The focus of making timely and informed technical decisions instead of producing documents"-- OK, I really like that. So that makes me happy for system engineering.
If you have some new ideas and you want to try something new, this is not a Twitter exchange where you give your favorable argument for doing something, OK? This is hard to do. You have to give your pros and cons of doing it.
Why is this great? But what could all the problems be? OK? And my rule is if someone pitches something and they don't give any cons to it, then it's like, well, go back and work on it a little bit more. Because you really have to think about what goes wrong, OK?
Now, when you do academic research, you do something new, something magic may happen. That's great. Not quite so important.
But when you have a big project, you have to understand that. And that way, you can argue the ideas, and then come to agreement. One of the interesting things-- system engineering is about, I think, 10% to 15% of a total project. That's 1 person in 7.
So there's a lot of people doing this and making sure that it stays on-track. And the other thing is, if you don't have system engineering and you don't have much supervision, it's more of an academic, and sometimes a corporate style. You can get into trouble.
Let's say you have a project with 50 critical tasks. And there's 50 subgroups working on it, but not much oversight. That means you have 50 points of failure.
And what's the chance that all 50 will work, OK? No, you need the system engineering. Check on, make sure what's going on. If something's not working, you put more resources into it and get it to work.
And you can't have many points of failure. It just will never work. And something is always going to fail. People who have done experiments know that, OK?
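The arithmetic is worth writing out: assuming the 50 critical tasks succeed independently, each with probability p, the chance they all work is p to the 50th power:

```python
# The 50-points-of-failure arithmetic: if each of 50 critical tasks succeeds
# independently with probability p, the chance that ALL of them work is p**50.
p = 0.95  # even a quite reliable 95% per task...
for n in (1, 10, 50):
    print(f"{n:>2} tasks, all working: {p**n:.1%}")
# ...50 such tasks give under a 10% chance the whole system works, which is
# why oversight has to find the weak points and put resources on them early.
```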
So here's an example of systems specs for error correction. We talked about that yesterday. And mostly, people are talking about, hey, the surface code has so many qubits. That's bad.
So let's try to look at other error correction codes, and whatever. And that's really good. I'm glad people are doing it.
But what happens is, people are looking at the academic side right now, which is, how do we correct a logical qubit and make it a little bit better? In the end, you want to build a quantum computer. And there's a huge number of system requirements you have to think about.
What is the threshold? What are the actual physical errors that you have? What's the qubit number for some computation-- again, is this too large? And is the architecture complex?
Can you build it? That's super important. What is the error decoding complexity? Some codes are NP-complete for decoding, which means you can't do it, or you need a huge computer to do it, OK?
It doesn't make any sense. Can you build logical gates? Can you make a CNOT gate in a simple way?
How does physical measurement work? Can you build parallel logical gates, which is how you speed up the quantum computation? Do you have long distance gates?
You have to distill some states to do non-Clifford operation. How efficient is that? In fact, in the surface code, 90% of your quantum computer is for making these special states, OK? Only 10% is for the real operations.
Sensitivity to correlated errors, blah, blah, blah, blah, blah. OK? So this is a partial list. So when someone comes up with an idea, it's great that people are thinking about this. But you have to think about the big picture of what you're doing to make sure all those things line up.
And sometimes, you don't know. And then, you have to go ahead and figure that out, and move ahead. But you really want to know from a systems point of view how it's going to work.
And one of the reasons why I like the surface code is, basically, all these things here are OK. You can imagine doing it. And if someone comes up with something better-- better in all of these things, not just one or two numbers, but all of them-- then we change over, as long as it's not too different. So that's fine.
OK. Finally, I'm almost running out of time. This is probably the most important thing to do-- testing and reliability. And in terms of the scientific method, you always test your device and figure out what's going on.
But it's OK if it's hard to use and whatever. As long as you get your data, you can write the paper and go on. It's not like that when you're building a system.
You have to make sure it works really well. And one of the things that's very hard to understand is, when you do a system, typically 40% of the effort is for testing. It's a lot of testing.
Typical in the scientific method, it's maybe the last 10% of what you do. But you have to do a lot of testing, and do it often, and have a big group doing it. And in fact, when you want to write the best software, the way they tell you at Google is, you write the test programs first. And then, you start coding whatever you're doing.
And then, you always check with your test program whether you're testing OK. And although you spend a lot of time writing your test, it helps you understand what's going on. So by the time you write it, you get really efficient.
And then, the most important thing is that since you're always testing it, it's reliable. And that's what you want in code. Yeah. So those are the fundamentals.
You do small testing, and then larger testing, a full system test. And you really understand what's going on. And reliability is key. It's very expensive to fix.
So let me say here, there's a principle here that's very well known in the software industry, and in other industries, that you want to do user tests. They develop some software. They do an internal test.
But until they actually launch and get user data, they don't know it's working well. I think in hardware, you can think about airplanes, right? You have a new airplane. You have a test pilot.
They check it all out. But you don't really know how safe an airplane is until you run millions of flights, and you test it, and you see all the crazy things that can go on. And unfortunately, sometimes, they fail.
And the nice thing about the airline industry is when they fail, they learn. And they improve their procedures and hardware, OK? So you want to do that.
And what I think is particularly interesting in the quantum sphere is that companies are putting their quantum computers online for people to use. And then, people are continually testing them, and getting them to work, and improving things.
And I was really disappointed to find out-- Google, this year-- we had put it on the cloud for some users. But they've pulled back on their cloud service and are no longer doing it. And that means they're no longer testing at this user level whether their quantum computers are working properly.
And it's sad. Because if you want to build a million-qubit quantum computer-- eventually, in 10 years-- you should be testing 50-qubit quantum computers right now to see all your vulnerabilities in your hardware and software, OK? But I think they have some plan to do better.
But you really need to be testing all the time, and at this big level. And it's a really important part of system engineering. OK, a few more minutes, almost done.
So let's talk about quantum system engineering. And for one device, the qubits have to have a long coherence time. You have to couple them together strongly, measure them fast, low errors.
These are competing requirements. If you want long coherence, you isolate the qubit from the rest of the world. And you have a long coherence time, but you maybe can't couple them together.
And practical things: good control of each qubit, room for control circuitry, reprogrammable, flexible architecture, scalable, general purpose. Lots of things you have to get right at the same time. I would say what's particularly hard in quantum is, you can't copy quantum information, OK?
And this really makes things hard for you. And let me just give an example of a classical CPU, where you have a classical computer. You have a CPU. You have some input devices.
They go into here-- output devices. They have some memory-- internal, external, whatever. Because you can copy classical information, you can shuffle data around from here to here and do all the things you want.
And each of these units can be designed by a different team. And you write some interface specifications. And you're fine.
However, in a quantum computer, you can't copy information. So the person who's designing this is fighting with the other people to say, I don't want to let go of my qubit. We need to get both of these to work at the same time. And they have to be communicating all the time to make sure their specifications are right.
Because they're sharing the qubit. And this makes it hard, at the qubit level, to get all these things right, OK? The other thing is, you have control systems-- so qubits, some kind of control system for the qubits.
In the experiments we did at UCSB-- and Google is even more complicated-- you have about 100 parameters per qubit that you have to set to make it really work, right? This is working at a fraction of a percent error. Lots and lots of parameters.
And you have to write code to do that. And this basically means you have a large area in volume-- let's say even CMOS-- to do all that. And so your control system is large.
Now, people want to simplify the control system. But remember, we're limited by the errors of the qubits. And if you had low enough error qubits-- say, below 10 to the minus 4-- then you have some engineering margin to simplify control.
But we really don't have that margin here. So this is a really big constraint. And actually, as you talk about scaling up, this is quite the serious issue. I have a small paper on this to describe that.
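A back-of-envelope sketch of that information and size problem, using the roughly 100 parameters per qubit quoted above. The per-parameter calibration time is an invented assumption, only there to show the scaling:

```python
# Back-of-envelope for why the control system is an information and size
# problem: ~100 settings per qubit (the scale quoted in the talk), each
# calibrated to a fraction of a percent. The serial calibration time per
# parameter is an illustrative assumption, not a measured figure.
PARAMS_PER_QUBIT = 100
SECONDS_PER_CAL = 1.0  # assumed serial time to tune one parameter

for n_qubits in (50, 1_000_000):
    n_params = n_qubits * PARAMS_PER_QUBIT
    days = n_params * SECONDS_PER_CAL / 86_400
    print(f"{n_qubits:>9,} qubits: {n_params:>11,} parameters, "
          f"~{days:,.1f} days if calibrated serially")
```

At a million qubits the parameter count alone reaches 10^8, and any serial procedure becomes absurd, which is why calibration has to be automated and parallelized in software.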
OK, I think this is the second to last slide. OK, physicists are bad system engineers. You're born and bred to do something different.
What you're supposed to do is, you have all these physical phenomena. And you separate out some experiment, or some theory. And you focus in on one little thing. And you get that experiment done really right.
So you're focusing on one thing. In system engineering, you have to get a lot of things. You have to think about all the physics that's going on. And you have to rethink about what you're doing.
It's a big change from research to building. Of course, we still need research as transformational. You have to know the overall goal, even if it's not perfectly clear now.
You need clear concepts to interface and optimize, not just long papers. And you have new and complex technology to explain to teams. So you need abstractions and simplifications so that everyone can understand.
Again, for quantum, calibration is super important-- that's how you're going to get low errors. The control system is ultimately an information problem and a size problem. And the last thing-- this is what I did yesterday-- is you need to advertise what your big problems are. Because that is the only way to fix them, right?
If you're not talking about it-- and a big principle of system engineering is, you solve your hard problems first. Because then, you have a lot of time. If you solve the easy problems and wait for the hard problems at the end, you run out of time. And this is a known failure mechanism.
OK, engineering needed for a complex system. System engineer has templates on how to organize a project. People know how to do that. And for the scientists, you just need to think differently.
You have to go beyond the scientific method to think about the project goals, and how deductive/inductive it is. Think about software, and really documenting your recipes, and lots of testing, OK? And in the end, what I think is interesting about all this is, you just need to do better physics.
You need to understand it better, really make sure that it's right, and it's predictive. And I think that's a nice challenge. It's a different kind of physics. But you need to do it better.
And finally, thank you for listening. If you have any comments or questions, I'd love to talk about it. I think it's a very interesting topic, as physicists go out and try to build real systems. So OK, thank you very much.
[APPLAUSE]
AUDIENCE: If you have time for questions [INAUDIBLE]?
AUDIENCE: Should maybe more people, or more physics students be working over in the accelerator rather than in the solid state area--
JOHN MARTINIS: Yeah.
AUDIENCE: --to learn this sort of thing as students? It seems to me that much of what you're talking about is, even on a small scale accelerator on campus, you're working with 200 people on a project--
JOHN MARTINIS: Yeah. Yeah, and I would say the high energy physics community really understands that. And I'm sure that you're teaching your students that. Because you're working on a big project that's very complex.
So I would say, yeah. I imagine that field knows about this. But in condensed matter, people go into condensed matter because they're small experiments and work independently.
And this is something you have to learn. But one of the reasons I give the talk is, for the community for quantum information, this kind of thinking is a little bit different. But again, I think we can learn a lot from the high energy physics people.
In fact, we should be hiring high energy physicists to work in quantum information. That would be a good thing. Because they all understand how to do this.
AUDIENCE: It's not just high energy. Think about the JWST team.
JOHN MARTINIS: What?
AUDIENCE: Think about telescope team.
JOHN MARTINIS: Yes, yes. And I think the astronomy is quite good at this, too. That's right. Yeah. Yeah, it's big science.
And what's interesting is, condensed matter is typically-- you don't think in this way. And it's a draw for people to do condensed matter. Because it's more personal. Thank you.
AUDIENCE: So how many years are we from the [INAUDIBLE]?
[LAUGHTER]
JOHN MARTINIS: Well, IBM and Google are talking about end of this decade. So 10 years, 9 years. So we'll see.
We'll see if they get there. But yeah, that's what we're trying to do. In some sense, they've already launched some products. But they're little sounding rockets, and not going to the moon yet.
AUDIENCE: Could you say a little about the 1%, or 0.1% error? And is there hope for a significant improvement there? Or is it all going to be--
JOHN MARTINIS: So yes. Thank you, yeah. I should have talked-- OK.
So the target right now is 0.1% in a million qubits. Now, when we did the quantum supremacy experiment, we had very fast gates-- 10, 12 nanoseconds. These are faster than the single-qubit gates.
So they're 10, 20, 30 times faster than gates other people are doing. But the coherence time was only 15 to 20 microseconds. But that ratio between 12, 15 nanoseconds and 15 microseconds is a factor of 1,000. And thus, you could get gates below 0.1% or so.
The nice thing about that is, once you have the fast gates, then you say, all I have to do is make better qubits. And people have made qubits in big arrays, like IBM, with T1s in the 150-microsecond range, and sometimes more, sometimes a little bit less.
So it is possible. Now, the problem is when you look at what IBM did. They're making simpler qubits. So it's a little bit easier to get long coherence time.
And the group at Google-- they were stuck at 15, 20 microseconds, even for a couple of years when I was there. So it's been about five years that they've been stuck. And that's a problem.
The nice thing is, there's a group in China-- one of my old postdocs-- just showed that with their architecture, they were able to get 150 microseconds. So I think it's possible to do 10 times better.
Of course, they have to get everything calibrated up, and the gate errors low. But they know how to make it longer. So say, if you get a longer coherence time by 10, then you can take your errors, which are down to 0.2, 0.3 now, to 0.02, 0.03. And that's good enough.
So I'm actually very optimistic this can happen. But my guess is, there's still a lot of things you have to work out, and cross talk, and other physics you have to work out. But I think it's quite possible.
It's not quite proven yet. But you have to do the full thing. Because there could be some design issues with the experiment in China. But I really think it can be done.
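The error budget here follows a standard rule of thumb -- gate error scales roughly as gate time divided by coherence time (a heuristic, not an exact formula for any specific device). A quick check of the numbers quoted in this answer:

```python
# Rule-of-thumb error budget: gate error ~ t_gate / T1.
# Numbers from the answer above: 12 ns gates, T1 of 15 us today
# versus a hoped-for 10x improvement to 150 us.
t_gate = 12e-9  # seconds
for t1 in (15e-6, 150e-6):
    ratio = t_gate / t1
    print(f"T1 = {t1 * 1e6:.0f} us: t_gate/T1 = 1/{t1 / t_gate:,.0f} "
          f"-> errors of order {ratio:.0e}")
```

The factor-of-1,000 ratio gives errors of order 10^-3, consistent with "below 0.1% or so," and a 10x longer T1 moves the same gates toward the 10^-4 regime quoted as the target.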
AUDIENCE: OK, so this may be a different question. You said there's a lot of opportunity for students. What about for Google, these private companies? What about international students? How many opportunities are for people who are not US citizens, I guess?
JOHN MARTINIS: So the people who are what?
AUDIENCE: Not US citizens.
JOHN MARTINIS: Oh, not US citizens. Yeah. The problem is, the US government has declared this a critical technology. And there's a big competition.
And there are certain intellectual property protections put on this. And yeah. That's a problem. And all the companies have certain rules for doing that.
And one has to be careful. Now, at Google, when I was there, they hired someone from mainland China. And they were not supposed to work on certain areas of the project. And they were hoping that person would eventually get a green card so that they could work more freely.
But yeah. If the US government declares that, then it's a problem. Generally, academic research is pretty free and open. But the companies are harder. That's just the way it is.
AUDIENCE: Right.
AUDIENCE: How is it that the research community, these big companies, interface together? I'd imagine if you're designing something in a lab, you can't match the scale of the infrastructure that they're doing at Google. So how do you compare and contrast results from one and the other?
JOHN MARTINIS: Yeah. How do you-- yeah. How does Google and other companies interact with people in academia? Now clearly, the quantum field needs a workforce. They need to hire people.
So they're really working hard to do this properly. The problem is that in the companies themselves, there's a lot of secret information, intellectual property that they don't want to leak out. Because they'd be giving away-- for example, a lot of times, it's taken years for them to figure something out.
So there's always this tension there. Now, I know at Google, they had internships so that people can work at Google and do that. And there's some hardware, and there's software internships.
And that way, you can share it. But most of the companies are just doing everything themselves. So that's the state of it. And that's because lawyers are involved.
And there's that possibly valuable IP. If you take, for example, the astronomy community, there's not as much money. And people are very much working together to build the instruments.
And that's a good scientific enterprise. But I guess you can say that the industry has more money. So they can run it that way. I'm not sure that this is-- I think a little bit more cooperation would be good. But it's just really hard. The lawyers don't want you to do that, in the end, when you try to do it.
AUDIENCE: Another question. Just a follow-up on that. If you don't write papers, but you have to write patents, and don't infringe on intellectual property, how much time do the scientists and engineers spend on that activity?
JOHN MARTINIS: The nice thing about writing patents is, they have lawyers that will do it for you. So you meet with them a few times. And they put it together, and you do that.
That's not a big burden on you. And you have to learn a little bit about how patents work. People are still writing papers.
Just like in the semiconductor industry, people write papers. But they tend to be on the big results, and not on all the details of what you do. And that's typically how you protect your IP.
And yeah. You're spending less time writing papers, except there are a few big ones. I would say, I've found that not writing papers is actually a problem. And what happens is, when you write papers on your intermediate results, everything's well-documented.
You work hard on it. The fact that there are referees means you have to be clear. The referees catch errors that you might make.
And you have a much better documentation of what you did. When you do things internally, you tend not to spend as much time on that. Because you're busy with other things. And no one really cares.
And I think it can be less careful. And I think that's a problem. And I think, in a company, you have to work hard to make sure that the quality is right.
And also, the referees, although you get annoyed when they nitpick at your paper or whatever-- they're actually doing a good job to make sure you're writing it right, and your logic is good, whereas when you don't have that negative feedback, which is unpleasant-- but when you don't have that, you tend to let things slide. And if you make a wrong assumption on something, or don't describe it clearly, and someone gets confused in the future, that can kill your project.
Or it can waste you a lot of time. So I actually think this has to be done very carefully. And yeah, I would say that was something that we could definitely do better. I would do better in the future on that.
AUDIENCE: Well, I think we should call a timeout. But if you do have more questions, please do come up and talk to John. He's also here another two days this week. So you can talk to him here or later on. So let's thank him again.
JOHN MARTINIS: Yes, thank you for your time.
[APPLAUSE]
Quantum computing has entered a compelling scientific era as now quantum algorithms can be run on multiple physical systems. Building even larger machines with error correction is a significant engineering challenge that will require good systems engineering practices. Here John Martinis discusses some scientific and technical strategies and ideas that will be important to consider when transitioning from scientific research to development of a complex engineered system. Also considered will be constraints specific to quantum computers, for example the inability to copy information and the need for complex control systems.