Jarrod McClean (Bonus): Parsing Logical Qubits

Quantum computing comes with a new layer of concepts. Quantum bits are called qubits, but there’s more. Physical qubits are often grouped to form logical qubits. In our recent conversation with Jarrod McClean, we discussed logical qubits. And we’re sharing that discussion as a Science in Parallel short.


In case you missed it, check out our full episode with Jarrod McClean.

Featured image by JPNARPHY via Wikimedia Commons, Creative Commons Attribution-Share Alike International license.

Transcript

Transcript prepared with Otter.ai with human copyediting.

Sarah Webb  00:03

This is a Science in Parallel short, an add-on episode where we share insights, explainers and other computational-science audio nuggets. In our last episode, I spoke with Jarrod McClean from the Google Quantum AI Laboratory. At one point, I asked him about logical qubits versus physical qubits, and here’s his full answer, talking about logical qubits and why they’re important in quantum computing.

Sarah Webb  00:39

How many qubits do we need for a logical qubit? Or do we know?

Jarrod McClean  00:42

Yeah, that’s a great question. So to kind of break down a logical qubit: our team uses superconducting qubits, but there are many physical platforms one can use to instantiate what we call a qubit, which is sort of an abstraction of any two levels in a quantum system that you can controllably dial between. But most of these systems that you find in nature are fragile in the sense that the environment jostles them around. It’s very easy to lose information that you put into them, so that makes it hard to do long computations, or even short computations in some cases. And so, maybe as a historical note, people were very worried about how to overcome this problem. And there are at least a few fundamental properties of quantum mechanics that made people believe early on that this might be impossible.

Jarrod McClean  01:33

So, for example, measurement destroys superpositions, and there is this other facet of quantum mechanics called the no-cloning theorem: if I hand you an unknown quantum state, it’s generally not possible for you to make a perfect copy of that state without knowing the details of how it was made. And these are aspects that are, in certain senses, core to how we do error correction classically, say, on wireless cell phone signals or other communication channels: we repeat the information in some way, hope that the errors are sort of local, and then measure and do majority votes. It’s really this redundancy that helps us. And if we can’t clone an arbitrary state, then how are we going to get some of these benefits?
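The classical repeat-and-majority-vote scheme Jarrod contrasts with the quantum case can be sketched in a few lines of Python. This is a toy illustration added for readers, not anything from the episode: a bit is copied three times, each copy flips independently with some probability, and a majority vote recovers the original unless two or more copies flip.

```python
import random

def encode(bit, n=3):
    # Classical repetition code: copy the bit n times.
    return [bit] * n

def noisy_channel(codeword, p=0.1):
    # Flip each copy independently with probability p.
    return [b ^ 1 if random.random() < p else b for b in codeword]

def decode(codeword):
    # Majority vote: the value held by most copies wins.
    return 1 if sum(codeword) > len(codeword) / 2 else 0

# With p = 0.1 and three copies, decoding fails only when two or
# more copies flip: 3 * p**2 * (1 - p) + p**3 = 0.028, better than
# the raw per-bit error rate of 0.1.
random.seed(0)
trials = 100_000
errors = sum(decode(noisy_channel(encode(0))) != 0 for _ in range(trials))
print(errors / trials)
```

The point of the no-cloning theorem is that this copy-and-vote trick cannot be applied directly to an unknown quantum state, which is why quantum error correction instead spreads information nonlocally through entanglement, as Jarrod describes next.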

Jarrod McClean  02:19

And through some creative thinking and innovation, people like Peter Shor came up with methods where quantum error correction was possible. The scheme is essentially that you spread the information of a single qubit out over many physical parts, and this creates a sort of large, single entangled qubit that protects its information nonlocally through entanglement, so that small errors on one part can be systematically removed. And so it was actually a big triumph that this was even possible, but the early error-correcting codes, while showing it was possible, were very hard to implement in a realistic system. They had high overheads, and you really had to think hard about the operations being faulty as well as the bits, which we often don’t account for classically, because our operations are very nearly perfect. And so it’s been a long history of moving these codes from their original theoretical construction to more and more practical variants, and mapping them closer and closer to hardware. There are many moving pieces that make it possible.

Jarrod McClean  03:24

But eventually, of course, you want to end up with a robust logical qubit, made of many physical qubits, that you can do operations on, and it will live for a very long time, long enough to complete your computation. And that’s determined by, say, how many physical qubits you use to make a logical qubit, your base error rate, how well you can decode them, and details of the code that you put it in. And so all of these factors people are trying to optimize at the same time right now, because their goal is, of course, to get to a useful computation as soon as possible with as few physical resources as possible, and that looks a little different for each type of architecture. So in superconducting qubits, we have a much easier time, as do others, manufacturing in, say, 2D with local connections. And this binds us very strongly to these types of error-correction codes called surface codes, which live on a surface, meaning qubits can talk to their neighbors on a 2D grid, but they come with their own sort of predetermined ratio of how many physical qubits you need per logical qubit.

Jarrod McClean  04:33

And then you also will look at how many gates are in your algorithm, and this will determine the error rate you need, and so the encoding overhead. But that encoding overhead is also strongly influenced by many technical factors like the base gate error rates, qubit lifetimes, and the quality of our decoders. So there are a lot of pieces going into that. Maybe it suffices to say that many of these estimates have come way down in recent years, due both to advancements in error correction and decoding and to improvements in the hardware. If you talked to someone five or 10 years ago, they might have told you, well, you’re going to need, like, 100,000 physical qubits to make one logical qubit. Well, that number has changed, I would say, pretty dramatically, even for surface code qubits. I think, you know, 500 to 1,000 to one is a more realistic scenario.
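A back-of-the-envelope estimate of this overhead can be sketched in Python. This is an editor-added illustration, not from the episode: it uses the widely quoted rule-of-thumb scaling for surface codes, where the logical error rate falls roughly as prefactor * (p_phys / p_threshold) ** ((d + 1) / 2) for code distance d, and a distance-d patch uses roughly 2 * d**2 physical qubits. The prefactor of 0.1, the 1% threshold, and the target logical error rate are all illustrative assumptions.

```python
def surface_code_distance(p_phys, p_target, p_th=0.01, prefactor=0.1):
    """Smallest odd code distance d at which the heuristic logical error
    rate prefactor * (p_phys / p_th) ** ((d + 1) / 2) meets p_target.
    This scaling form is a common rule of thumb, not an exact law."""
    d = 3
    while prefactor * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def physical_qubits_per_logical(d):
    # A distance-d surface code patch uses roughly 2 * d**2 physical
    # qubits: d**2 data qubits plus about d**2 measurement qubits.
    return 2 * d * d

# Illustrative numbers: 0.1% physical error rate, 1% threshold,
# target logical error rate of 1e-12 per operation.
d = surface_code_distance(p_phys=1e-3, p_target=1e-12)
print(d, physical_qubits_per_logical(d))
```

With these assumed numbers, the estimate comes out around a thousand physical qubits per logical qubit, in the same ballpark as the 500-to-1,000 ratio mentioned above; a lower physical error rate or a looser target shrinks it quickly.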

Jarrod McClean  05:26

And now people are even looking at higher-rate codes like these LDPC codes, as they’re sometimes called, these low-density parity-check codes, which have the capability of storing many logical qubits in the same physical patch. So we might see that number come down by another factor of 10 or even 100, and these kinds of advancements are the things that can wildly shift your expectation on timelines. That’s not to say those codes people are looking into are perfect yet. They have overheads: they might increase the time it takes to do a gate, but that might be a trade-off you’re willing to make if you need a factor of 10 or 100 fewer qubits. And so I think now that these devices have gotten closer to being realized, these codes have drawn a lot more attention, not just in the theoretical question of which codes might be nice, but optimizations on specific codes and specific systems. And through that, we’ve seen great reductions in resources and better guidance to the hardware of what to build. And that kind of virtuous feedback cycle, I think, is really accelerating our timeline.

Jarrod McClean  06:35

So I guess that’s a very long-winded way of saying it’s tough to give a direct this-many-physical-qubits-makes-this-many-logical-qubits answer, in part because it’s changing and improving every day, and there are lots of trade-offs. But I’m hopeful that systems with, you know, on the order of 100,000 physical qubits will have enough logical qubits to do very interesting applications that are beyond classical. And maybe that number could even go down further.

Sarah Webb  07:06

To learn more about Jarrod McClean, logical qubits, and Google’s Quantum AI Laboratory, check out our show notes at scienceinparallel.org. Science in Parallel is produced by the Krell Institute and is a media project of the Department of Energy Computational Science Graduate Fellowship program. Any opinions expressed are those of the speaker and not those of their employers, the Krell Institute or the U.S. Department of Energy. Our music is by Steve O’Reilly. This episode was written and produced by Sarah Webb and edited by Susan Valot.
