In our seventh season, we’re putting a spotlight on quantum computing, a technology that could help speed up high-performance computing and artificial intelligence, shore up cybersecurity, study complex natural systems and much more.
Jarrod McClean works on quantum algorithms and applications at the Google Quantum Artificial Intelligence Laboratory, and this conversation links some of the ideas about AI for science from our last season to emerging quantum technology.
Join us for a conversation about Jarrod’s work at Google, where he thinks quantum computing could soon enter the computational science workflow and the mental gymnastics of harnessing hardware that researchers are still designing.
You’ll meet:
- Jarrod McClean is a senior staff research scientist at the Google Quantum AI Laboratory. He completed his Ph.D. in chemical physics at Harvard University in 2015 and was a Department of Energy Computational Science Graduate Fellowship recipient. After graduation, Jarrod was a Luis W. Alvarez postdoctoral fellow in computing at Lawrence Berkeley National Laboratory.

From the episode:
Jarrod discussed quantum computing applications that he finds exciting: learning from quantum data and empowering classical computers with data from quantum computers. He discussed how quantum computers could help generate data about complex natural systems, such as strongly correlated biometallic enzymes, which scientists could use in classical systems to help solve new research problems. The work has grown out of a 2021 paper in Nature Communications.
He also discussed quantum advantage: efforts to build quantum computers that can outperform classical systems or work on problems that are intractable using classical computers. Read more about recent work in this article from The Guardian and from Google researchers.
Related episodes:
- Sunita Chandrasekaran: Computation in Translation
- Prasanna Balaprakash: Predicting Earth Systems and Harnessing Swarms for Computing
- Ian Foster: Exploring and Evaluating Foundation Models
Featured image by Gerd Altmann from Pixabay
Transcript
Transcript prepared using Otter.ai with human copyediting
Sarah Webb 00:03
This is Science in Parallel, a podcast about people and projects in computational science. I’m your host, Sarah Webb. In our last season, we focused on large language models and other types of foundation models for science. With this episode, we’re starting our seventh season. We won’t leave AI behind, but we’ll focus on quantum computing, another growing and emerging area.
Sarah Webb 00:32
As with AI, researchers are matching quantum algorithms with interesting scientific problems. Add to that a range of delicate and experimental hardware. In other words, this topic is right up our alley. Please subscribe on your favorite platform so that you don’t miss any of our upcoming episodes and share them with a friend or colleague. We hope you enjoy the new season.
Sarah Webb 01:02
To launch this series, I’m speaking with Jarrod McClean, a senior staff research scientist in the Google Quantum Artificial Intelligence Laboratory. Jarrod completed his Ph.D. in chemical physics at Harvard University, and he was a Department of Energy Computational Science Graduate Fellow from 2011 to 2015. This podcast is a media outreach project of that fellowship program.
Sarah Webb 01:28
As the name of his workplace suggests, Jarrod thinks about this interface between AI and quantum computing, particularly algorithms and applications. Join us for a conversation about his work at Google, where he thinks quantum computing could soon enter the computational science workflow and the mental gymnastics of harnessing hardware that researchers are still designing.
Sarah Webb 01:54
Jarrod, it is great to have you on the podcast.
Jarrod McClean 01:57
Yeah, great to be here.
Sarah Webb 01:58
So I think where I’d like to start is for you to tell me about your job. What is it that you do? What kind of problems do you work on?
Jarrod McClean 02:07
Yeah, so I’m a research scientist at Google, and I work on sort of the algorithms and applications of quantum computers to problems in machine learning. More broadly, we try to think about both foundational advances and more practical ones. We think about where we should take different research directions. We chat with other scientists, both in academia and industry, and I personally might write code prototypes. I might work on certain theoretical results. I might be writing papers or, you know, putting together a plan for a new research direction.
Sarah Webb 02:43
So you talked about quantum machine learning, and so let’s talk a little bit about Google and quantum computing and how some of these ideas that you work on fit there.
Jarrod McClean 02:57
Yeah, absolutely. Google has broad goals in making quantum computers useful outside of just quantum science and technology, really understanding what the landscape of that looks like: say, if we had a quantum computer, what’s its potential value to things Google cares about, to things humanity, more broadly, cares about? And of course, what is the risk in doing that? How challenging will it be to realize that goal? And I think there’s a lot of maybe fundamental technology investments that Google makes. Quantum may be among the more speculative, but I think it’s one that we’ve gotten a little bit more excited about over time as we’ve hit more and more technological milestones.
Jarrod McClean 03:38
And I would say, of course, Google is very AI-centric at the moment, and we’re hopeful that quantum computing can play some role in that. And if I had to sort of characterize my own role within, you know, Google’s goals, you could kind of think of a speculative technology effort as having at least two fundamental subparts. One is the risks of that: so, you know, the costs of building it, the probability it will work out. And then the reward: you know, if it works out, what’s the payout? And I feel like I, as do the other people working on algorithms and applications, lean more toward the rewards category. We try to figure out, if our hardware team is successful, what’s going to be the upside, both in terms of new kinds of computations or applications we can discover and real value and impact to people’s day-to-day lives. And if we can really boost that reward, it helps to build a foundation for diving more aggressively into riskier and riskier technology areas.
Sarah Webb 04:37
So talk about this idea of quantum AI. What does that look like? How is that different from what most people think about in terms of AI?
Jarrod McClean 04:48
Yeah, that’s a great question. I think we get this question a lot, in particular because it’s in our group’s name as well as one of our research interests. I think quantum AI, for a lot of people, has come to mean a pretty broad umbrella of intersections between quantum technology and, you know, quantum computing and machine learning, or artificial intelligence. And I’ve seen this kind of manifest in many different ways. For example, how will quantum computers impact the machine-learning problems people are interested in today?
Jarrod McClean 05:21
Assuming we scale up our quantum computers and make them useful, it can also mean, how will quantum computers be useful in an analogous paradigm where data comes in in a quantum form, and we sort of process it directly on quantum computers? It has also come, a little bit, to mean, how will classical AI be empowered by quantum data? There’s this notion that data kind of makes different types of computation fundamentally more powerful: once you’ve seen examples, it might be much easier to solve a problem than had you not seen those examples. And we think quantum computers will probably play a role in supplying data to classical AI, and this will make the classical AI much more powerful, even without running it in tandem with the quantum device. And so I would say there are many different facets in which we now imagine quantum will touch AI, even including AI accelerating the building of the quantum computers themselves. And so we’ve lumped a lot of this into the same family, and we spend time going down each of these somewhat distinct but related threads to try to figure out where the most viable pathways are.
Sarah Webb 06:29
Can you maybe walk through how that plays out in one of these examples, either accelerating or building systems to kind of illustrate some of the things you were just talking about?
Jarrod McClean 06:40
Yeah, let me maybe zoom in on two I find particularly exciting. So one direction from this that we are very excited about is this thing I mentioned about learning from quantum data, which takes a second to explain exactly what that is. So many people are, say, familiar with this illustration of Schrödinger’s cat, where you have a cat that’s sort of both alive and dead before you open the box, and then you open it and it kind of collapses into one or the other. This bears some resemblance to the way we do experiments in many physics labs today. We prepare some system that we’re interested in studying, manipulate it a bit, and then we do a measurement and read, say, a classical number out. And this is like the cat being alive or dead. We repeat it many times, and this string of classical data sort of comes in, and we analyze it.
Jarrod McClean 07:32
So something we realized recently is that if you have a little more control over that, say, with a quantum computer, the ability to manipulate that cat in the alive-and-dead state before you collapse it into one or the other can help you reveal far more information about that physical system than if you’re forced to collapse it first. And really, the ability to do that manipulation is very much what quantum computing is trying to achieve: that we can manipulate these coherent parts of our universe without collapsing them first. So what we found, kind of roughly speaking, is if we take sort of two copies of that Schrödinger cat and use a quantum computer to look at it, even with kind of a simple scheme, that ability to manipulate things quantumly lets us learn about that, say, cat system exponentially faster than we would be able to do without this kind of technology. And so what that means is there are systems for which it might otherwise take longer than the age of the universe to measure this quantity about the system.
Jarrod McClean 08:32
So granted, it’s only in somewhat specific cases we’ve proven this for so far, and we’re trying to push it closer to, like, say, real-world applications. But that’s one vein I think we’re very excited about. There’s a few missing pieces of the technological stack we’ll need to really make that work: so, more sophisticated quantum sensors and a technology called transduction that moves the quantum signal from the sensor into the quantum computer. And all of these pieces of the pipeline need to work well enough, but I think we’re very excited about that general direction.
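For a concrete feel for the two-copy idea, here is a minimal numerical sketch. It is not the scheme Jarrod’s team uses; it is a standard textbook example of the same flavor: a joint, swap-test-style measurement on two copies of a single-qubit state reads out the state’s purity, a quantity no single fixed measurement on one copy reveals directly. The random mixed state and the singlet-projector measurement below are illustrative assumptions, not details from the episode.

```python
import numpy as np

# Minimal sketch: a joint ("swap-test"-style) measurement on TWO copies of a
# single-qubit state rho reveals its purity tr(rho^2) directly. This is a
# textbook example of the general idea discussed above, not Google's scheme.

rng = np.random.default_rng(0)

# Build a random mixed single-qubit state rho = (I + r.sigma)/2 with |r| < 1.
r = rng.uniform(-1, 1, size=3)
r *= 0.9 / np.linalg.norm(r)          # shrink so rho is a valid mixed state
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho = 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

# Two copies of the state, as a 4x4 density matrix.
two_copies = np.kron(rho, rho)

# Projector onto the antisymmetric (singlet) subspace: (I - SWAP)/2.
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
p_singlet = np.trace(two_copies @ (np.eye(4) - swap) / 2).real

# Identity: Pr[singlet outcome] = (1 - tr(rho^2)) / 2, so one joint measurement
# setting on two copies encodes the purity of rho.
purity = np.trace(rho @ rho).real
print(f"Pr[singlet]     = {p_singlet:.6f}")
print(f"(1 - purity)/2  = {(1 - purity) / 2:.6f}")  # matches the line above
```

The results Jarrod alludes to push this much further, showing that joint processing of copies can be exponentially more sample-efficient than any strategy that measures one copy at a time.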
Jarrod McClean 09:03
Another one that’s really exciting to kind of point out, that’s maybe a bit more near term, is the one I mentioned about empowering classical AI today with data from quantum computers. So there’s a set of problems that many people in the computational science world, I think, are pretty interested in, like simulation of chemical and material systems. This could include, you know, catalysis with rough surfaces. It could include, you know, strongly correlated biometallic enzymes. And you want to do these simulations and learn something about them, and from that, make design choices. So a lot of these, we think, are quite challenging to do when the systems become strongly correlated, or for certain particular reaction types. But we think quantum computers would be very good at these simulations, and this behavior, because it’s in a physical system, is somewhat regular. So we’ve started to ask a question, prompted in part by this power-of-data paper that we worked on, where we showed there were problems that, even when they were hard with just a classical computer and easy with a quantum computer, actually also became easy when the quantum computer passed some amount of data to the classical computer.
Sarah Webb 10:12
That paper was published in Nature Communications, and there’s a link to it in our show notes.
Jarrod McClean 10:18
You can imagine that I go and do my chemistry or physics simulations: I watch the atoms and molecules move around and record a bunch of data from my quantum computer. Now I turn my quantum computer off, but I have that classical database that future classical computers get to use, and now they can solve a wider set of problems than they could without that database. And so that allows a much more near-term, and I would say widespread, application of quantum computers that comes from the ability to empower AI or general classical simulation methods with data from quantum computers. And I think those are two avenues we’re very excited about.
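As a loose illustration of that workflow, the sketch below replaces the expensive quantum simulation with an ordinary stand-in function, records its outputs into a small database once, and then fits a purely classical kernel model that answers new queries without calling the expensive step again. The stand-in function, the descriptors and the model are all hypothetical placeholders, not anything taken from the paper.

```python
import numpy as np

# Toy sketch of the "power of data" workflow described above:
#   1) an expensive resource (here a stand-in function, NOT a real quantum
#      simulation) is queried a limited number of times to build a database;
#   2) a purely classical model is fit to that database;
#   3) new predictions come from the classical model alone.

rng = np.random.default_rng(1)

def expensive_quantum_simulation(x):
    """Stand-in for a quantity a quantum computer would compute, e.g. an
    energy of a strongly correlated system as a function of descriptors x."""
    return np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.5 * x[0] * x[1]

# Step 1: build the database once (imagine each row cost real quantum runtime).
X_train = rng.uniform(-1, 1, size=(200, 2))
y_train = np.array([expensive_quantum_simulation(x) for x in X_train])

# Step 2: fit a classical kernel ridge regression model on that database.
def rbf_kernel(A, B, gamma=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(K)), y_train)

# Step 3: the "quantum" resource is now switched off; predictions are classical.
X_test = rng.uniform(-1, 1, size=(5, 2))
y_pred = rbf_kernel(X_test, X_train) @ alpha
y_true = np.array([expensive_quantum_simulation(x) for x in X_test])
for p, t in zip(y_pred, y_true):
    print(f"classical model: {p:+.3f}   reference: {t:+.3f}")
```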
Sarah Webb 10:57
Let’s expand out a little bit to sort of talk about, hey, if we have these ideal quantum computers doing what we want them to do, what kinds of problems could we start to think about that were difficult or impossible for us to do previously?
Jarrod McClean 11:14
Yeah, that’s a great question, and I think there’s a lot of interesting things to say about that, especially with analogy to how we saw AI develop in the last few decades. And so maybe a way to start talking about that is to talk about at least a little bit of the progress we’ve seen experimentally. We’re very interested in achieving experimental demonstration of quantum advantage on useful problems, where I’m kind of defining useful to mean people outside of quantum computing would care about the solution, whether or not it came from a quantum computer. And so we’ve been stepping progress along toward that. You know, we have had, for many years, small systems of noisy qubits that have grown and improved in quality. And because of the noise, it’s often been hard to do simulations that surpass what we could do without quantum computers, and in recent years, that’s changed a bit.
Jarrod McClean 12:09
So we have demonstrations we believe are beyond the capabilities of classical computers, but they’re not easy to directly verify. These are sort of these random circuit sampling experiments, where you can kind of infer that you’ve entered the beyond-classical regime. But that’s not quite as satisfying as you’d like it to be. More recently, we did a demonstration of a quantumly verifiable advantage, meaning, if you and I both had quantum computers, we could both run the same problem, and if we believed in our own result, we could sort of check that your result was consistent with that. And that’s a little bit further, but the gold standard, at least for early demonstrations, is classically verifiable advantage: some problem that’s easy to verify but hard to compute. So you can challenge me with the problem, I use my quantum computer to solve it and send you the answer, and you can say, Wow, yes, they did it. And the prototypical example of this is, of course, factoring large numbers, with applications to breaking cryptography and general study of, kind of, number theory and things like that.
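The "easy to verify, hard to compute" asymmetry Jarrod describes is simple to see in code: checking a claimed factorization is one multiplication, while finding the factors, here by naive trial division as a stand-in for real classical algorithms, grows rapidly with the size of the number. The numbers below are toy-sized; cryptographic moduli run to hundreds of digits.

```python
# "Easy to verify, hard to compute": the shape of a classically verifiable
# advantage, illustrated with factoring. Toy-sized numbers only.

def verify_factorization(n, p, q):
    """Checking a claimed answer is one multiplication, fast at any size."""
    return p * q == n and p > 1 and q > 1

def factor_by_trial_division(n):
    """Finding the answer classically; cost grows roughly like sqrt(n) here,
    and even the best known classical algorithms scale superpolynomially in
    the number of digits."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

n = 1_000_003 * 999_983               # a semiprime with two ~6-digit factors
p, q = factor_by_trial_division(n)    # slow part: the "compute" side
print(verify_factorization(n, p, q))  # fast part: the "verify" side -> True
```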
Jarrod McClean 13:13
So that last one, you know, we can’t do yet, and there’s a lot of proposals for how to move the needle forward on that. There’s always been an up-and-down interest in general optimization problems. So can quantum computers find you the design variables that you want to have to optimize something, you know, maybe minimize wind resistance, or do something like that? And I think people have oscillated over time on how impactful or how good quantum computers will be at this problem. I mentioned factoring; it can have implications for breaking cryptography, or perhaps, if you wanted to view it in a more positive light, for pushing us toward better security across all of our systems, so that they won’t be vulnerable to these types of attacks. And quantum computers can also be used for the other types of learning and sensing that I mentioned before. We’re kind of hopeful that it’ll impact machine learning and AI in both this kind of quantum-data regime as well as the classical one.
Jarrod McClean 14:11
The hardware has sort of moved much faster than I might have expected, and algorithmic improvements have stacked on top of that. And so I feel like I would not be surprised if we entered at least this regime of sort of beyond-classical computations that are classically verifiable, for maybe something like a number theory type problem or early quantum simulation, by something like 2030. If you had asked me five years ago, I would have said much longer than that.
Sarah Webb 14:41
What are the big challenges that you’re thinking about right now in this work?
Jarrod McClean 14:45
So I think a lot about the intersection of how quantum is going to influence how we do machine learning and, of course, how we learn about quantum systems. And I think maybe there’s a few things worth highlighting in this space. One is this maybe analogy to how AI progressed classically. So if we kind of go a little bit back in time, to maybe the 70s or so, there were a lot of the foundational ideas appearing that are used in modern AI systems: you know, neurons, multilayer perceptrons and things like that. But computational power and data availability were a bit limited, and so people tried to bridge the gap a bit with the development of learning theory, to get a better understanding of what systems we could expect to learn. When could we expect AI to succeed? When was it kind of a fruitless endeavor?
Jarrod McClean 15:39
And I would say we’re in a somewhat similar situation now with quantum computers. We don’t have the scale, or, you know, large enough robust quantum systems, to go out and tackle the same datasets as people do classically today, and so a lot of what we have to do to make progress is work on these systems with pencil and paper. But that kind of restricts us a bit to what we can sort of rigorously show is true, and it’s really hard to, say, rigorously define what a real-world dataset looks like, and this seems to impact learning quite a bit. So if we take a step back on that analogy I mentioned to classical learning theory: in, say, the 70s, people worked on this theory of PAC learning, probably approximately correct learning, which is still, of course, used today, including in quantum. And what they found was that for a lot of problems, they would conclude that, in the worst case, they were unlearnable.
Jarrod McClean 16:34
So you started to see problem after problem where, if you read into this too seriously, then, you know, we couldn’t learn anything. But luckily, that didn’t discourage everyone. At least some people plowed forward and continued to build these systems further and further along. And now we see, of course, incredible effectiveness of AI in tasks like image recognition, text generation, even coding now, all things that, if you’d really listened too closely to your pencil-and-paper results, you might not have even attempted. And I hope we don’t get too much into that situation with quantum computing, because the way we develop applications right now is we do try to do a lot of pencil-and-paper proofs and compare them to similar scaling analyses that we do classically. And I think it’s nice that we have some problems where we suspect there’s going to be a robust separation that quantum can do well on.
Jarrod McClean 17:24
But I think we shouldn’t be too discouraged before we can at least do some of the same empirical explorations that we’ve done classically, which are required to really see the capabilities of these systems. And so I work a little bit on both the pencil-and-paper aspect of that, because, you know, you have to do what you can with the tools you have.
Jarrod McClean 17:43
But I also try to prepare for the times of, say, early fault tolerance, as we call it, so, early error-corrected quantum computers, and just beyond that, so that we can enable the empirical explorations we might need to get to larger datasets and see emergent phenomena where quantum is doing much better than we might have predicted by pencil and paper. Maybe we don’t understand yet why, but we can still use that tool effectively.
Sarah Webb 18:08
That seems really exciting to think about. I mean, it’s almost a little science fiction to me to be thinking about, Okay, what might I be able to do with a system I don’t have yet? Do you ever, like, you know, open your email and hear from a colleague and go, Oh my gosh, I can suddenly do something I couldn’t do yesterday or last week or last month?
Jarrod McClean 18:27
Yeah, I feel like we have had a lot of that push and pull over the years. Things have sort of shifted in the field a bit these days in terms of beliefs about what the best pathways are to reaching useful applications. Not too long ago, I was thinking about, how do you make the devices we have right now get as close to this quantum-classical boundary as possible? And so there really would be a back-and-forth day to day of, Oh, these gate fidelities got better; these qubit lifetimes got longer.
Jarrod McClean 18:57
And I think some of the work I’ve done that’s had the most impact or taken off has been things really inspired by the true constraints you have in order to try to run something. I would say, more recently, after many years of trying this for smaller-size systems, people have started to think, well, maybe we do really need error correction for this to work. And so a lot of our metrics have become more driven by how they impact our error-correction success, and a lot of our architectural designs are similarly impacted. But I think in that domain, now and previously, we have a much more aggressive back-and-forth: someone prototypes a new error-correcting code they might think has better properties, but it’s missing some component or aspect of the device that would really let it happen.
Jarrod McClean 19:45
And so then there’s a back-and-forth of, can we make this possible? You know, I don’t think so; move on to another code. But then maybe a month or two later, they come back and say, I thought about it some more, and I think we can do that, so maybe this is a space we should go into. And rapid developments like that between the hardware team and sort of the error-correction side can really change the trajectory of what I, an algorithm designer, might expect to have in two, four, you know, six years, in terms of not just raw physical qubits on the device, but early logical qubits and protected operations. And so it impacts the budget as we go out and say, what are some of the first computations we’re going to run? How do we think about how to use that budget most effectively?
Jarrod McClean 20:30
And, you know, that will even change day to day at that abstract level of, you know, I think we’ll have 10,000 logical qubits with a million operations, and maybe a few weeks later they’ll say, actually, maybe it could be 100,000 by that date. So it’s a very fluid space, and we try to adapt to the resources we think we’ll have and get the most out of them, once again, by pencil and paper, which is a bit of a strange way to develop. Anyone listening who’s done a lot of computational science might even ask, How do you know your programs are going to compile or be correct if you’ve never actually run them? Which is a fair question. We’re going to have to work that out.
Sarah Webb 21:07
Logical qubits group together multiple physical qubits made from superconducting circuits, trapped ions or other technologies. Logical qubits help to manage unique challenges in these systems, allowing them to store information long enough for a computation to be completed. Quantum systems are fragile, and jostling from the environment can change their physical states and make them lose information. Quantum error correction strategies rely on spreading information out over multiple qubits. How many physical qubits are within a logical qubit remains an open and evolving question. The answer depends on many factors, including whether researchers are using superconducting qubits, like Google does, or other platforms. Jarrod discussed this in detail, and we’ll share that as a bonus mini-episode. But here I wanted to get a rough idea of how large quantum systems need to be to do useful computations, at least right now.
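For intuition about spreading information out over multiple qubits, here is a sketch of the classical analogue: a three-bit repetition code with majority-vote decoding. It sidesteps everything that makes quantum error correction genuinely harder, such as reading out error syndromes without collapsing the encoded state, but it shows the core effect: below a threshold error rate, the encoded bit fails far less often than a single unprotected bit.

```python
import numpy as np

# Classical analogy for quantum error correction: a 3-bit repetition code.
# Real quantum codes (e.g. the surface code used on superconducting chips)
# are far more involved, but the core idea is the same: redundancy plus a
# decoder pushes the logical error rate below the physical one, provided
# the physical error rate is under a threshold.

rng = np.random.default_rng(2)
n_trials = 200_000

for p in [0.01, 0.05, 0.20]:                 # physical bit-flip probability
    flips = rng.random((n_trials, 3)) < p    # independent errors on 3 copies
    # Majority-vote decoding fails only if 2 or 3 of the copies flipped.
    logical_errors = flips.sum(axis=1) >= 2
    observed = logical_errors.mean()
    predicted = 3 * p**2 * (1 - p) + p**3    # analytic logical error rate
    print(f"p = {p:.2f}: single bit fails {p:.3f}, "
          f"encoded bit fails {observed:.4f} (theory {predicted:.4f})")
```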
Sarah Webb 22:12
One of the things that I’m trying to imagine in this space is exactly what quantum computers are going to look like as they emerge and become practical devices. I mean, are they going to be freestanding quantum computers that go off and do their quantum thing and then you take an output and use it? Are they going to be linked in as a part of supercomputers, HPC systems? Obviously, people are working on all of those things. As someone working in the field, how do you see things evolving in terms of what the first practical systems might look like?
Jarrod McClean 22:54
Yeah, that’s a great question. It’s going to be interesting to see how these devices emerge and what roles they play in the near, mid and long term. And I think we’re seeing bets by different companies and academic groups on them emerging in basically any form we could have conceived by analogy to how we do computing today. Of course, it’s easiest to build these in-house, because they often require somewhat bespoke and custom equipment to maintain: control systems, cryogenics, etc. It’s easy to kind of have everything in-house and have it all be part of a research process with, you know, different connections.
Jarrod McClean 23:39
And so that makes it tempting to say, I’m only going to keep them in-house. And maybe that mimics very closely how we see cloud architectures today, where, for a complicated or large or often-changing system, it makes more sense to have the people who built it always next to it and modifying it, and then users can log in and use it. So I’d say the cloud model, where access is via sort of classical controls and you submit programs, run them and get some results out, has been the most popular way they’ve been deployed, say, outside the groups that build them.
Jarrod McClean 24:14
I think other groups have indeed sold devices that then sit on, say, the customer’s property, kind of like the Department of Energy does with supercomputing centers, where you might have facilities set up to house someone else’s device. And we’ve certainly seen some of that so far. We’ve seen people investing in quantum networking, meaning, instead of just taking classical program specifications, I want to be able to link to distant quantum computers with something that is also quantum, which nominally kind of creates one larger quantum computer, or creates opportunity for communication-type problems.
Jarrod McClean 24:50
And so we’ve seen all of these coming out sort of at the same time, probably in part because we have some experience in what’s been successful classically. And we think that maybe quantum will repeat that success in a way, but I think we’re really going to have to wait a bit to see how people start using them for the real applications, to know what the dominant architectures will be, and perhaps it will be a mix of all of these.
Jarrod McClean 25:15
I personally think there have to be benefits to these kinds of quantum communication-type applications, even though they’re harder to implement with certain architectures. In certain ways it’s like the internet: we didn’t really predict the way it would take off, but fundamentally, just connecting people and connecting technology seems to pay long-term dividends. So I’m hopeful that that’s going to be an area where there’s a lot of success. I think in the near term, the challenge of maintaining these expert systems means they will predominantly live in the places they’re developed and be connected to remotely and classically. But I think we could see that shift quickly if, for example, someone found an application that was very clear and very valuable but required two distant networked quantum computers. That would add a lot of fuel to the fire to push in that direction, because I don’t think there’s anything preventing that.
Jarrod McClean 26:08
We’re guided more by the guiding star of, where is there likely to be the biggest application impact and value? Let’s shoot for that first and worry about the other possibilities later. But that could all shift overnight, because, you know, you never know what people are going to invent tomorrow, I suppose.
Sarah Webb 26:26
Is there anything else that you’re particularly excited about in this space that you think is important to let people know about?
Jarrod McClean 26:32
I guess maybe one thing I’ll say along the lines of the speculative direction is that maybe we need the technology to know what it’s good for. You know, I don’t think we would build the technology if we didn’t have a few good ideas of what it might be good for, but I’m excited to be open to the possibility of things that we have not anticipated yet.
Jarrod McClean 26:53
So, of course, there are some often-repeated quotes from the inventor of the laser, and I don’t know how true they are, but they go something like: you know, this is a toy; I have no idea what this will be useful for. And yet, here we are, some decades later, and lasers are everywhere. They’re an incredibly useful tool, and it was very hard to predict what their utility would be. I feel like quantum has to sort of sit in this realm for maybe a very basic or naive reason, which is: It’s clear to us now, after many decades of experimental science, that the universe is sort of fundamentally quantum, and if we want to continue to interact with it at deeper and deeper levels, have finer and finer control and build more and more technology, then our ability to manipulate things at a quantum level has to be a crucial part of this.
Jarrod McClean 27:41
And the practice of building a quantum computer is the practice of gaining finer and finer control over the quantum aspects of our universe. And I just feel, to my core, that this is going to unlock things we couldn’t have predicted in our ability to sort of manipulate the physical world. You know, we view quantum computing right now very much through the lens of what we’ve used classical computers for, but I think there’s going to be so much more, and I’m excited to get there. If I knew exactly what it was, I would tell you, but maybe that’s part of the fun.
Sarah Webb 28:12
If you were sitting in a room with early career researchers right now, either computational scientists more broadly, or people interested in quantum computing, what piece of advice would you want to pass along?
Jarrod McClean 28:23
Yeah, I guess maybe this would follow my own career trajectory a bit, but quantum computing is a speculative but exciting area, and it’s one where things are changing all the time, overnight. And so I think it’s worth it to, you know, stay curious and be flexible in the topics you work on and the directions you feel like you could take. And, you know, it’s sometimes tough to predict, I think, what work you do will take off. So the best you can do is to kind of do good work on any topic that you’re interested in and definitely put effort into sharing the work that you do.
Jarrod McClean 29:03
I think I heard it once said that the recipe for success is simple, but it’s crucial to do both parts of it: do cool stuff and tell people about it. Without either side of that, it doesn’t work. But, yeah, it’s a simple recipe, I guess. And maybe more targeted advice these days for young computational scientists or people entering the field, advice that’s not quantum-specific, and this is diving a little bit away from quantum into AI: I don’t think I’ve been alive for a period of more rapid progress than I’ve seen in the last six to 12 months in the way AI tools interact with research. Not only are people playing with them as toys, but they’re now entering serious researchers’ pipelines in a big way, and people who are using these tools are becoming 10 or 100 times more productive.
Jarrod McClean 29:56
And I think if you want to succeed: these tools have limitations, but they’re evolving all the time, and you need to understand what their trajectory is and how that’s going to impact your future, career-wise, research-wise, and, you know, in general life. I think these are coming in a big way. They’re entering our quantum research pipelines. We’re often thinking about how they’re going to impact mathematical proofs and program development, and how quantum is going to feed back into that. I don’t know, I’ve never seen anything move so quickly, and I’ve never been so blown away. So, yeah, stay on top of that, especially if you’re early in your career.
Sarah Webb 30:33
Jarrod, thank you so much for taking the time to tell me about all the cool stuff.
Jarrod McClean 30:40
Absolutely. Thanks for having me.
Sarah Webb 30:45
To learn more about Jarrod McClean and his research, and for additional information about superconducting qubits and Google’s Quantum AI Laboratory, check out our show notes at scienceinparallel.org. Science in Parallel is produced by the Krell Institute and is a media project of the Department of Energy Computational Science Graduate Fellowship program. Any opinions expressed are those of the speaker and not those of their employers, the Krell Institute or the U.S. Department of Energy. Our music is by Steve O’Reilly. This episode was written and produced by Sarah Webb and edited by Susan Valot.
