Quantum computing involves collaboration and interdisciplinarity: the meeting of minds from different perspectives to solve problems where their expertise overlaps. This episode does a version of that with audio, bringing together insider insights from four quantum researchers across industry, academia and the national labs. They discuss research areas including fundamental quantum mechanics, algorithms and calibration, as well as the human and network connections that will be needed to build utility-scale quantum computers.
All four guests are alumni of the Department of Energy Computational Science Graduate Fellowship program, which supports this podcast.
You’ll meet (clockwise from upper left):
- Jacob Bringewatt: Assistant Professor of Physics at the U.S. Naval Academy
- Grace Johnson: Senior Product Manager, NVIDIA
- Alicia Magann: Senior Member of Technical Staff, Sandia National Laboratories
- Dylan Sim: Senior Quantum Applications Architect, PsiQuantum

From the episode:
Jacob discussed quantum sensing and its connection to physics experiments such as LIGO, which observes gravitational waves, and HAYSTAC, which searches for cold dark matter.
Dylan talked about T-gates, Toffoli gates and resource estimates for quantum algorithms, and highlighted the work of Google, Quantinuum and others on quantum error correction. Dylan also mentioned the DARPA Quantum Benchmarking Initiative, a U.S. government program to quantitatively measure progress in the computational challenges toward building utility-scale quantum computers.
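As a toy illustration of the kind of resource estimate Dylan describes (the gate list and costs below are simplified assumptions for this sketch, not PsiQuantum's actual methodology), one simple metric is a circuit's T-count, using the standard Clifford+T decomposition in which one Toffoli gate costs seven T-gates:

```python
# Toy T-count tally for a circuit given as a list of gate names.
# Assumes the standard Clifford+T decomposition: one Toffoli costs 7 T-gates;
# Clifford gates (H, CNOT, S, ...) contribute no T-gates.
T_COST = {"T": 1, "Tdg": 1, "Toffoli": 7}

def t_count(circuit):
    """Return the total T-gate count of a circuit (list of gate names)."""
    return sum(T_COST.get(gate, 0) for gate in circuit)

# A small example circuit: two Toffolis, one explicit T and some Cliffords.
example = ["H", "CNOT", "Toffoli", "T", "Toffoli", "S"]
print(t_count(example))  # 15
```

Real resource estimates layer routing, scheduling and error-correction overheads on top of counts like this, which is why they can reach millions to billions of T-gates.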
Grace discussed the need for low-latency connections between the quantum and classical parts of a hybrid supercomputer. She also mentioned Craig Gidney’s estimates of the number of qubits required for Shor’s algorithm. Gidney is a researcher with Google Quantum AI.
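For a rough sense of where physical-qubit estimates like Gidney’s come from (this is a back-of-the-envelope sketch with assumed numbers, not his actual model), the surface code needs on the order of 2d² physical qubits per logical qubit at code distance d, so physical-qubit counts balloon quickly:

```python
# Back-of-the-envelope surface-code overhead (illustrative numbers only).
# Assumes roughly 2 * d**2 physical qubits per logical qubit at distance d,
# ignoring routing, magic-state factories and other real-world overheads.
def physical_qubits(logical_qubits, distance):
    return logical_qubits * 2 * distance ** 2

# e.g. a few thousand logical qubits at a range of modest code distances:
for d in (15, 21, 27):
    print(d, physical_qubits(2000, d))
```

Even under these optimistic assumptions, a few thousand logical qubits already demand roughly a million physical qubits, which is why bringing down both hardware overheads and algorithmic resource costs matters.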
Alicia described Sandia’s quantum work as in a “golden age.” She works with Sandia’s Quantum Performance Laboratory and Quantum Algorithms and Applications Collaboratory.
Additional reading and listening:
Scott Aaronson: How Much Structure is Needed for Huge Quantum Speedups?
Recent work from Dylan and PsiQuantum: They compiled a circuit for simulating quantum chemistry on PsiQuantum’s active volume architecture, and a quantum circuit for state preparation, a common subroutine in fault-tolerant quantum algorithms.
CUDA-Q is an NVIDIA programming platform and a quantum analog to CUDA. CUDA facilitates programming in C++ and Python across hybrid classical systems that include CPUs and GPUs; CUDA-Q extends that to include quantum processing units (QPUs). You can also hear more about NVIDIA’s quantum work in our recent interview with Grace’s colleague Sam Stanwyck.
Transcript
Transcript prepared with otter.ai and human copyediting.
Sarah Webb 00:03
On the Science in Parallel podcast, we’ve been exploring technical challenges to building general-use quantum computers. We’ve also discussed the partnerships across academia, industry and the national laboratories that are translating basic research into quantum computing platforms.
Sarah Webb 00:24
I’m your host, Sarah Webb, and this episode is a roundtable discussion that brings together experts from all those research sectors simultaneously. You’ll hear from Alicia Magann, Jacob Bringewatt, Dylan Sim and Grace Johnson. They all completed their Ph.D.s within the last six years, and all are alumni of the Department of Energy Computational Science Graduate Fellowship program, which supports this podcast. Their conversation highlights the broad and deep expertise needed to build quantum computers, the hard problems to be solved and the opportunities to collaborate on tomorrow’s technology.
Alicia Magann 01:14
I am Alicia Magann. I’m a staff scientist at Sandia National Labs working on quantum computing.
Jacob Bringewatt 01:20
I am Jacob Bringewatt. I am an assistant professor of physics at the United States Naval Academy, also working on quantum computing and quantum sensing.
Dylan Sim 01:30
Hi, my name is Dylan. I’m currently a senior quantum applications architect at a quantum computing startup called PsiQuantum. I’ve been working here for the past four years, really focusing on developing quantum algorithms specifically for fault-tolerant quantum computers.
Grace Johnson 01:33
Hi everyone. I’m Grace Johnson. I’m a product manager at NVIDIA, working on developing quantum computing platforms so you can bring sort of classical and quantum computing together.
Sarah Webb 01:59
It’s great to have you all on Science in Parallel. How do you see your roles in quantum computing? What are you contributing to the field?
Jacob Bringewatt 02:08
So I’m a physicist by training, and I’m a theorist. Honestly, I might be unique here, especially with some folks working in industry, in that in some sense, I don’t particularly care if we manage to build a quantum computer or not. I think it’s going to be super interesting either way. I’m particularly interested in questions about how information theory and computation allow us to think about quantum mechanics. So, you know, obviously, if we build these devices, it’s going to be fantastic, and we can use them for all sorts of interesting science and quantum simulation, etc. But mostly I view my role as: I’m just curious. And I’m curious about this very interdisciplinary way of thinking about physics and quantum mechanics, and whatever we can learn along the way, I think, is super exciting.
Dylan Sim 02:51
Yeah, I guess part of my work is trying to build a tool for folks like Jacob. At least one part of the quantum computing platform I work on is the quantum algorithms portion of the stack. And so that means that once we have a particular problem or application that is well-defined, perhaps from folks like Jacob, we think about which algorithms, out of the existing or new quantum algorithms that we can build, can be stitched together to tackle this problem. And for my role, specifically, once we choose a set of particular algorithms, it’s not enough to just talk about these quantum algorithms, but actually to instantiate their corresponding quantum circuits. That essentially is a prescription for the operations that we need to execute on a quantum computer. And so part of my work is really specifically in cases where these algorithms may be quite new. These quantum algorithms are often formulated in terms of these black-box oracles, or essentially high-level operations, that actually need to be compiled down into explicit gate-level operations. So I kind of work on being somewhat of a human compiler for a quantum computer and evaluating how good these circuits are by doing what we call quantum resource estimates.
Grace Johnson 04:11
Yeah, I can hop in here. So what’s really interesting is, like the field of quantum computing has made so many advances in the last, you know, decades, and it’s kind of moved from sort of physics experiment into kind of real-world industry trying to figure out how to scale. And with that kind of comes questions of, how do you program it, right? And quantum computers, at least, kind of, the way I see them is they’re not standalone computers. They’re never going to replace classical computers. It’s a device that can accelerate very specific applications. And kind of like Dylan was saying, there’s new algorithms that people are coming up with and new applications that people are looking into, but at least sort of the way I see it at the end of the day, it’s going to have to be working in concert with classical computing. And so with that comes a lot of questions of programmability. And how do you split your tasks in an intelligent way amongst that system? How do you, physically and with the hardware and the software kind of build and operate that? And so that’s what I think is really interesting about the work that I’m doing, is sort of thinking, you know, what are the tools that you build to program and run one of these things?
Alicia Magann 05:19
All right, I’ll jump in. So my role as a scientist at Sandia is to do research. There’s been a huge amount of progress in quantum computing in the last few years, but it is still very much an emerging technology. It hasn’t fully emerged yet, and so there are a lot of areas where more research is needed to keep driving that progress. So I personally work in different areas, kind of under this broader umbrella of quantum computing. So I work on the development of scalable and platform-agnostic calibration and control frameworks for quantum computers. And then I also have some work on the design and analysis of algorithms and applications for quantum computers.
Sarah Webb 06:14
What quantum innovations are you most excited about right now?
Jacob Bringewatt 06:20
Personally, I’m a theorist, and again, I’m very interested in sort of information theory and computer science related problems, and how they help us think about physics. I think this is kind of what we can do now, and it has been the state of this field for, you know, several decades. We’re very interested in these ideas of, you know, how things like computational complexity theory help me think about quantum mechanics. These days, I’m super interested in thinking about pseudo-resource states of various sorts, so things like pseudo-entanglement and pseudo-random quantum states, which is this idea of taking some property that we have in quantum mechanics and asking what it looks like to a computationally bounded observer, right? So this leads to a lot of just interesting new algorithms, but also, I think it’s very interesting from a physics perspective, this idea that there are properties that states can have such that, with limited computational power, I can’t tell whether they have that property or not. And I think this is a super interesting idea.
Jacob Bringewatt 07:19
So, on a personal level, I’m really interested in that. I have recently been thinking a lot about connections between randomness and quantum sensing, which I think goes back to the more quantum technology side. I think sort of the two things that I’m most excited about nearer term are sensing, which, you know, isn’t a full-scale quantum computation, but this is being used now in experiments like LIGO or HAYSTAC to detect gravitational waves or to try to detect dark matter. So I think sensing is kind of a nice bridging quantum technology, and then increasingly, as we start, you know, adding compute to the sensing tasks, we are in this emerging regime where we’re kind of moving beyond NISQ.
Jacob Bringewatt 07:59
And I think sort of on the theory side of things we’re maybe a little bit more pessimistic about thinking about near-term applications of quantum algorithms with, you know, a few hundred noisy qubits. And everyone else mentioned scalability, right? I think this is the super exciting thing right now, is how do we scale, and what are the applications that we can reach as we scale? Is that a few-year thing, or is that a 10- or 15-year thing? And I’m honestly, really excited to hear what other folks have to say about that, like, what actually do we think is the horizon for demonstrating things like quantum simulation? There’s been all these amazing error-correction demonstrations over the past few years, right? I think this is super exciting, and how quickly we can get to the point where we’re in this early fault-tolerant regime where we have maybe hundreds or thousands of logical qubits is super exciting.
Dylan Sim 08:47
Just riding off of what Jacob was saying in terms of scalability and excitement and error correction, I kind of echo that sentiment, even though I’m not as well-versed in quantum error correction and fault tolerance. I think the demonstrations and innovations that they’ve been kind of putting out there more recently have been very exciting to me, at least, because this means all the circuits that I’ve been kind of working on and trying to optimize to run on fault-tolerant quantum computers are closer to being actually realized. And, like Jacob mentioned, because these circuits are allowed to be now much deeper and more resource-intensive than some of these near-term quantum algorithms, we could exploit a lot of interesting structures and algorithms to potentially find some quantum speedups.
Dylan Sim 09:35
Another thing that I find very interesting, that is a little bit more specific to what I do, is I always find very clever circuit identities and compilations of quantum algorithms very interesting. I don’t see us being too different from machine learning, or I guess machine learning was sort of based on, or trying to mimic, how we learn. So I see my job as trying to look at more circuits, work with more circuit fragments, as well as read more papers on different compilations that people have come up with, in order to kind of train ourselves to be better human compilers. And of course, in the future, we would like to automate a lot of these processes, especially using high-performance compute that perhaps Grace is more familiar with. But these papers that are coming out in terms of circuit compilations, I believe, are a step toward building a powerful compiler for quantum computers, kind of paralleling what has happened for classical computers.
Grace Johnson 10:31
Yeah. This is kind of going off of what Jake was saying earlier, but here’s what I find really interesting, kind of thinking from the applications level. So my Ph.D. was in quantum chemistry. So, you know, not kind of a pure quantum computing background here, but we have this sense, you know, from physics and chemistry, that problems that are actually interesting for us to solve have structure in some way, right? And so, you know, maybe the work that Jake is doing is trying to sort of characterize that structure or entanglement in some way, or some, you know, randomness, and have some kind of theoretical understanding. And what I think is really cool is the idea of sort of being able to take that structure and somehow map it efficiently onto the hardware that we have, right, be that quantum computers (QPUs) or CPUs or GPUs or LPUs or whatever, this zoo of heterogeneous compute. And I think that’s really interesting. And I think that’s kind of the way that we’re going to have to go, sort of the, you know, end-of-Moore’s-Law idea, right? So being able to learn that mapping from the problem structure onto the right kind of hardware for the job. I think there’s just so much to be explored there, and I’m really looking forward to that.
Alicia Magann 11:49
Yeah, so I can kind of piggyback off of some of these other answers, but this is a tough question. So what excites me? I feel so excited about so many things, but I was thinking back. And when I came into the field, during my Ph.D., I did research on quantum control. And when I started doing that, I did it because I thought it was so cool. I thought it was interesting. I was really curious about it. And that was enough. So it didn’t matter at that time if there was a big killer app or technology at the end of the road, but these days, it feels totally different. So I continue to be, I guess, interested and curious about quantum science and quantum research questions. But also there is this question of, you know, are we going to continue along this path, and is there going to be this utility scale quantum era at the end of it? And I think that’s extremely exciting. So these days, I do point a lot of my work in a way that tries to support the advancement and eventual realization of utility-scale quantum computing. It’s wonderful because, you know, not only is there that kind of application driver there that feels exciting, but there’s still that curiosity driver as well. I mean, all of this is so exciting.
Jacob Bringewatt 13:11
I think you point to this very interesting, I mean, honestly, tension in the field. But partially what makes me enjoy it so much is, you know, on one hand, we have very basic, curiosity-driven science, right, which, you know, is my personal inclination. But on the other hand, right, we have this injection of both ideas and resources, really, because of the technology benefits as well, right? And yet, I don’t think we can really predict very well what’s going to happen. And trying to strike this balance locally, a balance that I don’t really know if I always nail, but I think it’s a very interesting tension that makes for good work.
Alicia Magann 13:48
Honestly, yeah, absolutely, I feel the same way.
Sarah Webb 13:51
A quick recap that I’m speaking with four researchers working on quantum computing. Jacob Bringewatt, who teaches at the U.S. Naval Academy, Dylan Sim, who works at a startup company PsiQuantum, Alicia Magann of Sandia National Laboratories and Grace Johnson of NVIDIA.
Sarah Webb 14:16
What do you think is most misunderstood about quantum computing right now? What is overhyped and what do you think is underappreciated?
Jacob Bringewatt 14:29
So I would love for people to jump in and contradict me a little bit on this, but I’ll go ahead and elaborate a little on my previous comment, just because I think it is relevant. I mean, honestly, I do think what is mainly misunderstood is, in my mind, how far we are from actually practically useful applications of this technology. Like, it’s not particularly useful yet, and we’re trying things like optimization problems or sort of proof-of-principle demonstrations for applications to quantum chemistry, say. But as far as I can tell, we’re still orders of magnitude away from having realistic applications that are practically relevant outside of just, you know, basic science research. And I think sometimes the hype of the field obscures this. Now I am a theorist, and I’m not working on the experimental platforms, and I know that I’m sort of on the pessimistic side of the community about the timescales that are necessary.
Jacob Bringewatt 15:24
But it is exactly this tension, right? We have a technology goal that we want to achieve, and I think that should have investment and time. But then I think the analogy is often made to the early days of AI, right? You know, AI as a field goes back decades and, you know, sort of had this tip-over point in the past few years, right, where suddenly it’s become a useful, or at least broadly useful, thing, right? And people are very aware of it, and it seems to me that we’re still pretty far from that for quantum computing. On the other hand, what I think sometimes gets missed in the broader conversation is that this is how technological development happens, right? We should still pursue these things, even though they’re on larger timescales than maybe we would like, or than is sort of portrayed in the media.
Jacob Bringewatt 16:11
I think that interesting things will come out of playing around with this anyway and trying to push the technology forward. Right? This requires industrial help, as well as academic research, right? And navigating this tension, I think, is well worth it. I mean, progress has been extremely rapid: looking back to the start of my Ph.D., we had a handful of qubits, and now we’re at hundreds of qubits. And yeah, so I think it’s worth continuing to navigate this tension, but I do wish the broader public was more aware of the extent to which this is a tension. But yeah, I’d love for people to push back on my pessimism a little bit about the technology side of things.
Dylan Sim 16:44
Yeah, not so much a pushback. But I definitely understand where you’re coming from, and I often can also feel like that. But like you said, Jacob, I’m also coming kind of from the theorist side, or the application-developer side, where currently I design quantum algorithms for fault-tolerant quantum computers, and I often see resource estimates telling me I need millions to billions of T-gates, which are kind of a proxy for the time-cost of quantum algorithms. And thinking about that, I become kind of panicked about, well, will this ever be able to be executed? Will the device actually come to fruition?
Dylan Sim 17:25
But then as soon as I kind of leave my bubble for a little bit and talk to some of the architecture scientists, or even the hardware scientists, both inside and outside quantum, and read papers and meet hardware engineers from other companies at conferences, the things that they actually were able to demonstrate, like the quantum error-correction experiments at different places like Google and Quantinuum, are very impressive. And I definitely maybe didn’t expect those to be demonstrated quite so soon. And so even though these are, of course, different from fully fleshed-out fault-tolerant quantum computers, they definitely give me kind of more hope: perhaps it’s not a question of if we can run these quantum chemistry circuits using millions and billions of T-gates, but more a when question. Of course, we might want to be more realistic about the when. But I think we as a broader community seem to be kind of stepping in the right direction at, I’d say, a pretty good rate.
Alicia Magann 18:26
I similarly feel hopeful, Dylan, but also skeptical, like you do, Jacob. I think there is a long way to go. And what I wanted to kind of chime in with is this: When you say we’re here today and we need to get to a place where we can run things at so much bigger of a scale, right, then it sounds like, okay, we need to keep scaling hardware, and it’s really an experimental and an engineering challenge to get the hardware to the point where we can use it for these applications. But I think something that might be underappreciated is that there are also opportunities to work on the applications, on the algorithms, and sort of approach this from both sides, in a way where we are scaling the hardware, but also working to bring down the costs of doing meaningful calculations on quantum computers, right?
Alicia Magann 19:22
So working to bring down the resource estimates and sort of trying to meet in the middle, right, having the, you know, hardware and the applications come together in a place that will hopefully someday be reached. And so I think that’s maybe another perspective. And then, kind of building on that challenge of bringing down the resource cost of solving applications: I think something kind of cool that’s happened there is that when people start trying to work through and cost out what it’s going to take to solve some problem on a quantum computer, that can motivate classical approaches to solving that problem, too. And sometimes there are advances on the classical side, and it inspires this competition between quantum and classical algorithms. And I think that’s something that’s kind of cool, right? I think both the quantum and the classical communities benefit from thinking in that way. I think that there is a long way to go, and I think that there are a lot of ways that theorists can contribute to bringing down some of the costs or the resources that we’re going to need.
Grace Johnson 20:27
Yeah, I think Alicia touched on something really important there, which is that convergence, right? The improvement in the hardware, but then also the improvement in the algorithms. And you also said something really interesting about this kind of classical-versus-quantum dynamic, you know: the improvements in quantum algorithms will drive the tensor-network folks to come up with a new scheme to simulate something. And that’s been sort of the MO for a while. But what I think is really interesting, that seems like it’s shifting now but maybe is still a little bit misunderstood, is that I don’t think it’s quantum versus classical. It’s classical, and then add on quantum, right? Like, you think about what it means to have quantum advantage. Well, to do that, you basically need to show that if you add a quantum computer to your best classical approximate solver on your best classical hardware, you actually get an advantage by adding the quantum computer.
Grace Johnson 21:27
So the way I sort of think about it is quite holistically, which is: what can you get if you bring all of the tools to bear on our most interesting physics problems, or what have you? So I think this notion of classical plus quantum, instead of classical versus quantum, is something that is, in my opinion, a bit misunderstood, but hopefully changing in the next few years. And I think I’ve been on sort of the HPC side of things, as well as quantum computing, for a while now. And a few years ago, the thought of having a QPU as part of some sort of HPC center was kind of foreign because, you know, they were mostly experimental devices, or, you know, people were just trying to build the darn thing. But in the last two years, I would say, people are taking notice. Like, you go to supercomputing conferences, and a third of it is about quantum computing. So I think there is a big shift, and people are sort of starting to realize that these systems are going to work together.
Sarah Webb 22:28
What challenges need to be solved before quantum systems are a routine part of computational workflows?
Grace Johnson 22:36
Yeah, maybe I can start us off, just because I got us on this path. I mean, there are so many questions to be answered, right? But we’re going to need a very kind of low-latency connection, especially for error correction. So when you do error correction, you have, you know, measurement syndromes that you take off of the quantum computer, and then you have to solve some decoding problem, which is done classically. And especially for modalities like superconducting qubits, which run very fast, you need to have very low latency. So that necessitates, like, a colocated type of thing. But then there’s kind of a question of, like, okay, if you move beyond the sort of calibration, control and error-correction processes that you need to do classically, what does the coupling look like, kind of, on the higher-level application side? You know, what are the latencies there?
Grace Johnson 23:29
And I think there’s just so much work to be done, kind of, again, coming from this top-down design of: I need to understand the application, you know, the physics problem I’m trying to solve. I need to understand how it breaks down, and that kind of informs the hardware. So I think that’s a big part of it. And another piece is, again, you know, kind of getting back to scaling. I mean, it’s great that we’re pulling down our resource estimates, right? Like, Craig Gidney went from millions of qubits to now a million qubits for Shor’s algorithm, probably going to be fewer than that. But okay, we still need to, say, have a machine with a million physical qubits. You know, does that look like modules that we’re somehow kind of doing quantum communication between? How does that scale? What does that look like in terms of colocation with classical HPC? So there are still so many problems to be solved there, I think. But if I were to imagine it, it’s going to be kind of a big data center. You’re going to have to have a lot of infrastructure to deal with the cooling. You’re going to have a lot of HPC sitting kind of right there next to the QPUs, and somehow figure out how to scale up and potentially network, you know, QPUs together.
Jacob Bringewatt 24:40
If I had to predict, I mean, I’m a theorist, right? So I wouldn’t begin to predict what hardware is going to win out. I think given these orders of magnitude between what we have now and what we need, which, again, of course, also requires algorithmic development to come from both sides, I think it’s really hard to predict what exactly the hardware will be. Right? It does seem likely to me that Grace is correct, right? That, you know, really this QPU paradigm seems likely to be what we will see, right? It’s part of some sort of HPC system.
Jacob Bringewatt 25:12
Now, of course, there may be other applications that we haven’t yet predicted, but generally speaking, right, the main things that these devices are going to be useful for, that we know of, are quantum simulation, or simulation for scientific computing, which is, you know, nice for this group, and factoring, right? These are really the killer applications that we’re aware of. And while I’m sure there will be further developments, these are pretty bespoke things. And I think even if we do come up with other cases, they’ll also be pretty bespoke, right? This isn’t just a general-purpose computing tool. It’s for very particular problems with very particular types of structure. And so I think at a very high level, it will have to be sort of, you know, you farm off certain pieces of your computation to a “QPU,” quote, unquote, and it certainly will require heavy integration with classical computing. That, I think, is an extremely safe bet. As for the hardware? Who knows?
Dylan Sim 26:06
Yeah, I was going to say it’s maybe such a safe bet because I think we’re already using so much classical compute in quantum computing research, in all parts of kind of the stack. I think Grace mentioned this decoding problem, but also at the level of quantum-applications or quantum-circuit development, we’re already writing all of these kind of quantum programs, if you will, using classical computers. And from the last question, I mentioned that we’re going to work with many, many quantum operations like T-gates, and in order to optimize or compile these circuits such that they’re suitable for executing on your specific hardware, this is going to require a lot of high-performance compute.
Dylan Sim 26:52
And I think for the first practical systems, something that I really find important and interesting is this idea of validating and de-risking some of these quantum-classical computational workflows. It’s a very challenging problem because quantum algorithms, by design, especially fault-tolerant quantum algorithms, are designed to be intractable to simulate using, for example, a state-vector simulator on a classical computer. And yet, when we’re designing early QPU experiments, we want to make sure that we validate these applications as much as possible, as best as we can, because any sort of runtime on a QPU is going to be very, very expensive in every sense of that word. And so I think classical compute will be very important in both compilation as well as validation of different parts of the stack.
Alicia Magann 27:48
Yeah, I love that perspective, Dylan. I think that’s a great point. I also definitely agree, Grace, when it comes to the need for low-latency coupling between classical and quantum compute in the context of things like quantum error correction. But also, I mean, I think about this often in my work on control and calibration: if we want to be able to do things in real time during computations, right, this is somewhere having classical compute resources tightly integrated with our quantum computer could be really essential.
Dylan Sim 28:26
Yeah, if I may start, I think actually one of the biggest primary challenges that we’ll probably face in the quantum computing community is actually not unique to quantum computing. I think whenever you try to build a very elaborate type of workflow, especially one that uses different types of, I guess, emerging tech or hardware, a big challenge lies in communication, in my opinion, and human communication. I mean, I feel like with any very complex workflow, we need to have very good agreement and clarity on kind of the inputs, outputs, data structures or assumptions that we make at each interface in order for everything to kind of work seamlessly. For example, from just my team’s perspective, if the problem that we’ve been given, or what we thought we wanted to solve, is actually not very well defined, or even misunderstood from the get-go, this means that my team will spend all this time compiling quantum circuits or applications that are actually not well-suited, or even suboptimal, for the actual workflow that we want to execute. And so in that way, the communication between teams, especially at the interface of different parts of the stack, is going to be a very big challenge in quantum computing, and it’s not unique to quantum computing at all.
Grace Johnson 29:48
Yeah, I totally agree with that, Dylan. I think I touched on this a little before, but figuring out how to program and run this very complex system, with a bunch of different kinds of physics and a bunch of different kinds of compute paradigms, is going to be a big challenge. And it requires, as Dylan said, human communication between different communities with different languages in terms of how they talk about things. And so we really need to move toward having some notion of open interfaces, or something that we can agree upon, where each party can innovate in their own space. Does that look like proprietary things or not? But having just some agreement on the stack, almost building toward a stack, is going to be really important. Because we've even seen this a little bit with GPUs, right, in the HPC community, where there's such a barrier to adoption of this new kind of compute paradigm, and the transition can be slow. So what are the tools that we can make such that people can use these systems easily? And I think there are some interesting questions about how AI tools can help us there. But yeah, I think that's a huge challenge.
Alicia Magann 31:10
Yeah, I was gonna take things in a little bit of a different direction, building off of what you said, Dylan. A big challenge is how interdisciplinary solutions need to be. So we have different levels of the stack, right? We have engineering; we have controls; we have compilers. We have higher-level algorithms and applications design, benchmarking, characterization. We have experiments. We have different platforms. It's become this really broad and deep field, and the problems often end up needing solutions that go up and down the stack or go across different platforms. And so it becomes really essential to collaborate and to approach things from a cross-disciplinary perspective. And I think that can be really hard. One of the hardest things that I do is talking to people with totally different technical backgrounds than I have, and trying to solve problems together, understand each other, work together and make progress. I think that can be really challenging, but also just hugely, hugely rewarding. And as the field continues to grow, we want to keep bringing in people that have different backgrounds to contribute to further progress, and that will probably continue to be challenging, but hopefully also rewarding.
Jacob Bringewatt 32:50
Yeah, I think these are all really fantastic answers. I 100% agree that this interdisciplinarity is both a challenge and a strength of the field. And I'll add one additional thing that makes this hard, and why we need all this expertise, particularly as we try to bridge this gap toward having a large-scale fault-tolerant quantum computer that does all the wonderful things we want it to do: classical computers are really good. That's in some ways one of our biggest challenges as well. A quantum computer is a very special-purpose tool. And so trying to find practical uses for it that beat classical computers requires, first, just knowing what these practical problems are that we can't solve classically, and then learning to think about them from the perspective of quantum algorithms. Those are two very different pieces of expertise that require communication between people with very different training. I've struggled a lot with this recently, trying to learn some quantum chemistry, talking to some chemists, and thinking about how we can simulate some interesting molecules near-ish term. It's really hard, right? It's really hard to bridge these communication barriers, and I think it's 100% necessary.
Dylan Sim 34:08
I think when we’re talking about comparing quantum and classical computes, something that I still haven’t fully resolved in my mind is where does the actual kind of power or capabilities of quantum computers come from? I think we’re all familiar with how quantum computing is usually sold to kind of the general public, of talking about phenomena like entanglement and computing stuff, quote, unquote, in parallel, through superposition. But I think that’s kind of like an incomplete story. I think there’s still a lot of fundamental kind of algorithms research that can go on to kind of really better pinpoint down in what specific context can we make very good use of these quantum phenomena. Just because, for example, I often see several kind of applications where perhaps the quantum evolution or the simulation part may be very efficient, but once you get to kind of the data-extraction step or the measurement problem, this is where a lot of the speedups get wiped out or reduced. And so in my mind, there still is a missing part of this story that we as a community could better understand towards this idea of finding quantum advantage.
Jacob Bringewatt 35:28
Yeah, this is a really good point, right? There's this general sort of folklore opinion: you need some amount of structure that the quantum computer can take advantage of, but not so much structure that the problem is also easy classically. And that's a true statement, but I think having a sharper sense of it is really important. Honestly, I would even take a step further back and say I would like a better understanding of just quantum mechanics, period, which is partially why I'm interested in the field. I think it provides another way to think about that very basic question. Feynman gets quoted too much in quantum computing, but there's this idea of: who actually understands quantum mechanics? And it is over-quoted, because I think we do understand it mathematically very well. But what exactly we understand at an intuitive level, enough to decide, okay, is this problem definitely good for a quantum computer? I don't think we have a clean answer to that. Maybe this is hard also for things like GPUs, but I feel like I can at least quickly say what a GPU is good for.
Alicia Magann 36:33
It is wild, right? We have this theory of quantum mechanics, which is so unintuitive, and now we're trying to make quantum computers based on this wild, unintuitive theory. When you think about what that means, it's exciting, and it's kind of crazy. I do step back sometimes and just look at some of the demonstrations that people do with quantum hardware, and I think about all of the engineering that goes into those things, and just the fact that we can do these things in real life, that we can do these experiments and get clean results, is incredible. I mean, it's completely amazing. And we can do so much more today than we could even three or five years ago, from just a quantum mechanics perspective, computing and applications aside. It's incredible.
Dylan Sim 37:35
Yeah, this conversation is just making me think, like maybe we as a community will understand quantum mechanics and theory just enough to build one, but then we'll need to use it to better understand, or more completely understand, quantum mechanics and theory. That actually kind of sounds like what your group is interested in, Jacob.
Jacob Bringewatt 37:57
Yeah. I mean, to me, this is the exciting question, right? If we have a quantum computer, fantastic, both for applications like drug discovery and so on and so forth, but also for this feedback loop: having a large-scale quantum system that we can control will hopefully give us another tool for insight. Because we do have intuition, but it's mathematical intuition. And I think this is the hard part about quantum mechanics. We have more abstract ways to think about, say, Newtonian mechanics; I could think about Lagrangians or Hamiltonians, these more abstract mathematical things that I could also have mathematical intuition for. But at least I can always go back to Newtonian mechanics, which feels very physical, and I know how the world around me at my scale works.
Grace Johnson 38:45
You can drop an apple on your head.
Jacob Bringewatt 38:46
Exactly right. Like, a vector makes sense; a force vector makes sense, right? Our intuition for quantum mechanics is at a remove, right? It's mathematical intuition, and then we can predict experiments. And I sometimes wonder if the fact that this still bothers me just means that maybe it's okay, and I shouldn't expect more than that, because I'm not an atom. But I would like to think that this computational perspective on quantum mechanics can give us a little bit more. And if we can build a computer and discover a lot of other cool stuff along the way, that's also a wonderful side effect.
Sarah Webb 39:29
I then asked these four computational scientists to reflect on how the various research environments and perspectives within industry, academia and the national laboratories contribute to solving these complex, interdisciplinary problems. Grace Johnson and Dylan Sim started with their industry perspectives, followed by Alicia Magann with the national labs and Jacob Bringewatt with academia.
Sarah Webb 39:57
So what’s unique and interesting about the sector you work in? How do you see these parts coming together to drive quantum computing now and in the future?
Grace Johnson 40:13
Maybe I can hop in here, coming back to that point about scaling. We're getting to the point where we're thinking about building toward utility-scale machines, right? These are no longer in research labs; they're being scaled. And so I think industry, and I'm in industry, plays an important role in that scaling, specifically for software development platforms, which is what I work on. Having coded in my PhD, and now seeing what can be done in terms of software engineering in industry, it's just really impressive. So having the ability to concentrate resources on that engineering, whether software engineering or other engineering-heavy research, within industry is a really important part of that triangle. And maybe I'll let others pop in about other areas. But yeah, obviously they all need to come together eventually.
Dylan Sim 41:08
Absolutely. We talked previously a lot about these complex quantum-classical workflows. And I would say maybe this is not so unique to industry itself, but at a place like PsiQuantum, where we are building both the hardware and the applications, the human communication I mentioned previously is still a really big challenge, but it's slightly easier when things are all together, at least in one place. Still not easy, though. For example, from my standpoint as an algorithms scientist, it's really valuable to get information that is specific to the architecture or the hardware that we're building at the company. At PsiQuantum, we have a metric called the active volume, which is technically hardware-agnostic or architecture-agnostic but is well suited to our photonic devices. And so now we do most of our quantum resource estimates using this active volume metric instead of the T- and Toffoli-gate counts that the community has broadly used.
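For contrast with active volume (whose definition is PsiQuantum-specific and not reproduced here), the conventional logical resource estimate Dylan mentions is essentially a tally of expensive non-Clifford gates. A minimal, hypothetical sketch, folding Toffoli gates into an effective T-count via the textbook seven-T-per-Toffoli decomposition:

```python
def total_t_count(t_gates: int, toffoli_gates: int, t_per_toffoli: int = 7) -> int:
    """Conventional logical resource tally: fold Toffoli gates into an
    effective T-count using a chosen decomposition (7 T gates per Toffoli
    in the standard circuit; fewer are possible with measurement-based
    tricks, so t_per_toffoli is a modeling choice).
    """
    return t_gates + toffoli_gates * t_per_toffoli

# Hypothetical circuit: 1,000 raw T gates plus 500 Toffolis.
print(total_t_count(1_000, 500))  # 1,000 + 500 * 7 = 4,500
```

Because non-Clifford gates dominate the cost of magic-state distillation in many fault-tolerance schemes, counts like this have served as a rough, architecture-independent proxy for runtime, which is exactly what architecture-tailored metrics try to refine.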
Dylan Sim 42:22
I do think the community overall is shifting more toward architecture-tailored resource estimates. But just to say, I couldn't easily get this type of information unless I had close access to the architecture and hardware scientists. On the flip side, for national labs and the parts of academia that are not tied to a given architecture, there could be a lot of strength, because they can come up with some really nice, generalized methods. I'm sure Alicia will have a lot to say about the calibration protocols that she and her group develop. Another really exciting aspect of academia and national labs that I've been thinking about lately is the idea of evaluating across different platforms that are being developed by different companies and different labs. I think these kinds of evaluation efforts, by, for example, the DARPA benchmarking program, have been really, really helpful for the entire community, keeping us accountable and on track toward building utility-scale quantum computers. And like Grace mentioned, this triangle of all three parts is going to be very crucial for us actually reaching a utility-scale device.
Alicia Magann 43:35
Yeah. So one of the most wonderful parts about working at Sandia is the people. So we have a large program in quantum computing and just incredible scientists working on so many different aspects of quantum information science, from experiment to theory to modeling and simulation. And so I think it’s a wonderful place to do kind of big, collaborative science in service of trying to advance quantum computing technology. I think that something that I have come to really appreciate is that I am constantly asking myself, is the work that I’m doing targeted in the best way to sort of maximize the chance of it having a real impact on the field? I think something that’s changed for me as I’ve worked at Sandia is I’m less thinking about what are my next papers going to be. It’s more, you know, what is the real open problem here, and how can I try to chip away at it in a meaningful way? And so I like that tie to applications that I feel at Sandia, and then the opportunity to work together with so many great colleagues is really wonderful. I like to say, and I feel like it’s true: It’s kind of a golden age right now in Sandia’s quantum program. There’s just a lot happening, and it feels very exciting.
Alicia Magann 45:02
But to pick up on something that you teed me up for, Dylan: yeah, I do spend a lot of time these days thinking about approaches for calibrating quantum computers in the near term and the far term. And the perspective that I take is something that I have learned from colleagues. We have a group at Sandia called the Quantum Performance Lab, and that group specializes in developing platform-agnostic methods for quantum-computer benchmarking, characterization and calibration. It is challenging to keep things platform-agnostic, but it's so rewarding, because it means that if we develop a calibration framework, we can then support its implementation across different qubit modalities and so on, and see a lot more of an impact than we could if we were narrowly focusing on a single hardware platform. And so I think that's a cool example of the kinds of things we can do at national labs: we can try to do research that has broader impacts and supports progress across platforms.
Jacob Bringewatt 46:22
I think there are sort of two pieces to the answer of why I'm happy to remain in academia. One is on the research side: on the downside, obviously, academic groups can't solve these scaling problems in the same way that industry can, or bridge this gap the way national labs are able to, as you all have discussed. But academia is the place best suited to basic, curiosity-driven science. And obviously there's a lot of wonderful curiosity-driven basic science coming out of industry groups and national labs as well, but this is, I think, its most natural home. Academic groups are also likely to be among the core users of an eventual quantum computer. I work a lot, for instance, with people interested in either quantum chemistry or lattice gauge theory, so wanting to simulate quantum systems, obviously, is relevant to physicists. So this is one front.
Jacob Bringewatt 47:17
I think the other front, which we haven't really touched on, is also important, equally important, maybe even more so. I'm at an undergraduate institution, so that front is education, right? I hope that the students I teach gain some understanding of what all this is actually about, and are able to judge for themselves what quantum computers actually do and what they can be used for. So I think this education piece is super important, both for those who go on to become scientists and members of the quote, unquote, quantum workforce of the future, and also for citizens who are aware of what's going on with this technology. And so I think it's really rewarding to be part of that as well.
Sarah Webb 48:03
Thank you to Alicia Magann of Sandia National Laboratories, Dylan Sim of PsiQuantum, Grace Johnson of NVIDIA, and Jacob Bringewatt of the U.S. Naval Academy for joining me for this special quantum computing roundtable episode. To learn more about each of these guests and their work on quantum research and computing, please check out our show notes at scienceinparallel.org.
Sarah Webb 48:29
Science in Parallel is produced by the Krell Institute and is a media project of the Department of Energy Computational Science Graduate Fellowship program. Any opinions expressed are those of the speaker and not those of their employers, the Krell Institute or the U.S. Department of Energy. Our music is by Steve O’Reilly. This episode was written and produced by Sarah Webb and edited by Susan Valot.
