Nearly a decade ago, the U.S. Department of Veterans Affairs and the Department of Energy launched the MVP-CHAMPION initiative, not for sports, but as a data-driven strategy for improving healthcare outcomes for veterans and others. Silvia Crivelli of Lawrence Berkeley National Laboratory turned her skills in computational biology toward this new field, especially the problem of identifying veterans at high risk for suicide. As she and her colleagues worked on this challenge, large language models and the notion of foundation models emerged. Now her team is focused on a more comprehensive challenge: a foundation model for medicine and healthcare.
You’ll meet:
- Silvia Crivelli is a staff scientist in the Applied Computing for Scientific Discovery group at Lawrence Berkeley National Laboratory, where she’s worked for more than 25 years. Her research applies artificial intelligence to medicine and healthcare with the goal of combining biomolecular and clinical data. She works on the MVP-CHAMPION research initiative between the U.S. Department of Veterans Affairs and the Department of Energy, which focuses on precision medicine for veterans and the broader population.

From the episode:
MVP-CHAMPION (Million Veteran Program-Computational Health Analytics for Medical Precision to Improve Outcomes Now) was announced in 2017 as a partnership that combined data from the VA’s Million Veteran Program with the DOE’s computational expertise and high-performance computing resources. Silvia got involved in this work after a meeting with Kathy Yelick, Berkeley Lab’s computational science area lead at that time.
To learn more about how to work with data from electronic medical records, Silvia and her team started using public, deidentified data from MIMIC, the Medical Information Mart for Intensive Care.
The VA’s initial model for identifying veterans at the highest risk for suicide was known as REACH VET (Recovery Engagement and Coordination for Health – Veterans Enhanced Treatment). The original version used only structured data, but a newer version includes additional information from the unstructured data. An article in Nextgov/FCW from July 2025 describes those efforts.
When she described how her team started their research, Silvia mentioned Word2vec and RNNs (recurrent neural networks), early tools for extracting information from text.
Within our conversation, Silvia mentioned two foundational papers about LLMs: Google’s “Attention Is All You Need” and Stanford University’s “On the Opportunities and Risks of Foundation Models.”
Related episodes:
- On HPC for improving human health – Amanda Randles: A Check-Engine Light for the Heart
- On the importance of verification and validation for medical devices – Paulina Rodriguez: Building Credibility and Authenticity
- And the rest of our Season 6 episodes on foundation models
Transcript
Prepared using Otter.ai with human copyediting
Sarah Webb 00:03
This is Science in Parallel, a podcast about people and projects in computational science, and I’m your host, Sarah Webb. Before we start, a quick content note, this episode discusses suicide and computational research to identify people at risk.
Sarah Webb 00:23
On the podcast, we’ve been talking about AI for science, and one area where AI could help researchers understand big data is in medicine, where genetic information, billing codes, wearable data and medical notes can point out patterns that could help doctors tailor treatments, diagnose disease earlier and maybe even prevent people from getting sick. In 2011, the US Department of Veterans Affairs launched its Million Veteran Program, a research cohort where individuals provide a blood sample for genetic testing, complete surveys and grant secure private access to their electronic medical records.
Sarah Webb 01:06
In 2017 the VA and the Department of Energy announced a partnership to use leadership-class supercomputers to analyze those data. The project was a marriage of the VA’s unique, vast dataset with the development of exascale computers at the DOE. It offered an opportunity to improve the health care of veterans, advance science and support technical innovation. This was all before the development of large language models and generative AI, but the fundamental ideas behind those technologies, machine learning and data-driven computational research, were motivating this work.
Sarah Webb 01:48
My guest, Silvia Crivelli, is a staff scientist in the Applied Computing for Scientific Discovery Group at Lawrence Berkeley National Laboratory. At that time, she and her team had been using computational models to study biological molecules, and this project offered the opportunity to expand their work into human health. Silvia and I spoke about this research, particularly her team’s work on models for predicting which people are at greatest risk for suicide. More recently, Silvia and her colleagues have been thinking even bigger: about building a foundation model for medicine.
Sarah Webb 02:28
Silvia, it is great to have you on the podcast.
Silvia Crivelli 02:31
Thank you. It’s great to be here.
Sarah Webb 02:33
I know that you’ve been working on computational biology, computational medicine for quite some time, and one of the reasons I brought you on is because you are thinking about large language models in this space of medicine, and I want to hear about this ongoing partnership between the Department of Energy and Veterans Affairs. Talk a little bit about this partnership. When did you get involved, and what were the goals?
Silvia Crivelli 03:01
I got involved in 2017. I remember it was summer, and I was hosting and mentoring a group of a faculty member and two students from Hood College. We were working on applying machine learning to rank or score protein models, models created computationally, and we wanted to see how far from or how close to the actual model they were. So we had a lot of data at that time, models that had been created by different collaborators of mine. And because we had so much data, we thought, okay, this is a good opportunity to use machine learning and deep learning to score these models and see if we can improve scoring functions. And I was really excited about the results that we were seeing with this team.
Silvia Crivelli 04:03
At that time, there was a meeting. And remember, Kathy Yelick was the computer science area lead at that time, and she called for this meeting and said, Well, we have this new project with the VA. There is no funding involved yet, but there is a lot of data. The idea is to combine the data and the subject matter expertise at the VA with the resources the DOE labs have to offer: computing resources, but also expertise in high-performance computing, data science and machine learning. And is there anyone who is interested?
Silvia Crivelli 04:48
And I thought, Oh, what an exciting opportunity to apply the techniques that we were using at that time for proteins to a new field, given the results were really promising. And I went back to my office, and I remember I discussed this with my visitors, and I could see their faces, and they were so excited, so happy. They said, Yes, that would be awesome if we could do this. And the problem was that we didn’t know anything about healthcare, electronic health records or any of that data. We hadn’t done anything really in medicine or healthcare, so it was a totally new field for us. But there was this excitement, this opportunity to get the chance to do something that had societal impact. And I think that that was a big motivator for all of us. So I went back to Kathy and said, You know what, I’m interested, I am in. And so she put me in touch with some people. There were people from other labs. There were people from the VA, physicians from the VA, and that’s how it all started.
Sarah Webb 06:05
Silvia included the Hood College students who had been in her lab that summer.
Silvia Crivelli 06:11
And I got a little funding. I said, Hey guys, do you want to stay so that we can do more of this? We can spend all of the fall trying to learn about this data, these problems. And they said, Yes, of course. And so that’s how we started. That was the beginning of our VA collaboration.
Sarah Webb 06:34
And I guess at that point it’s just kind of trying to figure out what data was available and how you might be able to use it?
Silvia Crivelli 06:40
That was another problem, because the data didn’t come quickly. We had to wait until 2019 to actually see the data.
Sarah Webb 06:52
But Silvia and her team wanted to begin working on the problem, so they searched for medical data sets where they could begin to develop models. Initially, they were interested in three primary areas: cardiovascular disease, prostate cancer and suicide prevention.
Silvia Crivelli 07:08
In the meantime, one thing that we found out is that there was not much data publicly available. So we used a dataset called MIMIC that was developed by professors at MIT.
Sarah Webb 07:23
MIMIC stands for the Medical Information Mart for Intensive Care and is a public, deidentified data set from patients admitted to Beth Israel Deaconess Medical Center in Boston between 2008 and 2019.
Silvia Crivelli 07:38
And that was really important for us, because that data, even though it was small, or smallish compared to what the VA data was, it was good enough for us to see, okay, this is what the data looks like.
Sarah Webb 07:55
And studying those data helped them figure out how they’d approach their work with the VA data.
Silvia Crivelli 08:00
There are two modalities of data within the electronic health records. There is the structured data, and there is the unstructured data. The structured data is all the demographics, all the codes, which are billing codes. So when you go see the doctor, the doctor will have some codes that they put in the chart, which are the ones that are used to get paid by the insurance. All these codes, the lab work, the procedures that might be recommended: all of that is part of the structured data.
Silvia Crivelli 08:40
But then there is the unstructured data, which is the notes, every written part, all the notes that the clinicians, the healthcare providers, the nurses take. Everything that is written is part of the unstructured data. And what we saw at the time is that the unstructured data was not being used. The models that were developed were basically focused on the structured part of the electronic health records. Going back to the data: MIMIC had the two modalities, so we could clearly see what to use for one, what to use for the other, what we could do, and imagine, okay, in the context of the exemplars, which at that time were suicide prevention, prostate cancer and cardiovascular disease. Those were the cases that we needed to work on, and MIMIC gave us some examples. Especially for cardiovascular disease, we had some good enough cases that we could use for developing initial models that would give us some ideas about how this would work later.
Sarah Webb 10:05
What was the motivation behind looking at these medical conditions?
Silvia Crivelli 10:10
So originally we had suicide, prostate cancer and cardiovascular disease. We obviously got to talk to physicians, experts in all three areas, and we had weekly meetings with them, so we got really involved in all the projects and what they needed. But suicide was such a difficult, difficult problem. Suicide affects the country and the world; there are high rates of suicide. But for veterans, the rates are much, much higher than for civilians. There are 17 veterans dying by suicide every day.
Sarah Webb 11:05
Wow.
Silvia Crivelli 11:06
And that number stuck with me, and I said, Oh, we gotta do something here.
Sarah Webb 11:11
Mental and physical health and significant life events contribute to a person’s suicide risk, but doctors have struggled to predict when those factors point to an immediate danger. As a prevention strategy, the VA had built a suicide-prevention model based on the structured data Silvia mentioned, such as billing codes and demographic data within electronic medical records. The VA would run this model against veterans’ medical records, and every month they’d pull out a list of people who met their highest suicide-risk criteria, the top 0.1%. A newer version of that model now uses natural language processing, but not a large language model, to incorporate additional risk factors from medical notes.
Silvia Crivelli 12:01
And they reach out to them, they call them, they discuss. But the problem is: this is far from accurate. So they might call people who are not at high risk, who get really upset because they are called. And they might not call people who really are at high risk. And they’re basically using structured data, lots and lots of variables from the structured data. But we realized, and that was one of the things that we were doing even before we got access to the VA data, that the unstructured data, the notes, were very rich in details about the patient. The doctors and the nurses will write: This person is going through a hard time because of unemployment, is going through a divorce, is having trouble with the law, has social isolation issues. There are few points in the structured data that tell you what is going on in the life of the person, but there is a lot in the unstructured data that we can use to get better models. And that was part of my idea at the time. This is challenging, and this really is the main reason to try to incorporate as many modalities as we can. Why not try? This is our opportunity.
Sarah Webb 13:37
So talk to me a little bit about how you started to work on this because it sounds like at the time you were starting, it was before people were talking about large language models in the way they are now. How did you start to think about working on this unstructured data when you were starting on this problem?
Silvia Crivelli 13:56
Yes, you’re right. Language models were not even mentioned at that time.
Sarah Webb 14:00
Right.
Silvia Crivelli 14:01
But we were getting close, and it was really interesting, because the private sector, the industry, was developing methods for dealing with text. They were clearly leading the field there, and so we were using whatever they were using. So we started with methods that, looking back now, were not really great: Word2vec, RNNs, models and methods that were developed to extract information from the data, to make summaries, to extract some knowledge from text. So we were using that. And the results? Like I said, the data is noisy. There is a lot of missing data. There are lots of abbreviations, jargon. It makes it so, so hard.
Sarah Webb 15:05
Silvia gave this example. Let’s say you’re trying to examine a risk factor such as housing instability. Words such as homeless or homelessness could be a start, but these words aren’t the only ones that could fit. In addition, searching for homeless would flag texts where people reported that they are not homeless or were talking about their family members or friends. So when you’re analyzing unstructured data, Silvia says, context is important.
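A toy sketch of the context problem Silvia describes. This is not the team’s actual pipeline; the function name, negation list and window size are invented for illustration. It mimics a NegEx-style rule: a keyword match only counts if no negation cue appears in the few words before it.

```python
import re

# Invented, minimal word lists for illustration only.
NEGATION_CUES = {"no", "not", "denies", "without", "never"}
KEYWORD = re.compile(r"\bhomeless(ness)?\b", re.IGNORECASE)

def flag_housing_instability(note: str, window: int = 4) -> bool:
    """Return True only for apparently non-negated mentions of homelessness."""
    tokens = note.lower().split()
    for i, tok in enumerate(tokens):
        if KEYWORD.search(tok):
            # Look back a few tokens for a negation cue before flagging.
            preceding = tokens[max(0, i - window):i]
            if not NEGATION_CUES.intersection(preceding):
                return True
    return False

print(flag_housing_instability("Veteran reports being homeless since June."))   # True
print(flag_housing_instability("Patient denies being homeless at this time."))  # False
```

Even this tiny rule shows why keyword search alone fails: both sentences contain “homeless,” but only one is a positive mention. Real clinical notes, with abbreviations and jargon, need far more than a fixed word list.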
Silvia Crivelli 15:36
So that’s why it’s so challenging. So anyways, we started with those relatively naive methodologies, and in 2017 there was a seminal paper called “Attention Is All You Need” by people at Google, and that created the foundation for the language models. So by 2018 we already had GPT-1; we had ELMo; we had BERT. We had the beginning of those language models, which we started to use. And let me tell you, GPT-1 and most of those early models were really not great, but, obviously, they were improving. Every year we had new iterations, and they were getting better and better. So we created what we call an NLP pipeline, and we were improving the pipeline as newer methodologies came out. The most difficult problem that we found is the way that the records are written: the notes have this part that we call semi-unstructured.
Sarah Webb 16:56
These are the yes-or-no, survey-type questions that doctors might ask during a routine exam, such as, have you experienced housing instability in the last two months? And that question might be followed by, do you expect to have housing challenges within the next two months?
Silvia Crivelli 17:15
In some cases the answer is uncertain. In some cases they would put a yes. In some cases they put an X. In some cases they put a check mark. When we used these models, they could see housing instability and would flag these as positive for housing instability, but maybe the question was answered negatively. That’s one of the many examples that I have. But we were extracting what we call dramatic life events for the patients, and we were using them in our models, adding those to the structured data. But the positive predictive value of those was pretty low due to the noise. So that was something that we had to adapt over and over and try to make better over the years, so that we could capture what we were looking for as accurately as possible.
Sarah Webb 18:18
And in 2021 researchers at Stanford University published their influential paper about foundation models.
Silvia Crivelli 18:27
And I remember reading that paper, and I got so excited. Oh my God, this is it. This is what we need to do. At that moment, we were not training any model from scratch. We were using models that had been developed by someone else, or maybe fine-tuning those models, like GPT or BERT. But there was something about this paper on foundation models. I said, Okay, if you have a very large data set, and you can pretrain a model with this large data set, then that model becomes a foundation model, and it can be used for a variety of different applications downstream, with very minor fine-tuning. And I thought, Oh my God, we have so much data at the VA. At that time, models like GPT-2 or GPT-3 were trained on billions of tokens. We had trillions of tokens.
Sarah Webb 19:58
Wow.
Silvia Crivelli 19:58
So we had so much data that we could train our own language model from scratch, so I wouldn’t have to worry about training one model for suicide and then training another one for sleep apnea, another one for lung cancer. I could train this big language model, this foundation model, and then fine-tune it for whatever application I might need. But not just me: I could give it to the VA physicians, and they could do the fine-tuning, because they won’t need a supercomputer for that. They can do fine-tuning with a much smaller data set. So the idea was: yes, that’s it. I am going to do this. This is an excellent opportunity. The VA and DOE were brought together to take advantage of the large data, the big supercomputers.
Silvia Crivelli 20:58
This is an excellent opportunity to actually bring all of these together, the data, the supercomputer, the subject matter expertise, to create this foundation model, and then leave this foundation model so that everybody can use it, even if they don’t have access, like I said, to a large supercomputer. So that’s what my main goal was, and has been, since then, and now we are at the point of really seeing the results of our effort. It was not something that could be developed quickly. It took a lot of time. We did an INCITE application to apply for resources from Frontier. We then had to convince the VA data managers that we needed to move the tokens out of the enclave where this data lives. I never mentioned this, but the data is highly protected and is stored in an enclave at Oak Ridge National Lab, and there is a cluster of GPUs there that we can use to handle the data.
Sarah Webb 22:22
Throughout our conversation, Silvia and I talked about various ways that they protect sensitive medical information.
Silvia Crivelli 22:29
So all of that is within those walls, and nothing can move. If you need to use a plot for a paper, you need to request it. For that plot to appear in print, somebody is going to look and make sure it doesn’t have any information that should not be seen by others, and then they will approve it. So it’s really protected. And so we had to convince them that we could move those tokens that we needed to train the foundation model from the enclave to Frontier, and that took a lot of time. So now, finally, we’re done.
Silvia Crivelli 23:12
So we had a budget to create two models. One is a small model of 1.62 billion parameters. The medium-sized model is 13.6 billion parameters. We are done with the small one. We’re midcourse with the medium-sized one, and now we are applying those to the different applications that we have.
Sarah Webb 23:40
I wanted to know what those models were telling them about identifying people at high risk for suicide.
Silvia Crivelli 23:47
In the case of suicide, I have not seen the results for the medium-sized model yet, but the small model has shown us two things that we’re preparing to publish soon. One is that it can predict better with fewer data points than other models that are not trained on VA data. We used Llama as a comparison, and our model is doing much better. But the model is also able to identify patients who were not identified by the structured-data-based models, so we can clearly see different groups of people being identified by these models. They, for example, have not so much mental health problems but physical problems: pain, chronic diseases, things like that. The other models were not doing well on these subgroups, and the language model is doing a really great job, and I look forward to seeing what the medium-sized model is able to produce. But I’m very excited about the results of this. It has been a long, long project, and finally, we’re seeing the results.
Sarah Webb 25:18
Obviously, it seems like physicians could take advantage of that sort of information right away.
Silvia Crivelli 25:24
Yes. Remember I told you, we have weekly meetings with the physicians, and they are super excited about all this, because clearly they are thinking about how the next stage now is bringing all these things that we’ve been doing in a research environment to a clinical environment and putting them into practice. And obviously we have to do more validation. We are working on doing better on explainability and things like that. But I do think that all these results and all these models that we have developed will, very soon, have an application in the clinical world.
Sarah Webb 26:11
You’ve alluded to some of the things that are involved in translating this into something that is used in the clinic. And obviously there are the privacy issues, making sure that information doesn’t identify individual patients.
Silvia Crivelli 26:25
The integrity of the data is critical here, and it’s highly protected by the VA, obviously for good reasons. How are we going to be able to, one day, move these models outside the environment of the VA so that other people can use them? You still need to do some work in terms of making sure that data doesn’t leak; these models can still leak data. Right now, these models are there for VA physicians to use but cannot go beyond that, and so more work needs to be done before they become available for general use.
Sarah Webb 27:07
As you’re thinking about making these models trustworthy and making sure that clinicians understand the predictions that they’re making, what do you as computational scientists have to be thinking about?
Silvia Crivelli 27:20
Well, first of all, like I mentioned before, we need to do a better job at developing methods that can explain what these models are doing. We have some idea, but we don’t know exactly how they come up with the answers that they produce. So we need to do a better job there.
Sarah Webb 27:43
It’s also important to consider how well the medical data used to train a model match the populations where doctors might apply it. MIMIC data, for example, were gathered from a single hospital in Boston. The VA’s dataset comes from people across the United States from a range of geographical regions with different socioeconomic factors. Training data that are more varied should make such a model more widely applicable. But veterans are also disproportionately male, which could mean that the model’s predictions are less accurate for females. So Silvia says computational scientists need to calibrate a model and quantify uncertainty within it and across subgroups so that researchers know whether that model’s predictions are useful for specific patients.
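One concrete form of the calibration check described above can be sketched in plain Python. The data and function name here are made up purely for illustration: for each subgroup, compare the mean predicted risk against the observed outcome rate; a large gap in one group signals that the model is miscalibrated for those patients.

```python
def calibration_by_group(records):
    """records: list of (group, predicted_risk, outcome) tuples, outcome in {0, 1}."""
    stats = {}
    for group, risk, outcome in records:
        s = stats.setdefault(group, {"risk": 0.0, "events": 0, "n": 0})
        s["risk"] += risk
        s["events"] += outcome
        s["n"] += 1
    # For each group, report mean predicted risk vs. observed event rate.
    return {g: {"mean_predicted": s["risk"] / s["n"],
                "observed_rate": s["events"] / s["n"]}
            for g, s in stats.items()}

# Fabricated example data, not real patient statistics.
records = [("male", 0.20, 1), ("male", 0.10, 0), ("male", 0.30, 0),
           ("female", 0.05, 1), ("female", 0.05, 0)]
for group, stat in calibration_by_group(records).items():
    print(group, stat)
```

In practice this is done with proper calibration curves and uncertainty intervals per subgroup, but the principle is the same: an overall well-calibrated model can still be unreliable for an underrepresented group, such as female veterans in a mostly male cohort.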
Silvia Crivelli 28:37
That’s the sort of thing that we need to be aware of and that we need to be extremely, extremely careful about. And then the other part: the doctor, the physician, has to always be in the loop. In all these pipelines that we develop, they are there all the time. They check every single thing that we develop. When we developed the lexicons, when we got these results, when we were doing validation, they were there; they were helping us. That makes me feel comfortable, and ultimately, I want them to make the ultimate decision. And then the other thing is, models may drift over the years. Things can change, and so they will have to be updated all the time. So when we are developing these models, we are considering that as well. What is the way that I’m going to keep updating these models without having to pretrain again? I don’t want to have to start from scratch, but I want to make sure that I keep adding new data so these models stay up to date.
Sarah Webb 29:47
I want to zoom out a little bit and talk about this notion of foundation models. We’ve been talking about generalizability. And that’s obviously a theme that has been coming up in this discussion that I’ve been having with researchers about foundation models. How do you define foundation models at this point, from your perspective and your work in this field?
Silvia Crivelli 30:10
A foundation model is, to me, the same definition that Stanford used in 2021, the one that really got me excited. It is a model that has been pretrained on large data and that can then be fine-tuned for a variety of different applications. That is the concept that motivated me to develop the VA language model, and it’s the concept of a foundation model that I still have in my mind. There are many things that we need to consider for the future. One thing is that I truly believe in multimodality, but I also believe that it goes beyond modality. It goes more into different domains.
Silvia Crivelli 31:08
For example, I always had the dream that we could understand health and disease by zooming in and zooming out. You can go all the way down to the molecular level, but then go up and see maybe the organ level, maybe the full body. But then it goes beyond that; it goes into my interactions with the environment. How are the social and environmental determinants of health affecting my health? We know that they do. And so all of that requires perhaps multiple language models, multiple agents, each one an expert in its field, just like if you had to do a consultation and you bring in experts on different areas, right? So maybe the foundation is not going to be one foundation model that knows it all. Maybe it’s going to be different foundation models, each one an expert in its own area. It’s a concept that is evolving, because eventually we have to integrate the different domains.
Sarah Webb 32:39
That would be really exciting to have that. I mean, that would be completely amazing, really. It sounds really hard.
Silvia Crivelli 32:45
It is. I never said that this was easy. It was never easy. And going back to the field that is occupying most of my attention right now, which is medicine and healthcare: one thing that I would like to mention, that I’m very excited about and that I think should be the next thing for us. We’ve been working on predictive models.
Sarah Webb 33:18
That’s not just her research on suicide: Silvia has used the same strategy to examine sleep apnea and risks of cardiovascular disease and diabetes. Doctors know that sleep apnea puts people at higher risk for these chronic diseases, but modeling data could help them personalize, and even quantify, that risk or help with issues such as weighing the benefits versus the side effects of lung cancer treatments.
Silvia Crivelli 33:46
Even though these are super important, and I look forward to having more and more of this, the thing that I really want is to get there way sooner. Can we do more preventive medicine? Can we say, way before this person gets sick, what are the things that this person, this patient, should do in order to avoid getting onto this path of disease? That is, to me, the most fascinating aspect of all this: getting to the patient before the patient becomes a patient.
Sarah Webb 34:26
What I’m imagining here is where a doctor could look at your profile and, you know, whatever information with this model, and say, Okay, I prescribe this much walking, this type of exercise, this type of diet, this type of thing to keep you in optimal health.
Silvia Crivelli 34:44
Yeah, yeah, yeah. And hopefully more than that; there will be more information than that, but yes, that is the idea. For example, there is a strong link between the bacteria in your gut, your brain and the diseases that you might develop in the future. So I want to know more about that. I want to know how that will influence things, because it’s not just about diet in general, eating maybe a healthy diet, but how can I keep control of my gut microbiome so that I can avoid getting these neurodegenerative diseases?
Silvia Crivelli 35:33
I mean things like that: more information, things that we haven’t seen yet, simply because they are not in the models. But if we are going to get to the point where we include all these modalities, we’re going to get data from my watch, from my phone, from my voice. My voice tells a lot about how I feel. My gut. There is a variety of information that is going to go into these models. And therefore I truly believe that the characteristics of the people who avoid going down a path of disease will be much richer; we will learn much more than what we know now.
Sarah Webb 36:25
As we wrapped up, Silvia and I talked about other computational issues: the new methods that are needed and how researchers can build the skills they need to work in this field.
Silvia Crivelli 36:36
One example is that we need to have more energy-efficient methods. For example, if we want to have long histories, or a long context window, in the model so that the model can read more information when it’s making an inference, those are computationally very expensive, and the computational expense grows quadratically with the length of this window. So this is a problem with the transformer. So maybe we need to think about what other models, what other techniques can be used to make things more efficient, and how we are going to evaluate our models. This is also very important, and there are many other things that we learned from our experience.
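The quadratic cost Silvia mentions is easy to see with a back-of-the-envelope count. This sketch (a deliberate simplification that ignores constants, model width and optimizations like sparse or linear attention) counts the pairwise token comparisons standard self-attention performs per layer.

```python
# Simplified: standard self-attention scores every token against every other,
# so the number of pairwise comparisons is the square of the window length.
def attention_pair_count(context_length: int) -> int:
    return context_length * context_length

for n in (1_000, 4_000, 16_000):
    print(f"{n:>6} tokens -> {attention_pair_count(n):>15,} pairwise scores")
```

Growing the window 16x (1,000 to 16,000 tokens) multiplies this count by 256, which is why long patient histories push researchers toward alternative architectures and efficiency techniques.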
Silvia Crivelli 37:29
But if there is one message that I would like to convey today, especially for your young audience, it is: get to pretrain your own model. Yeah, it’s fun to use Gemini. It’s fun to use GPT. But it’s a different level to pretrain your own model. So do it, because that will give you an idea of what these models are doing under the hood, and we need that. We desperately need that. This is what we need for the workforce today, and will need in the future: more people who are very familiar with how these models work. So that would be my message. Get some dataset. Work through all the issues that we discussed. Think about doing the pretraining, and if possible, see if you can get to develop a foundation model and see what you can do with it. I think that will give you some quite unique expertise.
Sarah Webb 38:43
Silvia, that sounds like a great place to end. Thank you so much. This has been such a pleasure, and I’ve learned so much from talking with you today.
Silvia Crivelli 38:51
It was really fun. Thank you so much for having me.
Sarah Webb 38:56
To learn more about Silvia Crivelli and for links to her research and other articles mentioned in this episode, please check out our show notes at scienceinparallel.org.
Sarah Webb 39:09
And if you or someone you care about is in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.
Sarah Webb 39:25
Science in Parallel is produced by the Krell Institute and is a media project of the Department of Energy Computational Science Graduate Fellowship program. Any opinions expressed are those of the speaker and not those of their employers, the Krell Institute or the U.S. Department of Energy. Our music is by Steve O’Reilly. This episode was written and produced by Sarah Webb and edited by Susan Valot.
