Computational science requires translation: breaking ideas and principles into pieces that algorithms can parse. The work requires experts who can zoom in on core computer science while also stepping back to make sure the big scientific questions are addressed.
This guest, Sunita Chandrasekaran of the University of Delaware, moves seamlessly across these layers, from working with students and postdocs on fundamental software to collaborating with researchers on questions ranging from physics to art conservation to helping shape AI policy in her state. In our conversation, we discuss the rapid pace of artificial intelligence, the synergy among academia, the national labs and industry, and keeping humans at the center of AI innovation.
You’ll meet:
- Sunita Chandrasekaran directs the First State AI Institute at the University of Delaware and is an associate professor of computer and information sciences. She is also the vice-chair of Delaware’s state AI commission. She has worked as a computational scientist at Brookhaven National Laboratory and served on the U.S. Department of Energy’s Advanced Scientific Computing Advisory Committee. During a sabbatical, she completed two visiting researcher stints in industry, first at Hewlett Packard Enterprise and then at NVIDIA.
- Sunita was named the 2025 Emerging Woman Leader in Technical Computing by the Association for Computing Machinery’s Special Interest Group on High Performance Computing.

From the episode:
Sunita discussed the evolution of her work at the University of Delaware: her research group, the university’s AI Center of Excellence, founded in 2022, and the First State AI Institute, founded in 2025.
In talking about LLMs, Sunita mentioned the Hugging Face website and a variety of models: Mistral, Qwen from Alibaba Cloud, DeepSeek, Code Llama and NVIDIA’s NVLM.
As an example of her work on AI for Science, she mentioned an ongoing collaboration with plasma physicists and computational scientists at the Helmholtz-Zentrum Dresden-Rossendorf in Germany.
Sunita mentioned how she encourages her students to pursue internships and co-op training opportunities at the national laboratories or in industry. She specifically mentioned the DOE’s Science Undergraduate Laboratory Internships (SULI).
As we were talking about the Delaware state AI commission, Sunita mentioned the state’s sandbox for testing AI technologies and its partnership with OpenAI to foster AI literacy in the workforce. More about those efforts is available in this GovTech article.
Sunita mentioned the breadth of projects at the First State AI Institute, including learning more about the science of art conservation and visiting the Winterthur Museum near Wilmington, Delaware.
Related to career development, Sunita mentioned the role of research software engineers (RSEs) in computational science. Learn more about that work and career path through the Society of Research Software Engineering.
Additional reading:
Recent work from Sunita’s plasma physics collaboration with HZDR: The Artificial Scientist: in-Transit Machine Learning of Plasma Simulators
A preprint from her University of Delaware team, to appear at IEEE HiPC in December 2025: LLM4VV: Evaluating Cutting-Edge LLMs for Generation and Evaluation of Directive-Based Parallel Programming Model Compiler Tests
Related episodes:
- Ian Foster: Exploring and Evaluating Foundation Models
- Prasanna Balaprakash: Predicting Earth Systems and Harnessing Swarms for Computing
- Lois Curfman McInnes: Building Software Sustainability and Broadening Workforce Participation
- Silvia Crivelli: Understanding Suicide Risk and Building a Foundation Model for Medicine
Featured image courtesy of Sunita Chandrasekaran and from the highlighted paper above: The Artificial Scientist
Transcript
Sarah Webb 00:03
Over the past year on Science in Parallel, we’ve been talking about foundation models and AI for science. I’m your host, Sarah Webb, and this guest grapples with these issues on every level: fundamental software, real-world applications and shaping AI policy.
Sunita Chandrasekaran 00:32
My name is Sunita Chandrasekaran. I’m the director of the First State AI Institute at the University of Delaware, an associate professor of computer and information sciences at the university as well, and vice chair of the state of Delaware AI commission.
Sarah Webb 00:49
I first spoke with Sunita several years ago about a plasma physics collaboration. She’ll mention it during our conversation. At that time, she and her colleagues were translating complex ideas from physics into code that could run on GPUs and testing software strategies that could perform well on the Department of Energy’s first exascale computer, Frontier, at Oak Ridge National Laboratory. That’s an example of how she was thinking deeply about how science and computing drive each other before AI put a brighter spotlight on this research. Even though her professional home is at a university, Sunita has worked at Brookhaven National Laboratory and as principal investigator on a component of the DOE’s Exascale Computing Project. She’s also spent time as a visiting researcher at both HPE and NVIDIA.
Sarah Webb 01:43
Here we talk about her experience in those sectors and how academia, the national laboratories and industry fit together to take on complex science and computing challenges. We also discuss her work on state AI policy in Delaware and what she’s learned from thinking about ethical questions, guardrails and AI for the public good. Join us for a conversation about computation in translation, building technology tools for science and beyond, and how she’s grappling with the rapid changes in AI that affect both her students’ careers and her own work.
Sarah Webb 02:25
Sunita, it is great to have you on the podcast.
Sunita Chandrasekaran 02:28
Thank you very much for talking to me, Sarah. I’m looking forward to this.
Sarah Webb 02:32
I really want to get a sense of what you’re working on right now. You are wearing a lot of hats.
Sunita Chandrasekaran 02:39
Yeah, great question. So I think there are several pieces of my pie, as you have already figured out. The current set of roles and responsibilities entails my research group, for sure, where I have four Ph.D. students, several undergraduate students and several master’s students. That’s where we pursue cutting-edge research questions and topics in high-performance computing, machine learning and AI. Some of them are working on foundational computer science. Some are working on interdisciplinary computer science, meaning applying computer science to real-world problems. So that’s my research group, the computational research and programming lab that I established, actually, 10 years ago. It’s been 10 years since I came to UD.
Sarah Webb 03:23
Wow.
Sunita Chandrasekaran 03:24
The second major piece of the pie is the First State AI Institute, which I started to direct in summer this year. It’s a relatively new initiative, where the mission of the institute is to invest in state-of-the-art AI infrastructure to provide the UD community with the compute resources necessary to develop newer AI solutions for their respective domain sciences. Domain science meaning plant and soil science, coastal science, fintech, you name it. So what would it take for research scientists, staff and faculty to create newer AI models, including LLMs, or retrain some of the existing LLMs, to open up questions in their respective scientific domains? That’s one of the missions. The other mission is, of course, workforce development: educating not only our students but also faculty, staff and professionals. Because I guess we’re all in a somewhat train-the-trainer program in the space of AI, learning literally on a daily basis, right?
Sunita Chandrasekaran 04:42
So there is research and teaching, and UD is also looking into different operations that could be modernized using some of the AI tools and software. How do we do that? And not to forget, in all of these different pieces where we are trying to solve or initiate some solutions, there is the ethics piece, right? We have to create ethically correct AI models, transparent AI models, which is why we are big on the open-source angle of the AI solutions we will create, which will also lead to explainable AI models. Meaning, if it’s a black box, how do we open up the black box to understand how the AI gave us an output: what goes in, what comes out? And if we have control over what goes in, which is the dataset, then we may be able to better understand what comes out of these AI models. So the long and the short of it is, we are excited to have created this new AI institute, and we’re looking forward to innovating and educating our university cohort.
Sunita Chandrasekaran 05:44
The third major piece of the pie is that I’m vice-chairing the state of Delaware AI commission, and we can talk about it as we go down the storyline. And this is good timing. It’s 10-year legislation, and we kicked this off last summer, and it has been awesome to work with several of the state employees and different divisions. My chair is Representative Krista Griffith. We’re trying to understand, you know, how do we get ahead of the game in the space of AI, and what does that even mean? So these three are some of the major pillars and the different hats I wear on a daily basis.
Sarah Webb 06:24
And constantly switching back and forth among them, I imagine.
Sunita Chandrasekaran 06:27
That’s right, a lot of context switching. Yes.
Sarah Webb 06:29
So, I mean, obviously AI is probably about the hottest topic around in computing and science, with the massive implications that you’ve been talking about. When did you realize how big and transformative AI was going to be, and what has it been like to be in a field that now has this gigantic spotlight?
Sunita Chandrasekaran 06:52
Oh, that’s a fantastic question, which I’ve wondered about myself. I believe sometime around the timeframe of 2023, 2024 is when, in my humble opinion, the space of, you know, where we are with AI just blew up. And the reason I’m indicating that particular time frame is that the University of Delaware had funded an AI Center of Excellence in 2022, and we had just then started, right? We were just trying to understand what it means. And back in 2022, I think the space was relatively quiet. We started working with faculty of different disciplines across campus, and we started to solicit case studies from different disciplines and understand what their needs are. By then, we were around 2023-ish: ChatGPT. At the time, it was 1.0, 2.0, and then came all the other different models, which have now literally blown up. And then we had departments like fashion technology, a department like art, you know, non-STEM, that started to want to be part of this and wanted help to build tools of very different types. There were also education cohorts on campus that started to build AI tools that could be used by students and faculty in classrooms. So by around mid-2024, it was already starting to feel like, oh, we can’t keep up with this. What’s happening? And that led to proposing another idea to UD, this First State AI Institute. The pitch literally happened last fall, 2024, and then it took off.
Sunita Chandrasekaran 08:26
So even within my core research group, we started exploring the usage of large language models for software creation. And before we could finish using one LLM, we would go back to the Hugging Face leaderboard and see 10 more LLMs had popped up, and we ended up in a spot where we couldn’t keep up, right? So it feels like a tremendous opportunity to be in this field at this moment. It’s overwhelming, but I call it pleasant overwhelming, because it’s an opportunity. I think we just jump on the bandwagon and we try and see what this means for us, because it’s easy to get lost in this web, but it’s critical to understand what it means for UD, what it means for the state, what it means for the Mid-Atlantic region, the East Coast, the rest of the country and the rest of the world. But we’ve got to start somewhere, right? And here is where we are. So to me, this is just a fabulous opportunity, and I’m glad we are at the center of it.
Sarah Webb 09:20
How are you thinking about LLMs, or other types of foundation models, in your own research?
Sunita Chandrasekaran 09:26
So in our own research, we have started to look into, or try and evaluate, some of the LLMs, you know, from the Hugging Face website. For example, Mistral’s 7 billion instruct models; Qwen from Alibaba Cloud; DeepSeek, another popular model, of different parameter counts, right? Could be 7 billion, 33 billion, what have you. Code Llama, and NVIDIA has its own, NVLM, and so on. So we created a new project around 2022, 2023 to try and evaluate the then models (and many of these models that I mentioned have evolved over a period of just one and a half years) to try and see, how could they generate small test code, meaning C, C++, Fortran. How are these LLMs performing when we have them, you know, generate test code?
Sunita Chandrasekaran 10:20
So that led to three publications where we were evaluating. We are not pretraining; we are not training an LLM from scratch. We are going down the list of, you know, potential LLMs trained on code and just turning them toward our purpose, right? And that alone has led to a lot of questions, a lot of metrics that we started to define to pin down the accuracy. Because when you’re generating code, you want it to be 99% accurate. What does that mean, right? What is the definition of accuracy here? What are the metrics? And this is just, you know, for one particular project, which is code generation. Now, if I step back from it, and if I look at the number of disciplines we have on campus, as well as the real-world applications I work with outside of UD, it’s mind-blowing to see what models are out there and what models are to be customized for the research that we want to do. Because the catch is, there are models, but are they for you, or are they for somebody else, doing something else? I think those are some constant questions that we keep asking.
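To make the kind of evaluation loop Sunita describes concrete, here is a minimal Python sketch: prompt a code-trained LLM from Hugging Face for a small directive-based test program, then score the output on whether it compiles. The model ID, prompt and compile command are illustrative assumptions, not the LLM4VV pipeline, which defines much richer metrics.

```python
# Minimal sketch of an LLM code-test evaluation loop (illustrative only).
# Assumptions: the codellama/CodeLlama-7b-hf checkpoint as a stand-in model,
# and a GCC build with OpenACC support (-fopenacc); swap in whichever code
# LLM and compiler you actually have.
import subprocess
import tempfile

from transformers import pipeline

generator = pipeline("text-generation", model="codellama/CodeLlama-7b-hf")

PROMPT = (
    "// Write a minimal C program that tests an OpenACC 'parallel loop'\n"
    "// directive by summing an array on the device and checking the sum.\n"
)

def compiles(source: str) -> bool:
    """Crudest possible metric: does the generated C source compile?"""
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(
        ["gcc", "-fopenacc", "-c", path, "-o", path + ".o"],
        capture_output=True,
    )
    return result.returncode == 0

n_samples, n_pass = 10, 0
for _ in range(n_samples):
    out = generator(PROMPT, max_new_tokens=256, do_sample=True)[0]["generated_text"]
    if compiles(out[len(PROMPT):]):  # strip the echoed prompt
        n_pass += 1

print(f"compile pass rate: {n_pass}/{n_samples}")
```

A compile check is only the bluntest of the metrics she mentions; judging whether a generated test actually exercises the directive and produces the right answer requires running the binary against a known result.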
Sarah Webb 11:30
So overall, what are you most excited about regarding AI for science?
Sunita Chandrasekaran 11:36
I look at it as a tool that will help me create questions that I’ve never thought about. A very quick example is a plasma physics project. We are working with the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), with the Dresden folks in Germany. They are a physics group; they are a computational science group as well. And there’s a project that kicked off in 2019, and we are still at it. It’s a plasma physics code, basically. And we were trying to use an invertible neural network, for example, to create a partial reconstruction of datasets and try to answer some questions. And those questions have never been answered before. Nobody has thought about this before. It’s like mapping the output to the input. And it’s an ill-posed problem, which means: when an egg is scrambled, you can never bring the egg back to its original shape. It’s impossible, right? That’s the ill-posed problem. Now I’m trying to find a point in the scrambled egg and trying to map it back to the original shape of the egg. You can imagine how complicated that is. And now throw some plasma physics into the whole storyline. And, you know, imagine terabytes and exabytes of data, and you don’t even have enough storage. So we bypassed storage. We learned on the fly. We tried to remap the output to the input. We did some partial reconstruction. We couldn’t even think of this prior to AI for science. Questions like this are starting to make you just wander out of the box. Things you had never thought about getting done, because you didn’t know how or didn’t have the tool, are starting to make sense. But it’s also opening questions that you never felt like asking; the datasets and the tools are now making you pose questions that you have never thought about. I find that part extremely fascinating.
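The trick that makes “mapping the output back to the input” feasible is that an invertible neural network is assembled from layers whose inverses are exact by construction. Here is a toy PyTorch sketch of one such building block, an affine coupling layer in the RealNVP style; it illustrates the mechanism only and is not the HZDR collaboration’s actual model.

```python
# Toy affine coupling layer: the building block of invertible neural
# networks (illustrative sketch, not the collaboration's code).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.half = dim // 2
        # Small net predicting a scale and shift for the second half of
        # the vector from the first half, which passes through unchanged.
        self.net = nn.Sequential(
            nn.Linear(self.half, 64), nn.ReLU(),
            nn.Linear(64, 2 * (dim - self.half)),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=1)

layer = AffineCoupling(dim=8)
x = torch.randn(4, 8)                        # stand-in for physics parameters
x_back = layer.inverse(layer(x))             # run the map forward, then back
print(torch.allclose(x, x_back, atol=1e-5))  # True: the inverse is exact
```

Stack enough of these layers, alternating which half passes through, and you get a network that can be trained on the forward direction (simulation parameters to observables) and then run backward, which is what lets a model learn on the fly from streaming simulation data without storing everything.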
Sarah Webb 13:28
What concerns you?
Sunita Chandrasekaran 13:30
Oh, boy, that’s a loaded question. I think the very obvious one is the ethical angle, right? Most of the models, if you’re not training from scratch, are somebody else’s models, trained on a certain dataset that I have no freaking clue about. So it’s a black box at the end of the day, and I don’t know what went in. And if I’m expecting a certain output and it doesn’t give me the output, it makes me wonder, what was the input, right? So I think that’s a billion-dollar question, and it keeps me awake at night, because we are using these tools and models for many different sciences. We are using them for student data; we are using them for healthcare data; we are using them for finance data, like stock market prediction, right? Or trying to understand if a pediatric cancer is going to relapse, and we are trying to do data analytics. It’s scary to completely rely on the tool. And in the many conversations I have with so many people on a daily basis, the idea of having a human in the loop is so critical, so important. I cannot emphasize it enough. And when people ignore it and rely on the answers out of the AI model, I think that’s the most concerning.
Sarah Webb 14:47
There has been, over the years, this strong interplay among academia, the national labs and industry, in HPC, and now it’s continuing into AI. And you’ve had some experience in all of these sectors.
Sunita Chandrasekaran 15:00
Yes.
Sarah Webb 15:00
So basically, I wanted to ask you some questions about, you know, what you’ve learned in each of those spaces. And since your core work is in academia, how do you think your academic work is supporting the development of these technologies of tomorrow?
Sunita Chandrasekaran 15:15
Staying on top of state-of-the-art research, you know, staying on top of what the current state of the art in this space is and what questions it opens up in the very near future, is what we constantly think about. Meaning, I have Ph.D. students, and I need to make sure they are still in the game five years from now, when they are graduating and going out the door, right? So what does that mean? What does that look like? What GPUs am I using? What CPUs am I using? What machines am I using? Am I up to date? I think that’s what I constantly keep thinking about, and that is also my constant advice to universities and whomever I’m speaking to within the academic environment: it’s important to not fall behind, especially in the AI space. And if I haven’t read up on the day’s findings on AI by the end of the day, I feel like I’m several years behind. So I think it’s important to stay up to date. That’s key. With the university trying to put new machines together and trying to bring in new collaborations, and talking to domain scientists, meaning scientists working on different scientific domains, I think they help us drive some of those research questions that will, in turn, help them solve their problems. But it keeps us traditional computer scientists on our toes.
Sarah Webb 16:37
It’s really crazy to think about the fact that missing a day’s worth of news could make you feel that far behind.
Sunita Chandrasekaran 16:44
Constantly.
Sarah Webb 16:46
So let’s talk about the national lab part of this. You’ve held a position at Brookhaven and have worked with the Department of Energy and the national labs. What do you see as the role of national labs, you know, big government research, in this AI landscape?
Sunita Chandrasekaran 17:02
Yeah, and that’s been shifting tremendously, right, in the last several months to the last couple of years.
Sarah Webb 17:08
Absolutely.
Sunita Chandrasekaran 17:09
Yeah. I feel like every solicitation I read has an AI component needed, right? Be it DOE, be it NSF, be it NASA, be it DoD, be it private foundations. And I think everybody is trying to understand what is in it for them if they try to unlock what AI can provide them with. And depending on the impact, you see more of the, you know, solicitations requiring the use of AI than otherwise, right? I know NIH is putting out so many solicitations where they’re trying to understand the impact of AI. NSF definitely is on it, and so are the other funding organizations, because this also pushes the boundaries of what these tools can do, because AI is starting to be pointed at many different types of problems, not just mundane ones, right? Which is only going to retrain some of these models in many different ways. And I think it is becoming a norm, almost literally. I look at any solicitation, including private foundations, you know, Simons, Sloan, any foundation you see, and there is this component. So it only tells me that they’re trying to understand what is in it for them and what should be in it for them, and what feedback they can give to these model creators on what they want to see in these models in the near future.
Sarah Webb 18:33
So let’s talk a bit about the industry piece, because obviously that’s important for everything from building systems all the way up, and you’ve also spent some time at HPE and NVIDIA. I’m curious what you’ve learned from your time working in industry and how you are bringing those lessons back into your day-to-day work.
Sunita Chandrasekaran 18:55
Yeah, it was a different world. Completely different world. Well, I have a ton of collaborations with industry, but being an employee on the inside, you know, with an email ID and a badge and everything, was a very different experience. And I must say, it was a very welcoming experience, right? The core thing I took away is connections, meaning I met more people than I had been in touch with, which was fabulous. Both HPE and NVIDIA made sure to bring me into many of their discussions and exploration topics. I gave multiple talks at both companies on what we do, what I would like to explore with the two companies in different capacities, and also what feedback I have, right? Because in the last decade, I worked with solar physics applications, plasma physics, biophysics, nuclear physics, and now we’re looking at a lot of non-STEM problems. And often I wonder, okay, what should the next GPU architecture look like? There is a certain memory hierarchy we see in the current GPU, but do we need it? Do we need mixed precision? How do we want the mixed precision to be distributed? So on, so forth, right?
Sunita Chandrasekaran 20:07
When you look at so many different domains, it opens up your thought process on what the target platform should be, rather than building a target platform and having all of us use it. I think those kinds of conversations were very enlightening. And the fact that I met a bunch of teams and was able to learn more about their tools, because we download a tool in our group, and we use their profilers for performance analysis, and we use what we use, right? We use what we know. And then I realized, oh my god, there are 90% of the tool’s features I didn’t even know about, and now I find the tool even more cool, because now I know it can answer more questions. And how the heck didn’t I know about this, right?
Sarah Webb 20:52
Wow, yeah.
Sunita Chandrasekaran 20:53
Yeah, and these things don’t just pop up from you reading the manual, right? The manual can only do so much justice; you have to, like, play around. I think those parts I really loved, because now I can bring it back to my group and say, hey, you know what? This profiler can open up these many features. And then these wonderful students go open it up further and come back and tell me, oh, did you know that there was this feature? I’m like, no, I didn’t. So then we go back and forth. And from some of the connections we made, we also started running workshops together, like a Python workshop I run with one of my very good contacts at HPE, for example, that stemmed out of meeting with them. And again, I gave a bunch of talks at both companies, and a lot of questions came out of those talks that made me think differently. NVIDIA was phenomenal. I went to their headquarters in San Jose and met with architects, met with their fellows, met with their research groups, and then we had lots of Zoom conversations as well. I just realized how different that world is, but I also realized how they are trying to connect with academia, right? You don’t see that from the outside. It feels like a closed shop, but when you actually work closely with them, there are many avenues that can be created for these companies to work closely with the academic environment, and they’re open to it. So the question is how to tap those.
Sarah Webb 22:15
Talk to me a little bit about how all that comes together in this current HPC-AI space. What do you see as the key ingredients for the strategic partnerships that are going to push us forward?
Sunita Chandrasekaran 22:26
Yeah, I think the translational research is a big piece, meaning both ends, right? We communicate what we have done to them, and they give us feedback, and vice versa. And that’s critical. Just, you know, publishing a paper and giving a talk at a conference and expecting the other audience to pick it up, I think that’s not enough, because I’ve often felt that if I target a bunch of folks to make them aware that we are working on this and we have a paper published, they actually engage with you, and the research goes forward with their input. So this translational angle among academia, the labs and industry is key, and that communication is very, very critical. Career pathways have been established, and they’re pretty good career pathways. There is the SULI program by DOE for recruiting undergraduate students. There are co-op programs, internship opportunities, and I feel like these are important, because at least in my group, every summer, students go out to other places, right? And especially once they finish their Ph.D. qualifiers, I intentionally send them out, because I want them to pick up pieces from each of these categories and come back with newer ideas, and facilitating these opportunities is awesome. And when they do that, they also learn about mission-critical problems, which is important, right? Because I could be sitting in my office and trying to create new research questions, and we will excel at it, no questions there. But am I working toward a mission-critical problem? Am I working toward an impactful problem that will solve a question that’s lingering because nobody has looked at it? So who’s giving me these questions, right? And I guess that’s why we’re creating these different pathways, and it goes both ways. We can’t be waiting for them to come to us. We need to go to them as well. That’s what I’ve realized very clearly: it’s a two-way dialog. You engage, they engage, vice versa. But don’t hesitate to take the first step. And if it is you that’s taking the first step, so what? Take it. That’s my way to do this.
Sarah Webb 24:32
So I want to loop back to talking about ethics, your involvement with the state AI commission. You’re the vice chair of the state of Delaware’s AI commission. What is that work like?
Sunita Chandrasekaran 24:47
Yeah, I’m laughing because there’s so much to share. But the long and the short of it is, I’m glad Delaware created the commission. Within the commission, we have created a couple of subcommittees to look into what it looks like to work with state datasets on an AI infrastructure that can, once again, be opened up. Meaning, where a dataset is not sensitive, we use open-source models, push these models a little farther and answer some questions. And if it is a sensitive dataset, how about we try to answer some of those questions without compromising the sensitivity of the dataset through subscription-based models? Instead, use a system built dedicatedly for some of these problems we are not able to solve right now, because of the sensitivity around these datasets, and we don’t have such a system.
Sunita Chandrasekaran 25:46
There’s also a sandbox initiative that our governor, Matt Meyer, signed very recently, which is designed to accelerate innovation while also maintaining the state of Delaware’s hallmark of responsible corporate governance, right? Delaware is usually known for five C’s. Corporations is one of them. Chicken is one of them, believe it or not; I learned this recently. Cancer care is another one. Cars, because we used to have a huge Chrysler plant, and so on. But the bottom line is corporations. We are so well known for corporations. So what does it look like if we put together a sandbox regulatory framework to evaluate some of these incoming companies? And how do we accelerate innovation while not compromising on responsible governance?
Sunita Chandrasekaran 26:38
There’s also a training subcommittee, as critical as it sounds, where a bunch of folks are looking into how we train Delaware employees and professionals. You know, what does training look like? Again, it’s an ocean, and you can start from anywhere. The state of Delaware also signed certification opportunities with OpenAI very recently, and it’s public news. Google works with the Small Business Development Center as well. So a lot of these big tech companies are trying to engage with the state of Delaware. I ran a hackathon in August, which was fintech-themed, and we had Google, Databricks and NVIDIA boots on the ground, offering platforms as well as mentors. But the state was also very interested to see what we could do for the state’s case studies. That’s another round of hackathon, which we will be doing sooner rather than later; details TBD. But what does it look like to bring in state datasets? So yes, I’m glad there is a commission in place, with lots of critical initiatives, like the model sandbox and the training subcommittees, with a goal to upskill and reskill professionals in the state in their respective divisions. Take labor, take transportation, take DTI, take homeland security: everywhere there is a pocket of data waiting to be analyzed and modernized.
Sarah Webb 27:58
A quick note: DTI is Delaware’s Department of Technology and Information, and we’ll have links to more information about these state-level AI projects in our show notes at scienceinparallel.org.
Sunita Chandrasekaran 28:14
So the challenges are manifold, right? As a commission, you can provide guidelines; you can’t do stuff. So I often have to remove my doer hat, which is very complicated, very challenging, and think only in terms of policies and guidelines. It’s a learning platform, but we have been moving very fast, and it’s not even a couple of years since the commission was created.
Sarah Webb 28:38
So wearing that policy hat, having to think in that way part of the time, how has that shaped how you then go back and think about doing the fundamental software-based research that you’re working on?
Sunita Chandrasekaran 28:57
Yeah, I think that’s an evolving question. I’m starting to realize what different problems the state is trying to solve, whom they’re working with, whom we should be working with, what a technology and innovation entity or unit in the state is, and what they do. And when I come back to my core group and the institute at UD, I start looking at it as: oh, did we define guidelines? Did we define guardrails? Did we define a framework? Do we have a body of people thinking about this, or are we just doing this ad hoc, right? So the commission makes me think toward structuring some of the things we would just do. I think that’s the positive angle that I can leverage from the commission, because it’s about policy, right? Many things we do because we want to do them, and we want to see what the outcome is, and we just go do it. But then, did we think about the guardrails? Did we think about the regulation? Did we think about the ethical angles? So I think it pushes me back a little bit to think: are we doing this right? You know, is it a social good? Are we compromising something? Have we spoken to enough people to make sure we are doing this in an ethically correct way? Those are some of the things I’m starting to question in some of my core research as well, and I pose these questions to my groups internally at UD. So that’s really useful, right? It’s very useful.
Sarah Webb 30:28
So obviously you have all of these technical skills to do what you do in computing, but you’re doing all of this new work in policy, and you’re working in all these different sectors. Talk to me a little bit about the other skills you’ve developed along the way to do either interdisciplinary research or policy work. What are the extra skills one might need to develop to be successful in this translational space that you’re talking about?
Sunita Chandrasekaran 30:59
Oh, it’s such a phenomenal question. You’re making me think so deeply here. I feel like, along the lines of all that’s happening, it’s the constant creation of awareness of what others are doing. I find that a very powerful tool, meaning not just within the country. Look at the European laws, right? The EU’s AI laws. They are blueprints. How are they approaching these problems? How is Australia approaching this problem? How is Asia approaching this problem? Originally, I was not thinking along those lines, because I had tunnel vision on what we needed to get done. But given the variety and the breadth of topics on the table, it makes you go find relevant solutions. Meaning, has somebody solved this? Often not, because it’s an evolving phase for everybody, right? And then it makes you think: how do you want to solve it? So where do you go for resources? What do you read, right? What do you explore? So we often do small prototypes within my group if I want to test something out. Ideally it would be done overnight; obviously, that’s not possible, so it takes about a month. But we run some quick evaluations, run some quick tests, to see what worked and what didn’t.
Sunita Chandrasekaran 32:15
At the same time, we have these case studies from so many different colleges, right? It’s not just the College of Engineering anymore. It’s the College of Arts and Sciences, the College of, you know, Education, and it’s coastal science, for example. So many different disciplines. When they throw their problems at you, you start learning their field a little bit. We started getting engaged with the department of art conservation at UD, and we ended up digging into historical data on books and arsenic and mercury and other kinds of substances in these different materials, which have led to the deterioration of the books. And they want to know how to preserve some of these art materials going forward. And that’s the kind of data we have never seen before. Wow. So we visited the Winterthur Museum to better understand: what are you talking about? And they opened up their labs, and they showed us marbles. They showed us, you know, an Italian marble carved sculpture that a master’s student in the department of art is putting back together, piece by piece. That was a year-long project, and I thought you could just glue the two pieces together. But it’s not that easy.
Sunita Chandrasekaran 33:21
You know what I mean? And this is data, data to be translated into something that the tools can understand. It’s not a bunch of words, but it’s data, right? How do you convert it into a language that the AI models can understand? What does that even look like? Leveraging information gleaned from one discipline and applying it to another has been a game changer. Last year, we got a large NSF grant, again an interdisciplinary grant, focusing on political, coastal and fintech science, basically to modernize many of their datasets, analyze their datasets, do data engineering and try to explore some of the unanswered questions on the table. And one of them entailed looking into Russian politics and trying to understand their messaging, trying to see what it meant over a period of years. That’s just one country’s political language. Now throw a bunch of other countries’ political language into the mix and see how they define their government, for example, right? You can’t do this without AI tools.
Sarah Webb 34:22
Wow.
Sunita Chandrasekaran 34:23
Yeah, so just trying to learn the other side of the coin, which is not computer science… Call it science communication. Call it just learning a different discipline and bringing it back to your field. I have found that a very special and precious skill that I didn’t have, and I’m enjoying learning it.
Sarah Webb 34:42
That just sounds like so much fun.
Sunita Chandrasekaran 34:44
It’s exciting.
Sarah Webb 34:48
No, I mean, just to get to go from art conservation to, you know, coastal science and learn all kinds of crazy stuff, that, to me, sounds like living the dream.
Sunita Chandrasekaran 35:02
Yeah, if I get four hours of sleep a night, this is why.
Sarah Webb 35:07
Is there any other nugget of advice that you would pass along to other researchers who are, you know, either early career or interested in being more translational in their computer science work?
Sunita Chandrasekaran 35:22
Oh, yeah. We recruited a couple of research software engineers; the acronym is RSE, and I think that defines this kind of profession: you apply software engineering to research. It could be anything, right? Software engineering skills applied toward any research domain, and also vice versa. And funnily enough, or maybe it’s not funny, maybe it’s fitting, the two RSEs I recruited for the NSF project are physicists. You would think they were computer scientists. They are not. And just the twist of how we recruited two physicists to work on software engineering problems already speaks volumes about what you want to prepare yourself for, right? This is the interdisciplinary science field. That does not mean looking into newer algorithms or newer software techniques is any less critical. But I think the main and very important thing, or at least this is what I tell my students, is to be aware of what is coming a year, two years from now.
Sunita Chandrasekaran 36:27
You know, it’s easy to look behind, to look back. It’s very difficult to look forward, because you don’t know what’s coming, but you have data from the last many years of what has come up. So you’re not shooting in the dark, right? You’re still going to miss many, many points. But what are we looking at in 2028 in the AI space? What is it going to look like? Have you imagined it? Have you thought about it? All of us will have crazy ideas, depending on our backgrounds. But can we envision that into a reality? Can we turn this to our advantage, but carefully, with humans in the loop? What are these crazy problems that we can solve looking ahead? And what are the innovations we need to do along the way? So whom should we speak to? Who are the point people? And what variety of point people? You know, I always love to have multiple mentors, role models, because I think that’s what has helped my career, because different people bring different things to the table. And then you have a bucket full of ideas from people with various backgrounds, and now you can think about what it means for you. So, yeah, I don’t know if it helps, but that’s my two cents.
Sarah Webb 37:38
Well, I think that’s a great way to wrap up. Sunita, thank you. This has been such a pleasure talking with you today.
Sunita Chandrasekaran 37:43
Thank you very much, Sarah. Your questions were awesome; they made me think so hard and so deep. So I appreciate your questions a lot. Thank you very much.
Sarah Webb 37:53
To learn more about Sunita Chandrasekaran, the First State AI Institute at the University of Delaware, Delaware’s state AI commission and other topics we discussed, please check out our show notes at scienceinparallel.org. Science in Parallel is produced by the Krell Institute and is a media project of the Department of Energy Computational Science Graduate Fellowship program. Any opinions expressed are those of the speaker and not those of their employers, the Krell Institute or the U.S. Department of Energy. Our music is by Steve O’Reilly. This episode was written and produced by Sarah Webb and edited by Susan Valot.
