Artificial Intelligence and Climate Change: Do the Work and Meet the People

One of today’s hottest areas of computational research, artificial intelligence, could help scientists build better solutions for one of global society’s steepest challenges. Three early career computational scientists – Priya Donti, Kelly Kochanski and Ben Toms – talk about AI’s potential for understanding and predicting climate shifts, supporting strategies for incorporating renewable energy, and engineering other approaches that reduce carbon emissions. They also describe how AI can be misused or can perpetuate existing biases.

Working at this important research interface requires broad knowledge in areas such as climate science, public policy and engineering coupled with computational science and mathematics expertise. These early career researchers talk about their approaches to bridging this gap and offer their advice on how to become a scientific integrator.

You’ll meet:

Priya Donti is a Ph.D. student at Carnegie Mellon University, pursuing a dual degree in public policy and computer science, and a fourth-year DOE CSGF recipient (at the time of recording). She is also a co-founder and chair of the volunteer organization, Climate Change AI, which provides resources and a community for researchers interested in applying artificial intelligence to climate challenges. Priya was named to MIT Technology Review’s 2021 list of Innovators Under 35. Read more about Priya and her work in the 2021 issue of DEIXIS.

    Kelly Kochanski completed a Ph.D. in geological sciences at the University of Colorado, Boulder in 2020 and works as a senior data scientist in climate analytics at McKinsey & Company. Kelly was a DOE CSGF recipient from 2016 to 2020, and her graduate research was featured in the 2020 issue of DEIXIS. She also is profiled in the 2021 issue as one of this year’s recipients of the Frederick A. Howes Scholar Award.

    Ben Toms also finished his Ph.D. last year at Colorado State University, studying atmospheric science, and was a DOE CSGF recipient. He has founded a company, Intersphere, that provides weather and climate forecasts up to a decade into the future.

    Transcript

    Sarah Webb  00:00

    Hello, I’m your host Sarah Webb, and this is Science in Parallel, a podcast about people and projects in computational science. In this episode, we’ll discuss how artificial intelligence can help address climate challenges. Machine learning, a subset of AI, has become a popular tool for identifying patterns in complex datasets. And climate change is an urgent problem with mounds of data. Bringing these two fields together offers the opportunity to improve climate forecasts, match renewable energy production with consumption, boost energy efficiency and much more. But applying these tools appropriately requires a broad understanding of computer science, energy systems, climate science and public policy.

    Sarah Webb  00:50

    This episode’s guests are three early career computational scientists who have focused their work on AI and climate change. All of them received Department of Energy Computational Science Graduate Fellowships – commonly called CSGF – to support their graduate research.

    Sarah Webb  01:07

    Priya Donti is a PhD student at Carnegie Mellon University pursuing a dual degree in public policy and computer science. She is also a co-founder and chair of the volunteer organization, Climate Change AI. Kelly Kochanski completed her doctorate in geological sciences at the University of Colorado Boulder in 2020, and currently works as a senior data scientist in climate analytics at McKinsey & Company. Ben Toms also finished his Ph.D. last year at Colorado State University, studying atmospheric science. He has founded a company, Intersphere, that provides weather and climate forecasts up to a decade into the future.

    Sarah Webb  01:50

    My startup question for the three of you is: could each of you tell me a bit about how you use artificial intelligence to work on climate change issues? Maybe I’ll start with you, Priya.

    Priya Donti  02:02

    I work on applications of artificial intelligence for the electric power grid. And so fundamentally, as we put more renewable energy, solar and wind, for example, onto the power grid, we have to keep optimizing the power grid under more uncertainty. So what I mean by this is that at every moment on the power grid, you have some amount of power going into the grid that is being produced by a solar plant, or a wind plant, or a coal plant, or whatever. And then you also have some amount that’s being consumed either by a consumer like us, you know, turning on a light or something like that, or also through just resistive losses on the power grid. So the amount of power that is put into the grid has to exactly equal the amount of power that’s consumed on the power grid at every single moment.

    Priya Donti  02:47

    And this becomes more and more challenging as we build power grids that have more renewables and more low-carbon energy resources. And so what I do in my work is I use artificial intelligence in a couple of ways here. One is to build better forecasts of electricity supply and demand in order to reduce the uncertainty in that part of the equation. And I also think about how we can take our existing power system optimization algorithms, the way we actually manage the grid, which often relies on basically very slow and very large optimization algorithms. How can we use techniques from AI to actually slim down or speed up those particular models in a way that allows you to make more real-time decisions and more real-time balancing of this power grid, but at the same time preserves various properties of the power grid, like the physics of the power grid that definitely need to be preserved for that optimization solution to actually work? Yeah, Kelly, do you want to go?
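
To make the balance constraint Priya describes concrete, here is a minimal dispatch sketch in Python: pick the cheapest mix of generation subject to supply exactly matching demand. The plant names, costs and capacities are invented for illustration, and this is only a toy version of the kind of optimization she mentions, not her actual models.

```python
# A minimal sketch (illustrative only): lowest-cost dispatch with the
# "power in must equal power out" constraint from the conversation.
# All numbers below are hypothetical.
import numpy as np
from scipy.optimize import linprog

cost = np.array([20.0, 35.0, 90.0])          # $/MWh for wind, gas, peaker plants
capacity = np.array([150.0, 300.0, 100.0])   # MW available from each plant
demand = 400.0                               # MW that must be served this interval

result = linprog(
    c=cost,                                  # minimize total generation cost
    A_eq=np.ones((1, 3)), b_eq=[demand],     # supply must exactly equal demand
    bounds=list(zip(np.zeros(3), capacity)), # each plant limited by its capacity
    method="highs",
)
print(result.x)  # dispatch per plant, here [150. 250. 0.]
```

Real grid operations solve far larger versions of this problem with network physics included, which is where the learned forecasts and speed-ups Priya describes come in.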

    Kelly Kochanski  03:46

    Sure, sounds good. So I got interested in AI, and specifically in machine learning, for earth and climate science, because it’s very clearly where the future of the field is going. We have enormous datasets. They’re growing all the time, as we start getting more satellite data and more observations about the earth. And with climate change, we also have more urgent questions that we need to answer in order to figure out where we’re going and start making predictive models.

    Kelly Kochanski  04:14

    So in my own work, I’ve focused a fair amount on using machine learning to build on physical models of the earth and climate system, working with sea ice models, for example, and sand dunes as well, to see if we can train a machine learning model to learn how our existing physics-based, rule-based models work, and accelerate that and use it to make faster, more global predictions about how the earth is going to evolve. And then finally, in my current work at McKinsey & Company, I’m working on using comparable predictions from a number of sources about future natural hazards to inform the risks that are facing nations and companies around the world and give them better information so that they can better prepare for the hazards of climate change.
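
As a rough illustration of the surrogate-modeling idea Kelly describes, the sketch below trains a small neural network to emulate a slow, rule-based physics update and then uses the emulator in its place. The "physics" here is a toy diffusion step with synthetic data, standing in for her sea ice and dune models rather than reproducing them.

```python
# A toy emulator sketch (not Kelly Kochanski's actual models): learn a fast
# ML approximation of an "expensive" physics update from simulated data.
import numpy as np
from sklearn.neural_network import MLPRegressor

def physics_step(state, alpha=0.1):
    """Stand-in for a slow rule-based model: one step of 1-D diffusion."""
    return state + alpha * (np.roll(state, 1) - 2 * state + np.roll(state, -1))

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 32))                 # random states on a 32-cell grid
y = np.array([physics_step(s) for s in X])      # outputs of the "slow" physics

emulator = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
emulator.fit(X, y)                              # learn the update from data

test_state = rng.normal(size=(1, 32))
error = np.abs(emulator.predict(test_state) - physics_step(test_state[0])).max()
print(f"max emulation error on one test state: {error:.3f}")
```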

    Sarah Webb  05:00

    All right, Ben, I think you’re up.

    Ben Toms  05:02

    So I just wrapped up my Ph.D., and the focus of that work was actually trying to use AI to discover patterns in the climate system that scientists hadn’t discovered before, simply because, you know, the methods that they’ve been using, while advanced, weren’t well-tuned to the specific problems they were tackling. So that’s what I focused on for my Ph.D.: this concept of interpretable or explainable AI.

    Ben Toms  05:28

    And I had the opportunity to work with somebody on my committee who did work on decadal forecasting. So the concept there is, you know, weather forecasting covers, say, from today to two weeks out; then there’s seasonal forecasting, which is like two weeks to a year; and then decadal forecasting is from a year to, say, 10 years out. And it gets really complicated on those timescales. Because with climate change, with what humans are doing to the world, you can think of it in very simple terms: we have a data point for temperature today and a data point for temperature in 10 years.

    Ben Toms  05:59

    And it’s pretty linear between those; you can kind of just draw a straight line, and that’s generally how the temperature will evolve. But we have all these wiggles on top of that line, too, and that’s the natural variability we call internal variability. So you have to get these wiggles and the straight line correct if you’re going to try to prepare for climate change in the next five to 10 years. So that’s where I started Intersphere. And that’s what I’ve been focused on for the last six months or so: using AI to be able to predict those wiggles on top of that straight line.
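
Ben’s "straight line plus wiggles" picture can be written down directly: fit a linear trend to a temperature series and treat the residual as internal variability. The numbers in the sketch below are synthetic and purely for illustration.

```python
# Decompose a (synthetic) temperature series into a linear forced trend and
# the "wiggles" of internal variability that decadal forecasts try to predict.
import numpy as np

years = np.arange(2000, 2025)
rng = np.random.default_rng(1)
temps = (0.02 * (years - 2000)                      # slow forced warming
         + 0.15 * np.sin(2 * np.pi * years / 7)     # made-up internal variability
         + rng.normal(scale=0.05, size=years.size)) # noise

slope, intercept = np.polyfit(years, temps, deg=1)  # the "straight line"
trend = slope * years + intercept
wiggles = temps - trend                             # what sits on top of the line

print(f"warming trend: {slope * 10:.2f} C per decade")
print(f"typical wiggle size: {wiggles.std():.2f} C")
```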

    Kelly Kochanski  06:27

    Ben, I just want to jump in here and say that your work on interpretable AI, in particular, is just really cool. It’s the direction that our science should be going right now.

    Ben Toms  06:38

    Yeah, I appreciate that.

    Sarah Webb  06:39

    Why don’t both of you talk a little bit about interpretable AI, and why that is so important, and, Priya, where that’s important for your work? I’d love to hear from you as well.

    Kelly Kochanski  06:49

    Ben I think you should lead this since it’s your area.

    Ben Toms  06:52

    Okay. Yeah, I guess when I started thinking about interpretable AI, there was a lot of excitement around machine learning and neural networks and deep learning and stuff like that. And people were starting to use it to actually make forecasts in weather and climate. But one of my concerns there was, typically when you make weather and climate predictions, you use these, you know, numerical models that have the actual physics of the world encoded within them. So you can kind of trust them. There are errors within them; they’re imperfect, but they’re trying to actually represent the physics. And then you go to these very data-driven models, where that physics isn’t always fully baked into the model. So if you can interpret and understand why these data-driven AI models are making their forecasts, then if you actually use that type of thing to make a decision, you can back it up with scientific logic or domain science, stuff like that. So that’s why it’s important in my eyes.
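
One simple, generic way to ask why a data-driven model is making its forecasts is permutation feature importance: shuffle one input at a time and see how much the skill degrades. The sketch below uses synthetic data and made-up feature names; it is a stand-in for, not a reproduction of, the explainability methods Ben developed in his thesis.

```python
# A generic post-hoc interpretability sketch: rank which inputs a model
# actually relies on. Data and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # e.g. pressure, humidity, wind
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["pressure", "humidity", "wind"], result.importances_mean):
    print(f"{name}: importance {score:.2f}")   # which inputs drive the forecast
```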

    Kelly Kochanski  07:43

    And building on that a little bit, I think the largest barrier that I’ve seen to adopting AI for a lot of climate change problems has been in that interpretation and understanding area. You see this a lot with scientists who worry about replacing the models they trust, that have perhaps been developed by the community for 20 or 30 years, with something that looks a little bit more like a black box, where they don’t necessarily understand what’s going on, and where we have fewer tools for validating it. And I see it likewise in consulting, in industry, where some of the most common questions we get from clients are: How do you know this? How do we know it’s going to work like this? What are the assumptions that went into your models? What scenarios does it cover? What scenarios is it going to break on? And in both of these areas, having a lack of interpretability can really reduce people’s trust in machine learning models. And that, in turn, dramatically reduces the number of places you can use them.

    Priya Donti  08:41

    So in the electric power sector, I think interpretability comes up in a couple of different ways. One of them is, when you’re a power system operator, if you do something wrong, if there’s a blackout on the grid, you have to explain to the regulator why this happened, right? Why is it that a ton of people lost their power? And if you are using, for example, a machine learning model to make a forecast of solar or wind, and that forecast being wrong is the thing that causes you to mismanage the grid, then there’s basically a lot of interest in, and push from, regulators to say, I want you to be able to tell me why. And so this regulatory environment has led to some interest among power system operators in how we actually create power system forecasts that are interpretable.

    Priya Donti  09:34

    But one thing I’ll bring up: Cynthia Rudin wrote an article that made a big splash here, basically arguing that especially in high-stakes decision-making contexts, we shouldn’t be creating black-box models and then trying to explain after the fact what they were doing. We should try to build models that are interpretable from the get-go. And so this ties into some of the work that I’ve been doing, which is trying to bake in, for example, existing physics and existing domain knowledge into the way we actually construct machine learning models, so that we’re not just constructing a model and back-explaining it. Instead, we’re building a model that in some sense has an internalized notion of physics, and we know how that model works, at least to an extent and at least in terms of where that physics is already baked in.
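
One way to read Priya’s point about building interpretability in from the start is a "physics plus learned correction" pattern: a known physical baseline carries the structure, and the network learns only a bounded adjustment on top of it. The sketch below illustrates that pattern in PyTorch; the baseline and architecture are assumptions for illustration, not the specific models from her papers.

```python
# Illustrative "physics baked in" pattern: output = physics baseline + a small,
# bounded ML correction, so we know by construction how far the model can
# stray from the physics. Baseline and sizes here are hypothetical.
import torch
import torch.nn as nn

def physics_baseline(x):
    """Stand-in for known physics: persist the most recent observation."""
    return x[:, :1]

class PhysicsPlusCorrection(nn.Module):
    def __init__(self, n_inputs=4, max_correction=0.5):
        super().__init__()
        self.correction = nn.Sequential(
            nn.Linear(n_inputs, 32), nn.ReLU(), nn.Linear(32, 1), nn.Tanh()
        )
        self.max_correction = max_correction   # cap on the learned adjustment

    def forward(self, x):
        return physics_baseline(x) + self.max_correction * self.correction(x)

model = PhysicsPlusCorrection()
x = torch.randn(8, 4)              # batch of synthetic feature vectors
print(model(x).shape)              # torch.Size([8, 1])
```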

    Sarah Webb  10:24

    In other areas of AI, there are all sorts of concerns about bias. Are there any challenges in those areas that you have to keep a lookout for as you think about building models?

    Kelly Kochanski  10:36

    Priya, I bet you want to jump in about bias and fairness here.

    Priya Donti  10:40

    I think, in general, the requirements we have around interpretability, fairness, all of these kinds of things are really dictated by the real-world contexts that we actually apply models in. And so for those of us who are working on these technological innovations, I think it’s our job to really listen and really understand what those requirements are. One thing I’ll note is that a lot of these concerns around fairness, interpretability, etc., sort of assume that a model is going to be used and that it’s going to be used to solve a particular problem. And I think stepping back from that framing is also really important: thinking about why am I using AI and machine learning in this context? And is it the thing that’s best suited?

    Priya Donti  11:25

    Because AI and machine learning are often accessible to entities with more privilege and power in society, is the way in which I’m using AI entrenching existing power structures and inequities in society? Making your model fairer won’t necessarily deal with that, right? Thinking about whose problems you’re solving and who you’re actually working with will. And so I think being intentional about understanding when something lies in the phase of problem scoping and stakeholder engagement, and when it comes down to, now that I’ve chosen the right problem and the right way to work on it, how do I make this model fairer or more interpretable? Keeping that whole scope in mind is really important.

    Sarah Webb  12:05

    So not just the fairness of the model, but the fairness of when it is chosen to be used and applied?

    Priya Donti  12:13

    Absolutely. And a lot of other people have said this, so this isn’t necessarily a unique thought. But AI is an accelerator of the systems in which it’s employed; it’s not necessarily going to make a system more objective or more fair. And so we have to be very conscious, basically, about what systems we are choosing to uphold or entrench or accelerate through the use of AI.

    Kelly Kochanski  12:36

    That’s a really elegant way to put it.

    Ben Toms  12:38

    Yeah, I love that.

    Sarah Webb  12:42

    Is there a case study you can offer, or a comparison: a situation where AI might be the most straightforward and best approach, and one where it might have some serious limitations?

    Priya Donti  12:55

    Yeah, so I can give two quick examples here. One example of where AI may, let’s say, be a good approach, but only in a certain context, which maybe gets at both sides of this, is the use of AI for smart heating and cooling management in buildings. Often you don’t have a perfect physical model of the building, but in a lot of cases you have a lot of occupancy and sensor data. And you can use this to optimize heating and cooling systems, which make up, I think, about half of building energy use, in a very dynamic way. That said, if you aren’t doing things like insulating your building, or taking very clear efficiency measures that are potentially not smart or that are less flashy, you might implement algorithms that are only well suited to a building that isn’t insulated. And when you put in insulation, it doesn’t work well, or you’ve done the fancy thing before doing the non-fancy thing that was more effective. So this is an example where AI can be well suited, but it depends on making sure that you’re doing the non-flashy things first.

    Kelly Kochanski  14:02

    I can chime in there that I’ve been house shopping recently, and I’ve seen a lot of advertisements for smart thermostats, and no advertisements at all to tell you whether or not the roof is insulated.

    Priya Donti  14:14

    Nice. Yeah, that’s a great example. And then, yeah, a quick one to emphasize this point about societal power structures that I talked about. When we talk about, for example, making agriculture more efficient with AI, we often think about things like precision agriculture: ways to enable the management of fields with large and heterogeneous sets of crops in a way that’s better for the land. But a lot of these depend on, you know, large amounts of machinery, large amounts of automation, that may only be available to large farmers and may not be available to smaller farmers. If we do this without thinking about it, we may end up in a situation where we’re really strengthening one set of stakeholders while leaving behind, and weakening, another set of stakeholders whom we also should be strengthening to build various forms of, for example, adaptive capacity to climate change. And so I think being explicit about, again, whose problems we’re solving and who we’re working with can be really important in shaping power structures in society.

    Kelly Kochanski  15:22

    I think this is a significant challenge with providing information for climate adaptation in general. Working in the climate information and natural hazard space, one of the most obvious challenges, first, is that getting the information in a tailored way can be costly, so it’s only available to some players, although hopefully that will be scaling up considerably in the next few years. And then acting on it is very costly as well. So there’s an argument that we’re doing good work by making it possible for anyone to adapt in a meaningful and considered way. But there’s also an argument that it’s going to lead to larger and larger distinctions between the haves and have-nots here, in terms of who can adapt. Now, the other trend that comes across really clearly in a lot of this is that the vast majority of serious climate change impacts are going to hit countries that are not particularly well prepared to adapt. And so we’re really increasing that gulf a little bit with a lot of these high-tech measures that are going to be available to mitigate those impacts in the countries where they’re smaller in the first place.

    Ben Toms  16:28

    I think I’ve experienced what both of you are talking about while trying to start a company, too. There’s a lot of pressure when you start a company, especially if you take external investment, to become profitable very quickly. And so one way to do that is to get these really huge customers that, you know, have a ton of money, but they already have a lot of money. And so if you’re addressing climate change with the people that already come with a lot of money, there’s the risk that the people that don’t come with a lot of money are being left out of the solution. And so, kind of back to what you were talking about, maybe you’re developing your AI-based system to address these challenges based on the system that people with a lot of wealth and power already have, but that might not be applicable to the people who don’t have wealth. So that’s something I’ve been grappling with, too.

    Kelly Kochanski  17:15

    I do think, on the positive side, AI has the option to make some kinds of information more scalable, and therefore more accessible. Like, if I look at flooding, for example: if you wanted to get a close-up, detailed look at your flood hazard even a few years ago, and you were, say, a city, you had to engage a small boutique team that would send multiple people out, probably working for months, in order to survey your flood hazard. And that’s expensive; that’s a lot of time for well-trained people. And now there are multiple companies that are starting to provide these assessments in a very scalable way, using machine learning algorithms to provide much more local information, in some cases completely for free. So I do think there are some examples where machine learning and AI can really improve accessibility.

    Priya Donti  17:59

    Yeah, totally agreed. And I think that underscores the idea that the way in which we leverage AI and machine learning is absolutely a choice. We can, in some sense, choose which sets of stakeholders we are actually supporting. And as Ben mentioned, there are lots of pressures that make that potentially a very hard thing to do, especially in a for-profit context, so addressing a lot of these contextual factors will be really important to shaping this overall equation as well.

    Kelly Kochanski  18:26

    It’s like you were talking about earlier, it’s about choosing which systems to accelerate.

    Sarah Webb  18:31

    So I would really like to hear from all three of you a bit about the sticking points at this interface. It definitely feels like AI can move certain things forward, but where are the sticking points in making that progress?

    Kelly Kochanski  18:49

    I think the most obvious one is going to be about skill availability. We have an increasing number of people around the world who are developing data science and machine learning skills right now. But a lot of them are developing them in relative isolation in computer science fields and applying them entirely in technical fields. And I think we’re only just starting to see a trickle of people moving back and forth between those fields, and starting to build those connections and bring that knowledge into places that have really complex, technologically challenging problems that don’t fit the exact circumstances that a lot of machine learning algorithms were developed for.

    Ben Toms  19:31

    Isn’t the Climate Change AI organization doing a lot of work to that end to try to unite the two groups?

    Kelly Kochanski  19:36

    I was trying to set Priya up, Ben.

    Sarah Webb  19:47

    I love the explicit setup. Go Priya.

    Priya Donti  19:49

    Yeah, no, absolutely. So Climate Change AI is definitely trying to help with some of these bottlenecks: things like, how do you build multidisciplinary teams and bring together people who have machine learning expertise with people who have expertise in various climate-relevant sectors, and also build up that expertise within each of those groups? And we also think a lot about, you know, what are the data bottlenecks? What are the funding bottlenecks? Where are the bottlenecks in just understanding? And I think things like this podcast, and a lot of the topics we’ve talked about today, are really important for understanding the context around a lot of these issues.

    Priya Donti  20:30

    And then one additional challenge, which I think is really difficult to crack, and I’d actually be really curious to hear to what extent this applies in the climate sciences as well, Kelly and Ben, is integration with legacy systems. For example, in electric power systems, you have this optimization software that is used to manage electric power grids. And that’s not something you can change overnight, both from a technical perspective and also because there are various procurement processes in place that actually get that software there. If you’re somebody who’s providing that software, you’re a huge company that has the ability to build out and maintain that very complex software; if you just create one algorithm and give it to a company, they’ll say, we can’t use this. So I think surmounting that particular challenge of integration with legacy systems is really tough, and not one that I’ve seen a great solution to as of yet.

    Kelly Kochanski  21:27

    Yeah, well, there is definitely a well-known problem in the climate sciences involving the difficulty of implementing machine learning algorithms in Fortran. And a lot of people who work in machine learning on a day-to-day basis generally can’t believe that this is a real problem that needs to be solved. But it’s a real challenge; there are a lot of things you can’t do if you have to rewrite a large part of your system from scratch in order to implement some new tools.

    Ben Toms  21:53

    Yeah, I think that’s a really critical problem for us too, because, at least for climate models, one of the main ways that machine learning and AI can help is that they can actually accelerate the model. So if you can take a part of the code that would typically take 100 seconds for every time step, and now it takes one second for each iteration, you accelerate your code significantly. And so there are a lot of groups that are actually trying to solve this. I’m not an expert on that by any means, but Vulcan is one that I can think of: they’re developing these parameterizations for climate models and are trying to figure out a better way to actually integrate Python and Fortran so that the community can help them out a little bit quicker.

    Kelly Kochanski  22:37

    I will say that I think a lot of this comes back to the talent problem as well. Because if you take, say, a newly trained, brilliant computer scientist, they’re not excited to go work with Fortran legacy code. It’s not what’s sexy in that field at the moment, and saying it’s for climate change helps a little bit, but it doesn’t get it all the way there. And I do think that there’s a bit of a values misalignment, or a difference of values, between what’s exciting from a cutting-edge technical front (although that in and of itself is a kind of value-laden term; nobody’s done Fortran machine learning algorithms, so that is cutting edge in a way) and what’s actually needed to solve a lot of problems.

    Ben Toms  23:18

    I think that’s why this bridging between domain scientists and computer scientists is so important, because you don’t necessarily have to have people who are total experts on both the machine learning side and the Fortran side, so long as the people who are experts in each have a good communication bridge. And so that’s something that I’ve been thinking about a fair amount, and something that the CSGF program actually helped me out with a lot: trying to figure out how you can communicate with both sides. Because, at least in my perspective, something that could work really well is having a lot of people that are really good at machine learning, say in Python, Julia, whatever they’re using, and then a lot of people that are really good at, in this case, climate science and Fortran, right? And then just having a few integrators that make sure that the communication between the two is really clear and efficient. So the integrators are really critical, but maybe, you know, you don’t need as many integrators as you do domain experts on each side.
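
For readers unfamiliar with the Fortran bridge the group keeps coming back to, here is one common way it is handled in practice: wrapping a Fortran routine with NumPy’s f2py so it can be called from Python-side machine learning code. The routine and module names below are hypothetical stand-ins for a real parameterization, and compiling requires a Fortran compiler such as gfortran on the system.

```python
# A tiny illustration of bridging Python and legacy Fortran with numpy.f2py.
# The Fortran routine here is a made-up stand-in, not real climate-model code.
import numpy as np
import numpy.f2py

fortran_src = """
subroutine relax(t, alpha, n)
  ! toy stand-in for a legacy parameterization: damp a field toward zero
  integer, intent(in) :: n
  real(8), intent(in) :: alpha
  real(8), intent(inout) :: t(n)
  t = t * (1.0d0 - alpha)
end subroutine relax
"""

# Build an importable extension module from the source (needs gfortran or
# another Fortran compiler available on the system).
numpy.f2py.compile(fortran_src, modulename="legacy_physics",
                   extension=".f90", verbose=False)

import legacy_physics  # the freshly built wrapper module

field = np.ones(8)
legacy_physics.relax(field, 0.1)   # call the Fortran routine; modifies in place
print(field)                       # [0.9 0.9 ... 0.9]
```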

    Kelly Kochanski  24:11

    Well, that’s a promising way to look at it, I think. The CSGF, I think, finds, brings together and trains more integrators than anywhere else I’ve been.

    Ben Toms  24:21

    I agree with that.

    Sarah Webb  24:23

    Well, I want to explore that a little bit: for each of you, how has the CSGF supported this particular journey of looking at climate change using AI?

    Kelly Kochanski  24:34

    I think the biggest one was just having the freedom to pursue opportunities. So in my case, about two-thirds of the way through my Ph.D., a friend of mine from undergrad came to me saying he had this phenomenal idea for writing a paper about the ways we could tackle climate change with machine learning. Priya was also a co-author on this paper and can talk about it as well. And I said, you know what, this sounds great, I think it’s really high impact, and I kind of dropped everything and just read about machine learning for the next six months. And there are almost no other circumstances as an employee or as a student where I would have had the freedom to do that and say, “You know what? These are the right people. This is the right time; it’s the right place. I’m going to go for that.” And it ended up being a really important project that’s had a big impact on a lot of people, and that also really shifted my career and opened up opportunities that I wouldn’t have imagined before. And so the CSGF really enabled me to grab that moment and say, “Yes, this is what I’m going to do.”

    Priya Donti  25:33

    Yeah, I just want to echo every single point in there. The CSGF fellowship has really been instrumental in allowing me to explore the space in the way that I wanted to.

    Ben Toms  25:43

    Yeah, I think it’s been pretty similar for me, too. One of the things is that I think the program really attracts these, maybe it’s not the right word, but challengers, in a way: people who want to push the envelope and question the status quo, and who are willing to take risks. And like Kelly was talking about, the fellowship gives you an opportunity to actually pursue those risks. So it’s a natural incubator for this type of thought. And that’s pretty inspirational.

    Kelly Kochanski  26:07

    I think that’s right. I learned a lot about being a scientist and doing impactful work from the CSGF conferences. It’s something that’s hard to learn and that I haven’t seen expressed so clearly anywhere else: coming in as a first year and seeing brilliant people further along in their Ph.D.s with these impressive, well-developed ideas, and seeing that each of them had taken their field, had found a little bottleneck that was stopping people from doing a whole bunch of potentially really impactful work, and had just gone after that. It was a much more structured and high-impact way of doing science than anything that I was explicitly taught in grad school.

    Sarah Webb  26:45

    I wanted to get a sense of where you see yourselves in the next five or 10 years, because I’m imagining AI and climate change and all of this will be changing, but also the context of where you see the field heading and where you want to be a part of it, maybe going back to those bottlenecks that Kelly was just talking about and where you might be able to fit.

    Priya Donti  27:09

    So I guess, calling back to a concept that Ben talked about earlier, this concept of being integrators between different sets of people or different sets of stakeholders: regardless of the form that takes, that’s something that I definitely would like to do. I think there’s a lot of potential in bridging people who know AI, who have been studying that and are thinking about the cutting edge and what techniques we should be developing, and people who have societal problems, like various climate-relevant problems that we will need to be addressing. And I think this is a mutually beneficial relationship for both communities. I think there will always be a need for these kinds of integrators, and that’s something that I’m really excited about doing.

    Kelly Kochanski  27:55

    I think that’s a great way to put it. I couldn’t tell you exactly what I expect to be doing in five to 10 years, but those cross-disciplinary integrator roles have always been the most fun, and so I’m looking forward to staying in those. I think AI in climate science is here to stay; it’s starting to grow very dramatically already, and it will grow more. And I’m hoping to keep moving forward and leveraging it in larger, more ambitious projects.

    Ben Toms  28:22

    It’s kind of interesting: the conversation that we’ve had has actually resonated with a lot of my thoughts recently about where I want my life to head. So, you know, I think Kelly’s take on the optimistic side of AI is that you can scale up some of these types of analyses and make them accessible to a whole lot of people; 10 years ago, if it was all people having to crunch the numbers, that would have been much more difficult. So that’s where my focus is going to be, I think, for the next few years: trying to make sure that AI can be used to scale a bunch of climate analyses and things like that, and make them more accessible to people, so that people can then focus on how to communicate that most effectively, and so people can then take action. So, kind of, using AI to scale up these analyses and make them societally accessible.

    Sarah Webb  29:13

    What advice do each of you have for other people who might be interested in this particular research space? People who might be thinking about graduate school or thinking about careers?

    Priya Donti  29:26

    I guess my biggest piece of advice would be, if you’re interested in the intersection of climate change and machine learning, definitely consider engaging with the Climate Change AI community. The goal of the community is basically to help build networks of people who are thinking about this intersection, provide educational resources, alleviate various bottlenecks in terms of data and funding, and also try to shape various kinds of nuanced discourse about how you responsibly and impactfully do work in this area. And so I would say attend the workshops or webinars that we hold; we have a virtual happy hour for more casual networking; join the forum or sign up for the newsletter. These are various ways in which you can engage in order to meet people or to learn more, and to build up the networks and knowledge that can really help you engage in this space. The website is just climatechange.ai, so I’d encourage you to check it out or reach out to anybody on the team.

    Kelly Kochanski  30:20

    I think I’d follow up on that and say, put in the work and meet the people. Working in climate change and AI requires a fairly large background in two different fields, and there aren’t really shortcuts for that. I see a lot of data scientists who want to work on socially impactful problems, and I see a lot of climate scientists and graduate students who want to move into data science, and very few of them can actually speak the language of the other side convincingly enough to make it look like that’s a place where they belong, where they understand what’s going on, and where they’re going to thrive. You have to get to the point where you can do that, and as soon as you do, it unlocks doors all over the place. And with that said, I’m going to chime in again for Climate Change AI and say that it’s a great organization trying to make that a little bit easier for everyone.

    Ben Toms  31:11

    Yeah, I think the one thing that I would add to that is, there’s a lot of excitement around climate change and AI in academia and the private sector. There are always these really difficult problems in really popular fields that people kind of avoid, in my case the interpretability work, and I had a lot of help in getting the courage to tackle that from a bunch of people at the DOE and CSU. And I think that if you’re going to try to engage the climate change community, taking a risk is a way that you can make a big impact. So, yeah, I guess that’s all I’d say: be willing to find those places that really need to be addressed and then tackle those problems.

    Kelly Kochanski  31:51

    Speaking of risk, the other one I see a lot is scalability in science; a lot of new computational tools are just really opening the door there. And I see a lot of opportunities for the field to move from solving local problems to scaling up and solving national or global problems. So that’s probably the most concrete advice that I would give a new grad student: take your problem and see if it goes big.

    Ben Toms  32:16

    It’s how you start a company, too.

    Kelly Kochanski  32:20

    It’s how you sell consulting projects, as well.

    Ben Toms  32:23

    Cool, it’s all the same.

    Priya Donti  32:29

    And the other thing I would say is, yeah, get involved in your local community. There are many grassroots groups, for example, that are working on transportation justice in a particular area, trying to advocate for more sustainable or public transportation in a particular city. Get involved, understand what they’re thinking about, become a part of that community, and don’t go in with your AI hammer immediately. But I think as you engage in that community, you will see a lot of these connections start to emerge.

    Sarah Webb  33:03

    Priya, Ben, Kelly, I want to thank you. This was a lot of fun for me, and I hope it was for you too.

    Kelly Kochanski  33:10

    Mmhmm.

    Ben Toms  33:10

    Oh, yeah.

    Priya Donti  33:11

    Yeah, definitely. Thanks so much for having us on.

    Sarah Webb  33:14

    To learn more about Climate Change AI, Intersphere, and each of this episode’s guests, please check out our show notes at scienceinparallel.org. If you like this episode, please share it with a friend or colleague and subscribe wherever you listen to podcasts. Science in Parallel is produced by the Krell Institute and highlights computational science with a particular focus on work by fellows and alumni of the Department of Energy Computational Science Graduate Fellowship Program, which is celebrating its 30th anniversary in 2021. Krell administers this program for the U.S. Department of Energy. Our music was written by Steve O’Reilly. This episode was written and edited by me, Sarah Webb.
