Like all potentially era-defining things, AI—particularly generative AI—comes loaded with a raft of stories people tell about it: It is a wonder tool. It can help us cope with the effects of climate change. It can help society. But because of its voracious appetite for energy, it’s also a contributor to climate change. Oh, and bias is built into it because it learned from us. Good luck.
We spoke about these topics and more with technology editor and analyst David DeLallo, who’s an optimist about our collective ability to deploy AI for society’s benefit and a believer in the uniqueness of human judgment.
About David DeLallo
David DeLallo is a longtime tech editor, content strategist, and AI industry analyst. His experiences include spearheading McKinsey & Company’s AI thought leadership program and engineering AI content marketing strategies at IBM. As the principal of his consultancy, David Loren, he helps organizations showcase their tech prowess through thought leadership and provides AI education for business leaders.
It seems as if AI has been touted as a do-it-all tool. In the context of climate change, what have been some of the most promising use cases you’ve seen?
There’s a long list. Some characteristics of AI make it a very good tool—one of the tools, anyway—to address the challenge. AI is good at finding patterns in a ton of data. It’s good at making predictions from that data. It’s good at optimizing performance. When you take all of those together, there’s a lot that it can do.
Let’s take wildfires as an example. There’s a public safety program that the University of California San Diego launched called ALERTCalifornia. It provides the tools to prepare for and respond to wildfires. California’s vast, and there’s a lot of ground to cover, not to mention dense population centers. The program has installed more than 1,000 monitoring cameras and sensor arrays, collecting data that provides actionable, real-time information to inform public safety.
To use AI and other tech to help tackle climate change, we’ll obviously need decision makers to have a working knowledge of what the tools can and can’t do. Decision makers who are asked about their intentions for emerging tech say they intend to use it, but they also say they don’t know how to use it. How do you interpret that insight?
I think it’s typical.
Compared to technologies of the past, the people in the C-suite are using [generative AI] more. I saw one study that said about a quarter of C-suite executives had already used it fairly regularly, either in their personal lives or at work. That’s unusual for a new technology.
When it comes to applying it to your business, you are faced with the typical issues. First, prioritization: Where does it make sense for me to employ it? In this case, there are lots of risks, right? You have the risk of bias and hallucinations, as well as the risk of your data going outside of your walls to places you don’t want it to be. So there’s excitement and then there’s risk. It creates this tension, which can be a healthy tension, but it can also paralyze you.
One other thing that I think prevents any technology from being put into effect in business right away is this: we get very stuck in our ways of working, and we don’t want to adapt to the new tool. There’s a learning curve with any tech and people think, “I don’t have time to figure out how to master that because I have these deadlines, and I know how to get those done now.”
About Into the Weeds
We at Leff are, at heart, storytellers. We are dedicated to amplifying voices and causes from all over the world, regardless of gender, sexual orientation, race, or economic background. And the stories we tell as part of the Into the Weeds interview series are particularly important to us. We will be interviewing inspiring individuals whose work contributes to the achievement of the UN Sustainable Development Goals (SDGs) at every level; we’ll bring you insights from the leaders of global organizations, renowned experts and academics, and innovative local businesses.
Our goal for this series is the same one that underpins all of Leff Sustainability Group’s client work: to use our storytelling skills to build awareness of the issues that threaten our planet and to draw attention to all the people, initiatives, and innovations that are fighting back.
We think of AI’s unique value as its ability to process vast amounts of data. Where do you see the conversation on the role of human intervention, human expertise, and human judgment? Especially when models tend to be opaque?
Human judgment is not going away. It’s needed even to come up with the ideas: “What are the ways we’re going to employ these tools to help reverse climate change?”
It reminds me of a project that shows the human ingenuity required. A scientist came up with this idea: “Let’s model the entire Earth, and we can basically put information into it and see what’s going to happen. And you can have everybody put in their initiative and their results and model out what that’s going to do, plus a lot of other uses.”
“What are the ways we’re going to employ these tools to help reverse climate change?”
It’s very difficult to pull off. He was able to launch this with the European Space Agency, and it’s called DestinE [Destination Earth]. It’s a multiyear effort and is going to be a multicontinent effort, with many different stakeholders involved. Computers just can’t do that kind of extensive orchestration and coordination among humans. That’s up to us.
Something else that’s been dogging us humans: New technologies tend to slot into existing systems of inequality. How actively are organizations talking about using generative AI in a way that distributes the benefits, drawbacks, and risks equitably?
It’s generally the more developed nations that are contributing much more to carbon emissions. If we use AI to help mitigate some of the climate problems, developing nations are obviously going to benefit too. Climate actions don’t just benefit one region. For example, extracting carbon from the air, keeping our water clean, and so on: That’s going to benefit everybody.
Maybe one way to get at this is to talk about how a lot of the bigger companies have humanitarian projects and initiatives to make sure that AI benefits developing nations. We know that many of the big tech firms have been committing funding to make sure that they’re using AI to help in humanitarian efforts around the globe. Google and Microsoft have committed more than $100 million each, easily, over the past few years. So the big companies are stepping up and trying to make sure that AI benefits other countries.
Is there anything about AI’s role in climate change that you think we’re not exploring enough?
Quantifying how much energy it takes to train a model. And we’ve barely scratched the surface of looking at what it’s going to take to use the model, what we call “inference.” There was a study in which Hugging Face wanted to measure the energy required to train a model called BLOOM, which has fewer parameters than a GPT-3 or GPT-4. Training the model took as much power as it takes to run the average American home for 41 years.
ChatGPT is estimated to have taken maybe three times that much power to train. And again, we don’t even have a true estimate of how much energy it’s going to cost to keep running it. This is obviously something we need to address. And we haven’t even talked about the energy needs of other generative AI components. How about chip manufacturing? That also contributes to harming the environment.
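The “41 years” comparison can be sanity-checked with quick back-of-envelope arithmetic. The figures below are assumptions drawn from public estimates rather than from the interview itself: roughly 10,600 kWh per year for an average US home (a recent US Energy Information Administration ballpark) and about 433 MWh for the BLOOM training run (the figure reported in the Hugging Face study).

```python
# Back-of-envelope check of the training-energy comparison above.
# Both constants are assumptions from public estimates, not from the interview.
AVG_HOME_KWH_PER_YEAR = 10_600   # assumed average US home electricity use
BLOOM_TRAINING_MWH = 433         # reported estimate for training BLOOM

bloom_kwh = BLOOM_TRAINING_MWH * 1_000
home_years = bloom_kwh / AVG_HOME_KWH_PER_YEAR
print(f"BLOOM training: about {home_years:.0f} home-years of electricity")

# "Three times that much" for a GPT-3-class model would be on the
# order of 1,300 MWh, before counting any inference (day-to-day usage).
gpt3_class_mwh = 3 * BLOOM_TRAINING_MWH
print(f"GPT-3-class estimate: about {gpt3_class_mwh} MWh")
```

Under those assumptions, the division lands on about 41 home-years, matching the comparison in the study.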
So I think that AI can be part of the answer, but we have to make sure it’s also not part of the problem or just counteracting anything that we’re doing to use it to solve climate change.
“AI can be part of the answer, but we have to make sure it’s also not part of the problem or just counteracting anything that we’re doing to use it to solve climate change.”
Where do you think we might be headed? A single dominant model or different dominant models, or dominant models for different uses or fields?
Yes, that’s the million-dollar question: How will this all play out? Will we have just a few models? Will we have many models? Advances keep happening, so predictions are very difficult. Companies and research institutions are trying to create smaller models, and we know there’s research showing that some smaller models can do just as well as the larger ones on very specific tasks.
This could go a few different ways. We could start seeing a bunch of smaller models that are very specific to certain tasks because they’re cheaper to use, faster to build, easier to deploy, and so on. And then maybe we’d have a few larger models that are multipurpose.
The organizations we’ve talked about so far have been private entities. What’s the public sector’s role?
The public sector has a few roles to play. For example, the AI executive order that came from the White House specifically asked the Department of Energy to do some homework and look for ways that AI can be used to help with climate change. It basically ordered them to make sure that they’re teaming up with the private sector and actually building new models that can be put toward scientific topics, including climate change.
We’ll see what the EU AI Act turns out to be. They want to ensure that AI is being turned toward societal good, including mitigating climate change.
The other way that the public sector plays a role is ensuring that we don’t end up ruining the environment by using generative AI. Both the White House executive order and the EU’s regulation include language stating that the agencies responsible for monitoring energy need to look into how much impact using AI is going to have. The EU AI Act might actually include language requiring companies to report how much energy they’re using.
Anything you want to leave us with?
I always wish I were hearing more conversation about equity when a new technology is being considered.
For example, the effect of generative AI on jobs is expected to be about twice as large for women as for men. I haven’t heard that stat much, even though it came out of an International Labour Organization report. You would imagine it would be fairly well publicized, but I don’t hear a lot of conversation about it: “Let’s think about how this is going to affect women in the workforce.” AI, and our usage of it, is not where it needs to be in terms of creating an equal and just tool for all.
Behind the Scenes
This interview is part of Leff’s Into the Weeds interview series. Mimi Li (she/her) is a senior editor for Leff, and Clair Myatt (she/her) is the manager of Leff’s Sustainability Group, for which Katie Parry (she/her) is the director.
Comments and opinions expressed by interviewees are their own and do not represent or reflect the opinions, policies, or positions of Leff, nor do they carry its endorsement.