Tackling generative AI’s sustainability problem
Summary:
Although the adoption of generative AI is only in its early stages, reports are starting to emerge about potential sustainability problems if it goes mainstream. As a result, organizations need to act now to avoid problems further down the line.
Generative AI has a sustainability problem. According to a new report by the Capgemini Research Institute, the technology has a significantly heftier carbon footprint than many other tools – which, given the current hype surrounding it, is far from good news in the middle of a climate crisis.
Put simply, because generative AI’s requirement for compute power is high, it consumes a lot more energy and, therefore, emits a lot more carbon. For example, training GPT-3, the so-called ‘parent’ model of the more famous ChatGPT, produces 10 times the emissions generated by an average car over its lifetime. Integrating generative AI into a search engine also requires four times more compute power per search than a standalone product.
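As a rough sanity check on that first figure (using estimates from outside the report, not Capgemini’s own numbers): independent researchers have put GPT-3’s training run at roughly 550 tonnes of CO2e, while a widely cited figure for an average car’s lifetime emissions, fuel included, is around 57 tonnes. That works out at 550 / 57 ≈ 9.6 – broadly the tenfold difference the report describes.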
These findings are backed up by a recent Harvard Business Review (HBR) article, ‘How to Make Generative AI Greener’. It says:
The hidden environmental costs and impact of these models are often overlooked. The development and use of these systems have been hugely energy-intensive and maintaining their physical infrastructure entails power consumption. Right now, these tools are just beginning to gain mainstream traction, but it’s reasonable to think that these costs are poised to grow – and dramatically so – in the near future.
Even more worryingly, the HBR article cites one research paper indicating that:
The recent class of generative AI models requires a ten to a hundred-fold increase in computing power to train models over the previous generation, depending on which model is involved. Thus, overall demand is doubling every six months.
Not all AI is created equal in sustainability terms
But as Heather Dawe, Head of Data at digital transformation specialist UST, is careful to point out, while large language models, for instance, constitute some of the most “impact-intensive forms of AI”, other approaches are significantly less carbon-hungry. This means it is important not to tar all forms of the technology with the same sustainability brush:
All Large Language Models are compute-intensive and so use the most carbon. But some of the analytics tools that companies use to monitor carbon usage aren’t that carbon-intensive. Using AI to monitor the quality of patient outcomes in hospital is quite low intensity too. These tools also offer significant benefits, which means the trade-off is low.
Furthermore, the HBR article indicates, even large generative models themselves are not all created equal in terms of energy consumption and carbon emissions. As a result, there are three considerations when calculating their carbon footprint (a rough sketch of how they combine follows the list below). These consist of working out how much carbon is generated by:
- Training the model
- Running inference (or predicting outcomes using new input data, such as a prompt) with the model once it has been deployed
- The hardware and cloud data center capabilities required to run the model.
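To see how these three buckets combine, here is a minimal back-of-envelope sketch in Python. Every figure in it is an invented, illustrative assumption rather than a number from the report; the point is simply that, once a model is serving traffic at scale, the ongoing inference cost can come to dominate the one-off training cost.

```python
# Hypothetical back-of-envelope model of a generative AI system's carbon
# footprint, following the three buckets above. All figures are illustrative
# assumptions, not measured values from the Capgemini or HBR sources.

TRAINING_TCO2E = 550.0          # one-off cost of training the base model (tonnes CO2e)
INFERENCE_KGCO2E_PER_1K = 2.0   # assumed cost per 1,000 inference requests (kg CO2e)
INFRA_TCO2E_PER_MONTH = 5.0     # assumed hardware/data-center overhead per month

def footprint_tco2e(months_deployed: float, requests_per_month: float) -> float:
    """Total estimated footprint in tonnes CO2e over the deployment period."""
    # Convert per-1,000-request kg figure into tonnes over the whole period.
    inference = (requests_per_month / 1_000) * INFERENCE_KGCO2E_PER_1K / 1_000 * months_deployed
    infrastructure = INFRA_TCO2E_PER_MONTH * months_deployed
    return TRAINING_TCO2E + inference + infrastructure

# At scale, inference dominates: 100m requests/month for a year here adds
# ~2,400 tCO2e of inference against a 550 tCO2e one-off training cost.
print(f"{footprint_tco2e(months_deployed=12, requests_per_month=100_000_000):,.0f} tCO2e")
```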
Dawe indicates that it is the first area which tends to be most energy-intensive, not least because the data sets required for training are usually much larger than those used for older AI models:
With generative AI, key issues are how frequently you train the model, and also how you train it. It may be prohibitive to build and train models yourself, for example, as doing so can cost millions of pounds. Maintenance costs can also be huge. So many companies just opt to fine-tune an existing model to fit their own company, which is a less energy-intensive process. You’ll need to refresh the data regularly though and how often you do so requires a trade-off between energy costs and accuracy.
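A minimal sketch of the trade-off Dawe describes, again with invented energy figures purely for illustration: fine-tuning is far cheaper per refresh than retraining from scratch, and the more often the data is refreshed, the more that difference compounds.

```python
# Illustrative comparison of model-refresh strategies over a year, using
# invented energy figures (MWh) to show the shape of the trade-off between
# full retraining and fine-tuning an existing model.

FULL_TRAIN_MWH = 1_300.0   # assumed one-off cost of training from scratch
FINE_TUNE_MWH = 15.0       # assumed cost of one fine-tuning run on fresh data

def annual_energy_mwh(refreshes_per_year: int, from_scratch: bool = False) -> float:
    """Energy spent keeping the model current for one year."""
    cost_per_refresh = FULL_TRAIN_MWH if from_scratch else FINE_TUNE_MWH
    return cost_per_refresh * refreshes_per_year

# More frequent refreshes buy accuracy but multiply the energy bill.
for refreshes in (1, 4, 12):
    print(f"{refreshes:>2} refreshes/yr: "
          f"fine-tune {annual_energy_mwh(refreshes):>7,.0f} MWh vs "
          f"retrain {annual_energy_mwh(refreshes, from_scratch=True):>9,.0f} MWh")
```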
David Pugh, Head of Sustainability at Digital Catapult, the UK’s innovation agency for advanced digital technologies, takes a similar stance:
Most people don’t need to train the last few percentages of their data to get 100% accuracy. On the enterprise side of things, companies say their carbon footprint mostly comes from data processing, so it’s about how you balance being accurate enough with lowering emissions.
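Pugh’s point about the “last few percentages” is, in effect, an argument for stopping training once the accuracy gained per unit of energy falls below some threshold. A toy sketch of that stopping rule follows; the accuracy curve and energy numbers are invented purely for illustration.

```python
# Toy illustration of Pugh's point: accuracy gains shrink with each training
# epoch while the energy cost per epoch stays constant, so stopping early
# saves most of the emissions for a small accuracy sacrifice.

ENERGY_PER_EPOCH_MWH = 10.0
MIN_GAIN_PER_EPOCH = 0.2  # stop when an epoch buys < 0.2 points of accuracy

def accuracy_after(epoch: int) -> float:
    """Assumed diminishing-returns accuracy curve (saturates near 98%)."""
    return 98.0 * (1 - 0.5 ** epoch)

epoch, spent = 0, 0.0
while accuracy_after(epoch + 1) - accuracy_after(epoch) >= MIN_GAIN_PER_EPOCH:
    epoch += 1
    spent += ENERGY_PER_EPOCH_MWH

# Stops after 8 epochs at ~97.6% accuracy; chasing the final fraction of a
# point would keep burning energy for almost no additional accuracy.
print(f"Stopped after {epoch} epochs at {accuracy_after(epoch):.1f}% accuracy, "
      f"using {spent:.0f} MWh")
```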
Questions
Because of the high costs of training generative AI models, most organizations tend to purchase models that are run and operated by so-called ‘hyper-scalers’ or large cloud providers, such as Google, Amazon and Microsoft.
As a result, Pugh advises asking certain sustainability-related questions of such suppliers from the outset so that, over time, cumulative customer pressure forces them to take action. Potential queries include how:
- Their data center is powered, which includes whether it is based on renewable energy. (Data centers are currently responsible for 2-3% of global greenhouse gas emissions)
- They store and manage data, which includes optimizing the size of data sets to increase retrieval speed and cut both costs and carbon emissions in the process. The aim here is to ensure “the right data is accessed by the right people at the right time”, Pugh says
- Suppliers intend to optimize their data centers in sustainability terms going forward. This includes being transparent about their own sustainability performance, working with other suppliers that likewise have a sustainability focus, and treating the environment as a key stakeholder at all times.
Mitigating generative AI’s impact
Nisar Ahamad, Capgemini’s Vice President - Head of Industries and Americas’ Sustainability, also recommends putting a mitigation strategy in place. Such a strategy should cover Scope 1, 2 and 3 emissions:
Business objectives and sustainability have to go hand-in-hand…We know there’s a carbon footprint, so there also has to be a mitigation plan, but that requires uncovering the energy consumption of the entire technology infrastructure. So it’s something an organization has to include in its business objectives if it wants to meet its mitigation aims.
A key problem today though, according to the firm’s report, is that few of the 1,000 leaders questioned appear to have this kind of mitigation strategy in place. This is despite an awareness among 78% that generative AI could lead to a higher carbon footprint than more traditional tech.
The challenge here, says Dawe, is that:
People don’t know what to do about this stuff as the use of generative AI is only starting to be explored. While some companies are pretty advanced already, most are in the exploration stage and are seeking to understand what they can do and how to embed it in their ways of working. They’re learning, which includes in sustainability terms, so it’s natural for the report to point out a disconnect between awareness and action. It’s not that organizations aren’t putting any effort in. It’s more ‘oh yes, goodness, we need to consider this as part of our ESG [environment, social and governance] framework’. Everyone wants to be responsible but it’s still early days.
Adopting a process of continual improvement
But over time, as adoption starts to move into the mainstream, Dawe expects to see sustainability become another key strand of Responsible AI frameworks, alongside concepts such as accountability, inclusivity and transparency. She also hopes Responsible AI standards, metrics and audits will become part of the wider regulatory landscape in an attempt to combat greenwashing.
Pugh is likewise hopeful that the industry is currently “moving in the right direction”, if slowly. But rather than letting sustainability concerns grind their generative AI trials to a halt, he recommends organizations bear in mind that not everything has to be perfect from the start:
Don’t let a desire for perfection get in the way of doing good. This is emerging technology, so adopting a process of continual improvement is very important in sustainability and other terms. Over the next 18 months, I expect things to start moving much faster and the sustainable technology movement will explode. So, for example, when you use Alexa or Google, you’ll be able to ask what its carbon footprint is today. Or you’ll have the option to deploy YouTube videos in different ways so you can choose the most carbon-friendly one.
In fact, Pugh is convinced that AI could well end up playing a key role in fixing its own problems, in areas such as grid management and data center optimization. As he concludes:
No one has all the answers today. You can’t give a checklist to someone to make AI net zero as the issue is so complex and relies on so many moving parts. Companies almost need to take a leap of faith to develop the next cycle of products and ensure they’re more sustainable, but ultimately it’ll be all about continuous improvement.
My take
Sustainability has been little discussed in a generative AI context to date, as the hype surrounding this controversial technology continues to swirl. But organizations need to start getting to grips with the implications, and quickly, if they are to mitigate its worst effects.