Generative AI’s appetites are costs that must be priced

By Peter Coffee, January 29, 2024
Summary:
Sustainability needs to remain front of mind as enterprises seek the benefits of AI, says Salesforce's Peter Coffee.


In fiction, fusion power was going to take us to the stars, but it’s looking as if some just want a rockbound version to sate the energy appetite of artificial intelligence. We’ve learned the hard way, most recently in the domain of cryptocurrency mining, that activities with high perceived profit will find and consume whatever inputs they need – without concern for unpriced externalities.

As enterprises in every market make ambitious AI plans, we will therefore do well to have sustainability (of every resource) in mind, with all of the costs kept in plain sight from the beginning – rather than letting AI’s short-term returns intensify long-range problems, while deferring (or even ignoring) accountability for their solutions.

The problems, unlike the solutions, are already in the present tense. Greenhouse gas by-products from powering servers, and pumping their data flows, already represent two or three per cent of the global total of those emissions. It would be perverse to promote AI-aided climate modeling and climate change mitigation as tools for environmental stewardship, if the carbon footprint of training a single Large Language Model is on the same order as the full life-cycle impact of building and driving five automobiles.
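A back-of-envelope check makes the "five automobiles" comparison concrete. The specific figures below are assumptions drawn from Strubell et al.'s widely cited 2019 study of NLP training costs, not from this article, so treat this as an illustrative sketch rather than a definitive accounting:

```python
# Rough sanity check of the "five cars" comparison.
# Assumed figures (from Strubell et al., 2019 -- not stated in this article):
LBS_CO2_TRAINING_WITH_SEARCH = 626_155  # one large Transformer, incl. architecture search
LBS_CO2_CAR_LIFETIME = 126_000          # one average US car, manufacture plus fuel

cars_equivalent = LBS_CO2_TRAINING_WITH_SEARCH / LBS_CO2_CAR_LIFETIME
print(f"Training footprint is roughly {cars_equivalent:.1f} car lifetimes")
```

Under those assumed inputs, the ratio comes out near five, which is the order of magnitude the comparison above relies on.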

The numbers involved are so huge that they may overwhelm any sense of personal responsibility, but they can be re-scaled to the dimensions of a single worker’s choices. For example, I recently wanted to add an image to a presentation slide to illustrate the role of prices in optimizing resource allocation. “The number of different ways to use, combine, and recombine resources is unimaginably colossal,” observed economics professor Donald Boudreaux, adding:

Almost all of these ways are useless. It would be a mistake, for example, to combine Arnold Schwarzenegger with medical equipment and have him perform brain surgery.


Rather than searching for a (possibly rights-encumbered) image to depict that idea — the Terminator with a surgical mask, wielding a scalpel in an operating room — I used Stable Diffusion Online to see what it could create for me. A series of increasingly specific prompts produced something perfectly suited to my need, and my progress toward that result also illustrated collaborative interaction of human and machine – another point that I wanted to make.

At the time, this attention-focusing image seemed like an inventive way to make my point about prices as signals of growing abundance or increasingly challenging scarcity – these being the two most forceful drivers of social revolution, as George Gilder observed in 1996. Any pride I might have felt in making this creative use of a new tool was soon dampened, though, by learning that generating a single image uses about as much energy as charging an average smartphone. I must have gone through at least a dozen iterations before settling on my chosen series of images. Would I charge my phone a dozen times to illustrate four slides? That would feel like icecap-melting behavior.
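The per-image arithmetic scales easily to a single worker's choices. A minimal sketch, taking the reported one-charge-per-image equivalence at face value and assuming a typical smartphone battery holds roughly 0.012 kWh (both figures are illustrative, not measured):

```python
# Back-of-envelope energy cost of iterating on generated images.
# Assumed: one image ~= one smartphone charge ~= 0.012 kWh (a ~12 Wh battery).
KWH_PER_IMAGE = 0.012
iterations = 12  # the "at least a dozen" tries mentioned above

total_kwh = iterations * KWH_PER_IMAGE
print(f"~{total_kwh:.3f} kWh for {iterations} attempts")  # ~0.144 kWh
```

A seventh of a kilowatt-hour is trivial in isolation; the point is that it is not zero, and it multiplies across every slide deck in every enterprise.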

Further, energy is only one of the appetites that AI’s rough beast is enlarging on an exponential curve. Energy drives hardware, but scarcity of AI-oriented processors feels like an emerging geopolitical concern. Capital pools in Japan and Abu Dhabi, along with massive chipmaking groups in the USA, Korea, and Taiwan, are all part of a semiconductor susurrus that feels like Starfleet facing off against the Klingons over access to a planet full of dilithium. (Dilithium crystals “are common stones” on her planet, said Elaan of Troyius in a 1968 episode of “Star Trek” – while transistors seemed like “stones performing useful work” to Tracy Kidder in The Soul of a New Machine in 1981. Science fiction almost always gets there first.)

The wall

With issues of talent, data quality, and regulations including privacy and intellectual property thrown into the mix, the resulting stack of scalability challenges for AI adds up to what industry analyst Dylan Patel called “The AI Brick Wall”. There are at least two ways to proceed: we can try to crush the wall, or we can pick our spots and drill through it.

The wall-crushers will say, let’s build the chip factories and train the engineers and give the beast whatever it wants. They remind me of a thought experiment from 2018 called “the paper clip problem”: “Suppose that someone programs and switches on an AI that has the goal of producing paperclips. The AI is given the ability to learn, so that it can invent ways to achieve its goal better…it will appropriate resources from all other activities… fighting humans for resources.” Speaking of “Terminator” scenarios.

Precision practitioners will prefer to specialize our AI tools for tasks, in all likelihood resulting in higher performance for lower cost – as long as we understand that a chatbot meant to sell us a lorry, for example, need not also solve the Navier-Stokes equations of fluid flow. Will this path lead to “artificial general intelligence”? Pretty clearly, not. But does that path go to a place where we want to be?

It's possible that I’m thinking too small. On the Kardashev Scale of a civilization’s advancement, we’re not even at Type I: able to access all the energy available on our planet and store it for consumption. Type II would add our local star’s entire output to our resource pool; Type III, our entire galaxy’s energy in all forms. Some will say that our reach should exceed our grasp if we ever want to be all we’re meant to be. A global decision to follow that path, though, should not be made without all of its costs (for everyone) in view.
