
Sam Altman wants to spend $7 trillion to accelerate AI; how about we #acceleratetrust instead?

George Lawton, February 14, 2024
Summary:
The OpenAI CEO wants to spend $7 trillion on an AI chip maximizer that promises to scale AI faster. Here are some reasons why accelerating trust frameworks may scale AI even quicker.


The Wall Street Journal first reported that OpenAI CEO Sam Altman was in talks with sovereign wealth funds and venture capitalists to raise $7 trillion to accelerate the creation of AI. That’s not a typo; that’s a trillion with a ‘T,’ more than the GDP of most countries, double the combined market cap of all current chip companies, and at least ten times their total revenues. I’ll call it 10X because Silicon Valley wants to 10X everything, which in this case might suggest creating an enterprise valued at $70 trillion.

Numerous experts and pundits have questioned the feasibility of this from technical, investment, environmental, and AI safety perspectives. Seemingly confirming the rumors, Altman pushed back against these opinions on X, stating:

You can grind to help secure our collective future, or you can write substacks about why we are going fail.

I won't dive too deep into those concerns here since they have been widely covered elsewhere. Besides, I don’t want to grind against ambitious ideas for securing our collective future. It just seems like accelerating trust is a better approach to accelerating AI than maximizing the chips. As one practical example, Icebreaker One CEO Gavin Starks recently told me that the £40 million invested in setting up the trust framework underpinning open banking has already generated about £6 billion in economic value.

That’s a 150x return on investment, and certainly more than might be feasible from quickly building more chips. Imagine the returns if that same $7 trillion could instead be invested to accelerate trust. At the same multiple, it might generate $1.05 quadrillion (a thousand trillion dollars) in societal value by activating billions of people's hands, eyes, and data to address all the United Nations Sustainable Development Goals around climate, good health, poverty, hunger, and more.
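As a back-of-envelope sketch, the arithmetic behind those figures looks like this; treating the open banking multiple as transferable to a $7 trillion investment is my loose illustrative assumption, not a forecast:

```python
# Back-of-envelope arithmetic for the figures above. Assuming the open
# banking return multiple holds at $7 trillion scale is an illustrative
# simplification, not a forecast.
open_banking_invested = 40e6   # £40 million to set up the trust framework
open_banking_value = 6e9       # roughly £6 billion in economic value so far

roi_multiple = open_banking_value / open_banking_invested
print(f"Open banking return: {roi_multiple:.0f}x")  # 150x

trust_investment = 7e12        # the rumored $7 trillion, redirected to trust
projected_value = trust_investment * roi_multiple
print(f"Projected societal value: ${projected_value / 1e15:.2f} quadrillion")  # $1.05 quadrillion
```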

In contrast, a massive investment in an AI self-optimizing chip production process sounds suspiciously like the paperclip maximizer thought experiment. It’s a classic parable in AI alignment introduced by Oxford Professor Nick Bostrom, suggesting that an unrestrained AI trained to maximize paperclips might one day grow to consume the whole of Earth’s resources, pushing us humans out of existence in the process.

It's also worth noting that OpenAI is anything but open about its AI, data, or training process, which is fundamental for building trust. I was recently struck by a large banner at the State of Open Conference in London declaring the event was about open source, open data, open infrastructure, and ‘AI openness.’ The presumption was that the term ‘open AI’ had somehow been tainted. However, in fairness, diginomica has also covered serious efforts to build open-source AI.

What’s more, investing in trust also promises to encourage the sharing of data, buy-in to new AI services, and adoption by the experts, like doctors, whom we currently trust. American Medical Association President Jesse Ehrenfeld argues that the medical industry should use the term augmented intelligence rather than artificial intelligence, because ‘artificial’ focuses attention on improving the tool rather than improving trust in the interface between doctors and the technology:

It's worth pointing out that the rate-limiting factor to innovation and transformation is often not the medical device technology itself but the interface between the human and machine. To leverage these emerging technologies to their fullest potential, we must ensure that both the performance of the AI systems and the actual deployment transform care so that we are able to provide equitable, inclusive, human and humane care.

Improving the lives of the most people 

Many leaders advocating that AI could improve the lives of the most people have been taken with the notion of Effective Altruism. This started as a perfectly reasonable philosophy, introduced by Oxford Professor William MacAskill, for comparing the effectiveness of charities based on their impact on improving the world.

Silicon Valley minds started imagining how, rather than just applying it to the people alive now, we should think about benefiting even more people who might live in the future. Therefore, we need to quickly build AI, advance biotech, colonize Mars, and so on, never mind that a few people alive today might have to suffer more in the process.

The felony fraud convictions of prominent adherents like Elizabeth Holmes and Sam Bankman-Fried have somewhat discredited the movement. These convictions suggest that sacrificing trust to accelerate anything does not always work, despite the best intentions. It certainly hurts growth. A spinoff movement called Effective Accelerationism suggests we should instead focus front and center on scaling the technology.

In his book Legacy: How to Build the Sustainable Economy, Oxford Professor Sir Dieter Helm suggests a better path forward lies in cultivating the growth of different kinds of capital.  (By the way, a free electronic version is available here – talk about building trust.)

The four capitals that matter for a sustainable economy are natural capital, physical capital, human capital, and social capital. He frames the problem as the need to leave a legacy supporting our descendants' capability to do what they want, rather than just trying to make them happy. That legacy consists of the different kinds of capital. He argues this will deliver better results for our descendants than approaches like effective altruism or efforts to achieve sustainability by slowing economic growth.

Much of the book is a deep dive into the seminal role the self-sustaining environment already plays in facilitating human well-being for free. But sometimes we squander this inheritance through over-carbonizing, over-fishing, over-polluting, and over-farming. The other capitals can also play a role in this process:

  • Higher quality physical capital, like buildings that need less heating, farms that require less fertilizer, and factories that pollute less, can also have an outsized impact. 
  • Higher quality human capital means people who are freer to contribute to sustainability because they are less stressed by health, housing, and food challenges. 
  • Higher quality social capital means more people can collaborate on new challenges like pandemics, climate change impacts, or the best practices to enhance our natural capital. 

Most of these things, particularly social capital, are grown through trust. What might this look like in practice? 

  • Citizens trusting scientists and governments on the best practices to get over a pandemic.
  • Patients willing to share medical data to improve outcomes and research.
  • Experts adopting AI faster because it builds on rather than replaces their expertise.
  • Employees buying into digital transformation efforts that create more meaningful jobs.
  • Consumers upgrading their printers without worrying that the update will lock out their off-label ink.
  • Enterprises unlocking their platforms to support faster innovation.

What would building trust look like?

In The Internet Con: How to Seize the Means of Computation, Cory Doctorow elaborates on the root cause of this malaise: business practices and legal frameworks that erode trust. The fundamental problem has been the enrollment of citizens, businesses, and regulators in practices that lock us into platforms, lock up our data, and lock us out of our equipment, on the presumption that this is somehow a good thing. Or at least many believe it’s good for GDP and the power it bestows.

The solution is to unlock our data, unlock the platforms we use, and unlock access to our machines. Today, many of these are locked up by a byzantine mix of intellectual property rights and business models that allows companies to sue individuals, enterprises, and governments who fix their own equipment, extract their own data from platforms, or use third-party apps connected to platforms. He dives more deeply into this process of enshittification in this Financial Times essay. The new term was named 2023 Word of the Year by the American Dialect Society. 

His solution is that platforms need to respect users' rights to access their data, enhance it with third-party apps, and opt out of opaque algorithms, dark patterns, and business restrictions. Platforms should also make it easy for users to exit and take their content, connections, and data with them, in order to keep the platforms vested in improving their value.

That won’t be easy. Rather than investing in more chips, we would invest in unwinding those things that interfere with growing trust between citizens, scientists, businesses, and regulators. For example, even China, which is not exactly known as a bastion of trust and openness, is already starting to do this through better monitoring and reporting. It is cracking down on fake construction progress and research misconduct.

But better monitoring and reporting only solves part of the problem. A fundamental challenge is that any new regulations or oversight will face significant pushback from vested interests that currently derive massive benefit from locking up our data, equipment, and platforms. That is where the $7 trillion comes in. It would buy off the vested interests so that we can collectively move on to accelerating progress toward a more sustainable, just, and wealthy society. That's less than 7% of the $104 trillion global economy up front, and if it were backed by 100-year government bonds with a track record of building trust with investors, the annual cost would be well under 0.1% of today's global output, and even less as this investment accelerates the global sustainable economy.
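A quick sanity check on those percentages, assuming a flat 100-year amortization that ignores bond interest:

```python
# Sanity check on the cost claims above. The flat 100-year spread is a
# simplifying assumption that ignores interest on the bonds.
investment = 7e12      # the $7 trillion buy-off
world_gdp = 104e12     # roughly $104 trillion global economy

upfront_share = investment / world_gdp
print(f"Up-front cost: {upfront_share:.1%} of global GDP")     # 6.7%

annual_share = upfront_share / 100  # spread across 100-year bonds
print(f"Annualized cost: {annual_share:.3%} of today's GDP")   # 0.067%
```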

I know that’s a crazy idea, but it does have precedent. Nearly two centuries ago, a growing number of people realized that maybe locking up people to work for free was not such a good thing. In the UK, they paid off the slave owners and moved on by raising bonds that were only fully repaid in recent years. The empire grew as a result. The US took a different route, settling on one of the most costly and deadly wars in American history. It still has not entirely moved on, as evidenced by the ongoing protests about “Black Lives Matter” countered by other protests about “White Lives Matter.”

My take

The super AI that Altman hopes to build may or may not be smarter, or hallucinate any less, than any one of us. But I am certain it will be dumber, less just, and slower at accelerating the productive use of AI than billions of people who trust each other even a little bit more. Not to mention businesses, governments, scientists, institutions, and citizens that trust each other more.

Sure, we might not reach Mars or get the beautiful, massive, energy-sucking AI data centers quite as fast. But we will find ways to grow the infrastructure, tools, and skills more sustainably and quickly. How much faster? Well, to prove I am as outrageously over-optimistic as Altman, I'll say 10X as fast, and with a 150X ROI to boot, rather than his meager 10X.

I even have a fancy new name for this movement and a supporting hashtag. Let’s all #acceleratetrust. Our future depends on it.
