If you think OpenAI’s sacking of Sam Altman was just about board-level disagreement, think again

Chris Middleton, November 20, 2023
Summary:
The Sam Altman saga this weekend at OpenAI reveals the danger of the e/acc AI faction – accountability is questionable

(Sam Altman at Dreamforce)

Sam Altman’s shock ousting from the company he co-founded, OpenAI, last week was one thing. But his rumoured re-hiring over the weekend, then newly announced move to Microsoft – alongside co-founder Greg Brockman – has exposed troubling fault lines in the tech sector.

At the time of writing, Altman and Brockman will be heading up Microsoft’s new advanced AI research team, an appointment that followed hot on the heels of peace talks breaking down at OpenAI last night. 

Before that meeting, Altman tweeted a photo of himself in OpenAI’s offices, holding a Guest badge and saying, “First and last time I ever wear one of these”. This suggested – perhaps – that he believed OpenAI would welcome him back, after a sacking that saw the $80 billion start-up announce:

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

Near midnight yesterday, San Francisco time, Altman tweeted “the mission continues” – that mission presumably being to build an artificial general intelligence (AGI) that will, in some unspecified way, benefit humanity. Except the mission will now be continuing under a huge umbrella with ‘Microsoft’ emblazoned on it.

With a speed that was both impressive and intriguing, Microsoft CEO Satya Nadella tweeted:

We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett Shear and OAI's new leadership team and working with them.

Then he added:

And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team. We look forward to moving quickly to provide them with the resources needed for their success.

On the face of it, then, Microsoft is the big winner in all this, bringing onboard the kind of maverick, brilliant, yet socially awkward entrepreneur so beloved of a certain type of coder – and a multitude of venture capitalists. The kind of leader who gets put on a pedestal with troubling speed, and hailed almost as a messiah, despite having a single, nameless emotion that can only be expressed by a blank stare.

Yet the move is also a complex one for Microsoft, despite Nadella’s assured leadership in this matter. It now finds itself riding two different horses for an AI win: the Altman/Brockman team, and OpenAI under new interim CEO Emmett Shear (formerly of Twitch) – its second in as many days. 

The rage and loyalty among OpenAI employees, many of whom threatened to quit if Altman and Brockman were not reinstated, will make managing that three-way relationship hard. The implication from Nadella’s statement is that some of them will join the new Microsoft team.

The dollars Microsoft spent to get Altman and Brockman on board are currently undisclosed, and the speed at which these new roles have been created is certainly going to garner some speculation. If the long-term plan is simply to buy OpenAI and reinstall Altman at the head of a standalone division, the move makes a ton of sense - with Shear as a willing stooge, perhaps.

The e/acc and decel divide 

But, how did Altman get himself into this mess? The precise details are unknown, yet it is clear there was some kind of breach of trust with the OpenAI board. The latter’s carefully worded statement suggests the possibility – but only that – that he did not disclose information that might have legal implications. 

For OpenAI, not revealing the reason probably avoided an all-too-public legal battle that might push investors and customers away. Indeed, Altman hinted as much when, in advance of the failed talks, he tweeted:

If I start going off, the OpenAI board should go after me for the full value of my shares.

Ouch. But what is fascinating about the newly aggressive stance from Altman is how different it is from an interview he gave back in June, when he told Bloomberg’s Emily Chang:

No one person should be trusted here. The board can fire me, I think that's important.

Fast forward to last Friday, and it seems that “one person” was indeed not trusted and has, in fact, been fired. Yet Altman’s aggressive response suggests that he didn’t mean what he said back in June.

In the interview with Chang, he then made an intriguing comment about the board needing to become “democratized to humanity”. This may be the crux of the matter. 

What is clear is that tensions have grown between OpenAI’s founding non-profit status, and the emergence, under Altman, of a for-profit powerhouse with global ambitions. The speed of tech development has been another factor.

Yet hiding in plain sight is another, under-reported news story: while at the helm of OpenAI, Altman had been courting sovereign wealth investors in the Middle East. The aim was to build a new AI-inference chip division, codenamed Tigris, to rival NVIDIA. Was this the final straw? Was the extent and nature of those talks not disclosed, with backers who might have been a tough sell in the current political climate?

Yet Altman’s sacking has also revealed another uncomfortable problem: the fault lines that are opening up between an extremist AI faction at large in the world – evangelical stans who want to accelerate innovation at any cost – and everyone else.

These devotees see the likes of Altman almost as messianic figures, and describe anyone who wants to moderate progress or consider the ethical, social, economic, and cultural dimensions as ‘decels’: decelerators, a derogatory spin on ‘incels’.

This self-styled ‘e/acc’ (effective accelerationism) faction finds its natural home on X, the platform owned by Elon Musk, a man fond of decrying the “woke mind virus” (aka anyone critical of his unfettered power and politics).

Some of them – major figures like Marc Andreessen and Eric Schmidt among them – describe themselves, reasonably, as technology optimists; creators who stand against the dystopian warnings about AI frequently espoused by tabloid journalists.

That’s fair enough. Many of those warnings are alarmist and misinformed; new technologies always create new jobs, new companies, and new possibilities. Meanwhile, AI may help the world solve real problems, such as climate change, rogue asteroids, pandemics, and nuclear Armageddon. Indeed, Andreessen has published The Techno Optimist Manifesto, which describes the “techno-capital machine” as an engine for “perpetual material creation, growth, and abundance”. 

OK. But the evidence from a handful of trillion-dollar corporations suggests that the “perpetual growth and abundance” is often mainly experienced by technology companies. In the real world, the network effect might create new global platforms for creative people, for example, but it is also depressing the amount of money they can make on them to almost zero. 

Ask any musician, for example, whose months of hard work to reach tens of thousands or hundreds of thousands of streams nets them just 50 cents, or a dollar they can’t even claim. To express such a view is neither decel, nor pessimistic, nor dystopian: it’s the real-world output of that supposed machine of opportunity. Meanwhile, Spotify’s founder and CEO is a billionaire – one who invested millions in an AI weapons company, not in music.

Yet among some e/acc devotees on X, it is getting almost impossible to distinguish between parody accounts and genuine statements of belief and intent. For example, one tweeter who styles himself BasedBeffJezos (‘chief accelerator & founder @ e/acc // thermodynamic priest / entropic flux enjoyer // Kardashev gradient climber // memetic landscape sculptor’) tweeted this in the wake of Altman’s sacking:

Decels holding back civilizational progress out of their ideological mind-virus-induced-neuroticism.

Hopefully a joke account, yet the tweet was Liked by Eric Schmidt. And all that in response to a board that was simply trying to hold a CEO to account!

My take

This is the core problem in an AI world often driven by religious fervour among people who seem to have zero interest in accountability. 

The outrage expressed by some online that a board, set up in part by Altman, had any right to even question him, let alone hold him to account, is alarming to behold. 

Techno evangelism helps no one. It never has. It closes every door in the universe except the one the evangelist wants you to walk through. Evangelists – who I once called ‘self-styled John the Baptists to Big Tech Jesus’ – see any criticism, any guardrail, any boundary as pessimism. But what it actually is, is healthy scepticism and a right to hold super-powerful men to account. 

To be optimistic about AI or any other technology – as I frequently am – should not mean being denied the right to criticise, or to point out the chasm that sometimes exists between claim and real-world effect. Give me scepticism rather than passive, supine acceptance of evangelical BS any day. Evangelism begets healthy scepticism, and it is right that it does. The right to be a realist, not a disciple.

The people who don’t want to be held to account and their increasingly bizarre followers: they are the real problem facing the planet, not the so-called pessimists. And yet the deification of dysfunctional men with money continues.
