A five percent chance of AI killing us all? Welcome to 2024!

Stuart Lauchlan, January 9, 2024
Summary:
AI researchers around the world were asked for their conclusions about the tech. The results were...inconclusive.

There is some chance, above zero, that AI will kill us all.

One of the more memorable quotes from Elon Musk to come out of the ‘fawnathon’ hosted by UK Prime Minister Rishi Sunak at last year’s AI Safety Summit. That line grabbed a lot of attention, albeit almost drowned out by the wider nausea at the sight of Sunak ‘fanboying’ up to his very own Mr X.

Musk, of course, has a lot of form when it comes to apocalyptic pronouncements. As far back as 2017, he was warning the world of the forthcoming threat of a killer robot on every corner, declaiming: 

AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not.

Now, of course, a healthy debate about the potential downsides of the AI revolution is essential. Last year saw a commendable rise in awareness of trust issues around the technology to balance out the vendor-driven hype cycle that lurched out of control in the summer of 2023. By year end, a more pragmatic view was in evidence, with savvier suppliers noting that end users were interested, but nervous. It’s a conversation that will inevitably continue in 2024, with the topic of AI futures featuring heavily at next week’s World Economic Forum gathering in Davos. 

But the more hysterical responses will inevitably continue, both from AI evangelists and their Armageddon-pedlar counterparts. This year has got underway with the publication of a study involving 2,700 AI researchers from around the globe which concludes that there’s a five percent chance of machines making humans extinct. 

Now that number alone was enough to feed the paranoia of the more excitable media outlets, such as the Mail Online, which breathlessly told its readers: 

A far more frightening estimation came from one in 10 researchers who said there’s a shocking 25 percent chance that AI will destroy the human race.

The website went on to cite three major threats that caught its eye:

AI allowing threatening groups to make powerful tools, like engineered viruses, authoritarian rulers using AI to control their populations, and AI systems worsening economic inequality by disproportionately benefiting certain individuals.

The only thing missing here in Mail Online terms is a way to get Meghan Markle onto the threat list, but presumably that’s a work in progress…

Caveats

So what does the study actually state? Interestingly, it begins with an important caveat about the poll base of AI researchers: 

Their familiarity with the technology and the dynamics of its past progress puts them in a good position to make educated guesses about the future of AI. However, they are experts in AI research, not AI forecasting and might thus lack generic forecasting skills and experience, or expertise in non-technical factors that influence the trajectory of AI. While AI experts’ predictions should not be seen as a reliable guide to objective truth, they can provide one important piece of the puzzle.

With that said, it’s hardly surprising that the report finds a lot of uncertainty and no real consensus about what lies ahead: 

While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction.

Other bad stuff that might keep the doomsayers awake at night: 

Scenarios worthy of most concern were: spread of false information e.g. deepfakes (86%), manipulation of large-scale public opinion trends (79%), AI letting dangerous groups make powerful tools (e.g. engineered viruses) (73%), authoritarian rulers using AI to control their populations (73%), and AI systems worsening economic inequality by disproportionately benefiting certain individuals (71%).

On a more upbeat note: 

The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a Large Language Model. 

As for AI nicking our jobs: 

If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier…However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).

Given that earlier caveat about the poll base, the study does ask its researchers to stick their necks out and predict what state-of-the-art AI systems will be able to do in 20 years’ time. The top three pitches: 

  1. Find unexpected ways to achieve goals (82.3% of respondents)
  2. Be able to talk like a human expert on most topics (81.4% of respondents)
  3. Frequently behave in ways that are surprising to humans (69.1% of respondents)

My take

The main conclusion is there is no conclusion: 

Participants expressed a wide range of views on almost every question: some of the biggest areas of consensus are on how wide-open possibilities for the future appear to be. This uncertainty is striking…In general, there were a wide range of views about expected social consequences of advanced AI, and most people put some weight on both extremely good outcomes and extremely bad outcomes.

This is a situation that we’re going to see a lot of in 2024. But let’s keep the conversation going. I suspect 2024 will be a year in which the thought leadership gap in the tech sector around the whole AI topic becomes wider. ‘Leaders’ who continue to view this revolution as an academic/theoretical exercise and fail to display crucial human empathy will still be on show, feted by an adoring investor community.

But it will be those vendors that reflect both the aspirations and the concerns of their customers around this tech, and that provide pragmatic education and assistance in navigating the shape of things to come, who will be the winners.

Onwards! 
