We must stop assuming AI will inevitably lead to net positive outcomes

Derek du Preez - March 31, 2023
Summary:
An open letter signed by influential individuals, calling for AI labs to pause the training of their most powerful systems for six months, has raised some eyebrows. But what’s becoming increasingly clear is that, for many, AI development feels out of control.

(Image of a humanoid robot with its face pulled off, by 0fjd125gk87 from Pixabay)

An open letter signed by more than 1,000 technology experts and researchers, including the likes of Elon Musk and Steve Wozniak, is calling for a six-month pause on the development of AI and has spawned a series of debates this week. The letter, penned by the Future of Life Institute, argues that AI systems with human-competitive intelligence “pose profound risks to society and humanity” and that the care and planning required to manage the profound change AI will bring to life on earth “is not happening”. 

The letter calls on AI labs to immediately pause, for at least six months, the training of systems more powerful than OpenAI’s GPT-4, and says the pause should be “public and verifiable”. Whether the intention behind the letter is really to protect the interests of humanity, or to put some competitive brakes on the industry so that those behind it can play catch-up with the likes of OpenAI (as some have suggested), is hard to tell. But it’s difficult to deny that the letter hits on some worthwhile, worrying points. It states: 

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? 

Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. 

OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

There’s an argument to be made that the letter plays into the AI hype cycle too, spurring more speculation about the opportunity of these systems and inflating the potential market value of companies building AI models. However, it’s fair to say that just a few short months ago AI technology still felt nascent - in early development, with anything truly astounding still a while off - meaning that regulators, governments, and society at large still had time to prepare and adapt. 

However, that has now changed, largely thanks to OpenAI’s ChatGPT. Yes, it still makes mistakes, yes, it has been shown to present dangerous information to users, and yes, it has a tendency to ‘hallucinate’ and make up information - but the large language model has also been able to pass medical exams, write articles and trick humans into helping it pass online Captcha tests. Technology development is often messy, so this was never going to be perfect, but it’s hard to deny that what OpenAI has achieved feels way beyond what we (or I) expected in 2023. 

But calling for a six month pause on development is by and large a distraction. It makes for a nice headline and will give people pause for thought, but ultimately it won’t happen - and even if it did, six months isn’t long enough for researchers, legislators and the public to get to grips with the fundamental shifts to society that these new technologies will facilitate. 

Little understanding of the risks 

The challenge at present is that, more so than in previous waves of technology disruption, the power of artificial intelligence sits in the hands of a select few private companies (thanks to the skills and cost involved in developing it), and those select few are in an aggressive race to be the first to achieve artificial general intelligence (where technology is capable of performing most cognitive tasks a person can). These companies are fully aware that the first to get to AGI will win big financially and likely define future economies. 

And as a result, whilst risk mitigation is part of what these organizations claim to be doing, it feels like there is an assumption amongst industry insiders and government leaders that we will ‘just figure it out when we get there’. And that the result will be good! 

We already know that artificial intelligence could have dangerous consequences for the spread of disinformation and the rise of fraud, and could compromise democracies - but what’s more worrisome is what we don’t yet know. Much like with other developments in digital technology, such as the Internet and social media, many of the negative consequences we’ve seen over the past decade weren’t predicted widely enough to have any material impact on mitigating risks ahead of time. 

Much of the focus on harms at the moment centers around copyright laws and how AI could impact creativity, or how AI could replace certain parts of the labor market. And those are worthwhile worries! However, we mustn’t forget that often these systems behave in a way that even those building them don’t fully understand and there is a real possibility that they will become more intelligent than humans at some point in our lifetime. 

How do you control an AI system that’s not fully understood, smarter than the people who built it, and views humans as unintelligent tools to get things done? That may sound like the stuff of science fiction, but we have to at least consider it a possibility - and we should not assume that, because we built the systems, they will ultimately end up with the same values as us. 

In modern times humans have benefited from being at the top of the evolutionary pecking order - but what does a world look like when we have truly intelligent systems that are smarter than us and could well have no real need for us? 

This may well sound like online fear mongering and an outlandish conspiracy theory, but some of the smartest minds in the world have been warning about that exact outcome for years. And whilst it may never happen, we also shouldn’t sleepwalk into a scenario where we assume it is impossible. Hope for the best, but plan for the worst. 

And even if we were to forget about the ‘scary long-term consequences’ of AGI, it’s fair to say that we don’t even have much of a plan for the short-to-medium-term fallout. These developments come at a time when there are already huge amounts of inequality across economies, and there is potential for AI to displace large parts of the workforce. That’s less of a problem if governments and organizations are training up the next generation to work with AI in a productive way - but is there time, or will AI beat the labor market to it? 

It’s also worth mentioning that governments will likely face strong incentives to look the other way when it comes to mitigating risks, as they seek to boost productivity and growth after what has been an incredibly challenging decade for economies. 

We’ve also already seen how democracies have struggled not to be undermined by disinformation online - with the consequences of data collection and disinformation being used to influence voters still not fully understood by regulators. What will that look like with the accelerated help of AI tools? 

So whilst the worst-case scenario of super-intelligent AI systems that don’t see the need for humans may seem far-fetched, we have risks that feel much closer and that are getting very little headline attention - all the while a select few companies continue to race towards a very unpredictable future…

My take

None of what is written here should be viewed as a pessimistic, inevitable march towards AI doom and gloom. There are of course huge opportunities with AI - ones that could genuinely boost productivity and make the world of work a more pleasant, creative, fulfilling endeavor. However, we need to be wise to the serious harms that could be coming our way before it’s too late. And some of those are much closer than the ‘crystal ball gazing’ risks of AI. 

A six-month pause on AI development is unlikely to achieve very much. And although the open letter released this week will inevitably result in a flurry of top-line discussions over the next few weeks, we shouldn’t lose sight of the underlying point that we cannot assume AI will result in net positive outcomes. And I never agree with Elon Musk! 

Because our economic systems support competition and organizations ultimately strive to ‘win’ when it comes to new developments, slowing down AI will be hard. But as a society we have the right to ask the question: should we be doing this? And if so, how? And what’s the plan for if we do? 
