AI regulation - to lead with a carrot or a stick?

Derek du Preez, April 4, 2023
Summary:
Artificial Intelligence (AI) is a general-purpose technology that impacts many aspects of society. But its unpredictability and fast-paced development make it difficult to regulate.

(© May_Chanikran - Shutterstock)

The rapid rise of Large Language Models (LLMs), such as OpenAI’s ChatGPT, has sent the technology industry into what can only really be described as peak AI hype-cycle hysteria. Sections of the industry are essentially predicting the demise of the human race as a result of AI, whilst others are suggesting AI will eradicate disease and solve all of our productivity woes. The truth is that it’s hard to accurately forecast exactly where we will end up with the application of artificial intelligence systems, or how humans will adapt to their availability - but what we do know is that AI tools will become ever more pervasive. 

What’s also true is that the AI tools we have today, the ones that are here right now, have the potential to be misused and can lead to a number of harms, including a rise in disinformation, the devaluation of real people’s work, and the further entrenchment of biases already present in existing systems. And as such, legislators are already facing a challenge to regulate for the use cases that exist today - let alone the ones that are coming. 

Countries and governments will no doubt also be aware that advancements in AI could well shape future economies, where the leaders in its development win big on the global stage. And as such, a delicate balancing act will be front of mind: on the one hand they want to encourage the rapid innovation and investment needed to succeed, whilst on the other they will be seeking to instill trust and confidence in the systems that develop. 

We’re already beginning to see how this cat and mouse game is starting to play out, as regulators respond to concerns with varied action, and innovators in the field seek to advance quickly, whilst also playing down the risks. Should regulators lead with a carrot or a stick? 

Italy last week, for instance, announced that it would be placing a temporary ban on ChatGPT, with its data protection authority claiming that OpenAI failed to check the age of users and that there was an “absence of any legal basis that justifies the massive collection and storage of personal data”. Across the pond in the US, the Federal Trade Commission has been called upon by a tech ethics group to stop OpenAI issuing new commercial releases of ChatGPT, claiming that it’s “biased, deceptive and a risk to privacy and public safety”. 

To add to the complexity, it isn’t always clear, even to the organizations building these AI models, exactly how they work - due to the swathes of data used and the complexity of the models. On top of that, systems are being developed where the AI improves itself without continued human intervention. The argument will be that the training models operate within the guidelines set out by a human initially, but it’s not hard to understand why there is increasing concern about systems that ‘improve’ themselves whilst progressing towards unknown outcomes. And how do you regulate a system that’s improving itself without human intervention? 

A varied response

Around the world we are beginning to see how countries are responding to the quickly changing AI landscape, with some seeking to take a step back, hoping that the problems that arise aren’t too damaging, whilst others are taking a firmer approach, looking for reassurance from new AI ventures. 

The UK, for example, has released its initial framework for regulating AI, which takes a light-touch approach and rules out a separate body for governing AI use. The UK is using a ‘principles-based framework’ that seeks to address the general-purpose nature of AI, with guidance and support provided by regulators that already exist. The hope is that the AI industry in the UK will thrive without too much red tape and that companies will be ‘sensible’ when it comes to estimating risk. 

This is likely to be different to the approach taken by the EU, with its forthcoming AI Act. Details of the legislation are expected soon, and countries in the Union have differing opinions about what it should involve, but given the EU’s tendency to be more prescriptive when it comes to data protection, it would be surprising if it didn’t take a harder line than the UK. 

What will happen in the US remains to be seen. There have already been some state-level interventions on the use of AI in certain use cases, such as those in New York and Illinois restricting the use of automated employment decision tools. But there could well be more general AI regulation from the FTC in 2023, as it recently invited public comment on whether it should implement new trade rules governing AI-powered technologies. 

In fact, we noted last week how even some technology leaders are calling for a halt to AI development so that regulators can figure out how to tackle some of the consequences. 

The stakes are seemingly high for all involved. Any one party moving too quickly risks undermining the whole ecosystem - one where the winners and losers include citizens, companies using the tools, AI creators and nations as a whole. Do you lead with rapid development and hope that the problems that arise aren’t too severe, regulating them further down the line? Or do you lead with regulation and hope that this doesn’t stifle innovation and/or drive it to other countries? 

The impact of these decisions may sound like hyperbole, but we’re already witnessing how these tools - which are still fairly nascent compared to their potential - are being used to pass medical exams, are tricking huge swathes of the internet with fake pictures, and are making biased decisions against people from minority backgrounds. And this is just the beginning. What comes next remains to be seen. 

My take

The first thing to remember when debating the topic of AI regulation is that the technology industry will no doubt benefit from the current AI hysteria that has gripped people. Even fear-mongering drives investment, given the implication that AI or artificial general intelligence will end up smarter than a human ever could be. Overstating both the benefits and the risks is a win-win for those in the industry, regardless of what the reality ends up looking like, as long as there’s money to be made. And regulators should keep that in mind. 

Secondly, whilst the future of AI is unpredictable, there are risks and harms associated with the artificial intelligence tools that we already have today. And we already have regulations in place that deal with many of the outcomes that may result from current AI tools being applied - such as data protection laws, copyright laws, and anti-discrimination laws. New precedents may have to be set as a result of new use cases, but how these existing frameworks are applied and tested against these AI technologies could help guide any future legislation. 

Thirdly, much as was the case with the internet and social media, the negative consequences of largely beneficial tools can be so unpredictable that any regulation created at their inception is unlikely to deal with the outcomes we see years later. Legislating for an unknown scenario is difficult - regulators should have the flexibility and power to take quick action, but they shouldn’t narrow their focus in a way that limits the possibilities of AI application. That doesn’t mean no legislation at all, but rather suggests applying common-sense principles that include ethics, data protection, and transparency. 

Finally, whilst it’s difficult to predict how this will all play out, one of the key things we’ve learned over the past decade from other advancements in technology is that, alongside regulation, education is key. AI, particularly generative AI, can be very convincing. If AI tools are soon to become widespread, we need to educate people to apply common-sense checks to everything they interact with online. How do you validate a source? How do you know an image is real? How do you know you’re talking to a real person and not a fraudster? Regulation is one side of the coin, but widespread education campaigns led by the private and public sectors should be the other. 
