AI - where do ethics stop, and regulations begin?

Chris Middleton, December 9, 2022
Summary:
There’s an intriguing debate on the relative merits of ethics and regulation in the digital world. Where do you draw the line?

(Image: a human brain lit up with colours against a backdrop of digital networks, by Gerd Altmann from Pixabay)

When it comes to artificial intelligence (AI) and its growing impact on our lives, where do ethics begin or end, and where does regulation take over? Do ethics inform regulation? Or do they fill in the gaps around what is legally permissible? How do organizations navigate these issues in an international context? And should adherence to ethical principles be just a tick in a box, or a bare minimum that we try to exceed?

These are important questions and don’t just apply to AI and machine learning, but also more widely to the digital world: to our online behaviour and to organizations’ treatment of user data. 

They were also the focus of a hybrid conference this week on digital ethics, hosted by industry body techUK. The day-long event featured an impressive roster of speakers from vendors, regulators, law firms, consultants, academia, and government – in the latter case from both the UK and Singapore administrations. Seeking that city-state’s perspective at UK events has become almost a given in recent months – due, perhaps, to the UK-Singapore Digital Economy Agreement. 

But if nothing else, it also suggests that Whitehall regards Singapore as a trading partner to emulate as well as do business with – at least, when it comes to AI and other digital markets. Dr Janil Puthucheary, Singapore’s Senior Minister of State for Communications and Information, explained how the city-state is ahead of the game:

Singapore recognizes that industry will need to be given the space to innovate, and we have taken a balanced approach in regulating AI. We have developed a voluntary model AI governance framework, in collaboration with industry, to provide businesses with a reference on how best to deploy AI responsibly. It is a pragmatic approach, guiding businesses to show that their uses of AI are explainable, transparent, and fair. 

AI Verify is Singapore's practical and objective approach to demonstrating responsible use and is currently a minimum viable product.

From the UK’s perspective, certain barriers currently prevent British business from adopting AI assurance at scale, said Paul Scully MP, Parliamentary Under Secretary of State at the Department for Digital, Culture, Media & Sport (DCMS). He explained: 

These include the complexity of the standards landscape, as well as the lack of knowledge, the lack of skills, and financial resources. 

In response to that, the Centre for Data Ethics and Innovation [CDEI] has set out a number of possible interventions to overcome these challenges to adopting AI assurance. Some of them are already underway, such as the online standards repositories housed in the UK AI standards hubs, and some are forthcoming, such as the white paper on AI governance.

A work in progress, then. But what about the ethical context of these issues? techUK’s Deputy CEO Anthony Walker cleverly illustrated both that context and the potential impact of AI on our lives by first saying:

Ethics should be a fundamental consideration in the development of new technology. This means that the ethical implications of technology should be considered at every stage in its development, from initial concept to final implementation. 

This can be achieved in a number of ways, such as by engaging in ethical discussions and decision-making with diverse groups of stakeholders, by conducting ethical impact assessments, and by ensuring that any potential negative consequences are addressed and mitigated. 

Additionally, it may be helpful to establish ethical guidelines or principles for the development of new technology, and to regularly review and update these guidelines as needed. Ultimately, the goal should be to develop technology in a way that is responsible, transparent, and respects rights and the dignity of individuals.

Fine words. But then he added:

That was what ChatGPT, the tool launched by OpenAI last week, told me when I asked it last night, ‘How should ethics be embedded in the way that we develop new technology?’

You read that right: Walker’s opening remarks – sensible and hard to gainsay – were scripted not by him, but by the Generative Pre-trained Transformer 3 (GPT-3) powered chatbot that has become an online sensation, reportedly gaining a million users within five days of launch.

So convincing have some of ChatGPT’s answers been that it has already been hailed as the future of education – which rather ignores the fact that, among other subjects, the system has been found to have no grasp of elementary physics. But it seems intelligent, and that inherent plausibility is dangerous. We all understand the frustrations of “the computer says no”; but what if it mistakenly says yes? What if people accept its wrong answers because they assume it must be correct? 

Exceeding what is legally permissible

But should such issues, and wider questions of AI trust and assurance, be the domain of regulation, or of that more subtle and evanescent concept: ethics? John Edwards is the UK’s new Information Commissioner at data protection regulator the ICO. Now tasked by the government with enabling innovation and growth, he said:

I've always thought that when you start having a conversation about ethics in this area, then it assumes there is a lacuna of regulation. Because what you're really saying is regulation only made it this far and then stopped. So, if regulation isn't going to tell us the answer, then this must be the domain of ethics... 

But I think that shows – if my assumption is correct, and it probably isn’t – that there is a lack of imagination or understanding about the nature of regulation in this area. In the UK and, in fact, in many OECD Western liberal democracies. What we have is a set of technology-neutral, principle-based regulatory settings, which have stood the test of time. The underlying principles reach back way into the 90s. They've been there as we've seen the internet develop. […] 

So, there is much in the existing regulatory framework that is capable of addressing what we might frame as ethical challenges. For example, should an online platform be able to exploit children: child users who are unable to express informed opinions or make sensible decisions about what's in their own best interest? The answer ethically, is clearly no. But the answer legally, under our age-appropriate design code, is also no. So, we've got regulatory alignment with ethical expectation. 

Or you might ask, ‘Is it ethical for an innovative biometric application to sense emotions in subjects and thus screen out job applicants? Or select people for the attention of law enforcement or customs based on those readings?’ 

Ethically, you might say, ‘I’m not comfortable with that’. But legally, we [the ICO] say that such technology is junk science. And if it’s junk science, if it doesn't work, or if it doesn't accommodate people’s neurodiverse or societal differences, then it cannot pass the tests in law of proportionality, of fairness, and of data minimization. 

These are just a couple of examples that show we have an elastic regulatory framework that is capable of addressing many of the challenges that are often presented as ethical conundrums.

An optimistic viewpoint – or perhaps laissez-faire. So, is Edwards correct that ethical principles already inform our regulations, meaning that emergent problems in AI can be tackled by existing rules? Not according to Professor Luciano Floridi, Professor of the Philosophy and Ethics of Information at the University of Oxford. In a polite yet scathing rebuttal of the UK’s data protection regulator, he said:

The bad news is that picture is fictional. I mean, it's a mistake, no one should believe that. That's not what ethics are for. Digital ethics are what comes before, during, and after regulation. In the case of ‘before’, as we heard, it’s what enables you to devise good regulation: the kind of society we want to live in, fundamental human rights, human dignity, and so on. 

But when it comes to ‘during’, I wish regulations were like chess rules, but they're not. You need to interpret them. And when something goes wrong, it’s hardly ever clear cut. There's a lot of ethical judgement that goes into interpreting the law.

And when it comes to ‘afterwards’, how are we going to change the rules when they are no longer the right ones if not through ethical analysis? And an understanding of what kind of society we want to live in? So, John: you’re wrong!

In Floridi’s view, doing the bare minimum of whatever regulation dictates or allows is just not enough; good ethics demand that organizations should exceed what is legally permissible. In other words, we should do better simply because we can. And ethical behaviour may sometimes mean challenging the rules if they are no longer fit for purpose. Just as, over the years, many laws have been changed by citizen action: the death penalty, women’s suffrage, equal marriage, and so on.

A place for additional layers

For Carly Kind, Director of the UK’s Ada Lovelace Institute – which is dedicated to ensuring that data and AI benefit all of society – the problem is that there is not enough regulation. A bold thing to say when the government has signalled it wants a bonfire of EU rules. 

She said:

Like John, I'm a lawyer, and a data protection lawyer. So, I appreciate the ability of data protection lawyers to claim that data protection law covers most things in existence. But it is the experience of most people we talk to that there is not enough regulation, that they don't feel well enough protected. 

Now it may be that, on the books, a law exists. And there may be a range of structural factors concerning why those on-the-book regulations are not being properly adhered to, complied with, or enforced. But it is the experience of most people that their relationship with technology is an extractive one that takes away their agency and doesn't leave them feeling empowered. 

Our public research shows again and again that people would like to see more regulation, even if it comes at the cost of innovation and more products. And this is a problem that is worsening over time, not improving, despite the years since GDPR. 

I'm not advocating getting rid of GDPR. But I am saying that we have to accept that there may be a place for additional layers of regulatory protection for individuals, to make them feel like they can have more confidence and trust in these technologies, or that there's a trustworthy system out there.

Excellent points. But in a more general sense, can we be confident that it is enough to set out a vision of a ‘good society’ and work towards that? Again, not according to Oxford University’s Floridi:

We should do that with zero confidence! It’s confidence that makes the worst mistakes. The first half of the past century saw plenty of horrors in Germany, Italy, Russia, Spain, everywhere: people saying, ‘We know the recipe for the perfect society’. The horrors that humanity managed to generate from that.

So, zero confidence. Instead, we should keep questioning, keep adjusting. But you need to have some kind of regulative ideal of where you want to go – a direction of travel, so to speak.

In July this year, the UK set out its own direction of travel, its vision of a pro-innovation approach to regulating AI: the putative ‘AI Act’. But again, Floridi seemed unconvinced by its aims, though he acknowledged it is still a work in progress:

The AI Act seems hugely concentrated on individual rights, but seems to be missing quite a lot to do with the societal aspects, which are crucial. And another big miss – and if you read it through, it's extraordinary – there's almost nothing about the environment. That is a big ethical issue. 

Do we really want to have risk-based regulation that is going to frame one of the most powerful technologies ever generated by humanity almost entirely in terms of individual rights? Missing all of the societal aspects, and completely ignoring the environment? In the 21st century, when we are living on a planet that is boiling itself to death?

My take

A superb question. And perhaps one that gets to the real heart of the matter. So many of these issues are being framed by party political interests rather than a wider ethical perspective. 

As it strikes out on its own towards an uncertain future, the UK is governed by a party that seems entirely focused on the sovereign individual and on achieving growth at all costs. Implicitly, that focus regards society and collective responsibility – which is more the EU’s approach – with suspicion, and even disdain. 
