Ethical AI - no need for new regulators, just a proper framework?

Stuart Lauchlan, April 16, 2018
Reaction from techUK, Sage and Lord Clement-Jones to yesterday's very detailed UK government report on AI and associated implications.

Yesterday's much anticipated report from the UK government's House of Lords Select Committee on AI engaged with some of the thorny issues around ethics and regulation of emerging tech, including calling for a global summit next year to debate such topics.

While we've yet to get any indication of whether other nations will be behind the UK proposal, today has seen reaction to the report and its implications now that everyone's had a chance to read it!

Central to the report - and so often at the core of many discussions around AI - is the debate on the application of ethics. The Committee argues that AI systems need to be thoughtfully designed from the beginning, with ethics, bias, diversity and inclusion taken into consideration right from the very start.

Lord Clement-Jones, the chair of the Committee, spoke to my colleague Derek du Preez to highlight and explain some of the main themes and recommendations. One thing that may surprise some is that Clement-Jones reckons that tech firms, so often berated by governments and regulators, are actually pretty keen on sorting out the ethics angle:

I think that you'll find that there is a taste amongst tech companies for an overarching framework of ethics. It's very interesting - I was talking to somebody today and they said, 'This is exactly what we need when we talk to people about funding the projects and so on and so forth. We can now directly ask them the question: does your research, do your applications, does your development basically conform to some of these principles? And if so, how?'.

It gives them a great lever. And I think that's exactly what people will use. And after all in the medical field people have been used to making sure that things are ethically developed for a very long time.

The report has been published in the wake of the Facebook/Cambridge Analytica data privacy revelations. When it comes to the data management side of AI, Clement-Jones makes the point:

You've got to live, eat and breathe it, you've really got to demonstrate that you understand the issues. I mean, for instance, I think it's really important when we're debating the Data Protection Bill, which incorporates GDPR, that we come to be talking about the powers of the [UK] Information Commissioner, [so] we don't have a situation where the ICO simply can't get into Cambridge Analytica for five days, while another regulator can more easily do the equivalent in competition matters, for instance. I think it's very important that our regulators are given the powers, so that the public can be confident that their data, for instance, is being properly guarded and used.

The report also warns about the potential for a select few companies to monopolise data in the private sector, arguing that it wouldn't be a good thing for so few organisations to hold so much power in AI. Clement-Jones would like to see an investigation into whether the likes of Facebook and Google are already monopolising public data:

Well, of course they have to build the evidence base first and that's what we've said. We said we are rather concerned this could lead to a situation where there is data monopolisation, and of course then it leads on to only having a few AI systems, which is dangerous in itself.

We want to basically have quite a strong diversity of AI systems across the board. So what we suggested is that the CMA should undertake a review and see if there is abuse of a dominant position in the data field. That may or may not be the case, but certainly Facebook and Google and so on have accumulated vast amounts of data - and we don't know whether they share them or not. We don't know whether they're entitled to share them or not. But it is a danger that they can have all these platforms.

However, what he doesn't want to see is a new regulator being formed. What is needed, he argues, is a central framework that can be applied across the board:

We're very keen not to create new regulators to have this ethical framework...regulators should be informed by the ethical framework and regulate accordingly in their particular sector.

Tech reaction

Meanwhile, the tech industry has been signalling its own support for the report. Sue Daley, trade association techUK's Head of Programme for Cloud, Data, Analytics and AI, says the publication of the report is an important contribution to the wider debate around AI:

The long to-do list of recommendations maps out many of the important issues that require careful consideration. In particular the need for greater coordination and consolidation on existing ethical initiatives and codes in a way that can assist businesses looking to do the right thing. The five key principles identified are aligned with current thinking and highlight the importance of ensuring human needs and values remain at the core of technological innovation...Ethics alongside regulation, including new data protection rules, has a key role to play. 2018 should be a year of practical progress that can build confidence and support innovation.

And she looked across the Atlantic to urge the U.S. tech industry and policy makers to engage with the discussion:

At a time when some are questioning the ability of politicians to keep pace with tech, this report proves that policy makers can get to grips with big issues like AI. It is particularly impressive that members of the Committee spent time learning to program deep neural networks. Politicians across the pond should take note.

While U.S. response is awaited, UK firm Sage's VP of AI, Kriti Sharma, argues:

Once in a generation, a new technology arrives that changes everything. For ours, it is artificial intelligence (AI). As we approach this crossroads, we need to ensure industry is ready to pivot and take advantage of the productivity gains that will be delivered through the automation of mundane, repetitive tasks – using AI to free businesses up to focus on what’s important.

Implementing a universal code of ethics for AI is an extremely good idea and is something we have independently implemented at Sage to educate our people and protect our customers. This step will be critical to ensuring we are building safe and ethical AI – but we need to think carefully about its practical application and the split of responsibility between business and government, specifically when considering its application to specific industry sectors and ensuring buy-in and rapid adoption from the business community.

My take

As noted yesterday, this is a report that's well worth a read, whether you're in the UK, the U.S. or points beyond.
