This week the House of Lords Select Committee on AI released its much-anticipated report on how the UK can better foster the development of artificial intelligence and what actions the government should be taking to ensure it is best equipped to take advantage of the technology.
The main takeaways from the report suggested that the UK needs to rethink its infrastructure needs, how it funds AI start-ups so that they have the opportunity to scale, and how government procurement can be used to drive the growth of AI.
We got the chance to speak to Lord Clement-Jones, the chair of the Committee, following the release of the report, to get a better understanding of where he thinks the government should be prioritising its efforts.
Central to the report - and what is often at the core of many discussions around AI - is the debate on the application of ethics. The Committee argues that AI systems need to be thoughtfully designed from the beginning, with ethics, bias, diversity and inclusion taken into consideration right from the very start.
Lord Clement-Jones, and the report, are calling for the UK to lead on the establishment of an international regulatory framework that gives companies a strong ethical guideline for development. Lord Clement-Jones said, for instance, that the government needs to rethink its powers in light of the Facebook/Cambridge Analytica scandal. He said:
I think that Matt [Hancock, Secretary of State for DCMS] has talked about ethics - but of course you've got to live, eat and breathe it, you've really got to demonstrate that you understand the issues.
I mean, for instance, I think it's really important when we're debating the Data Protection Bill, which incorporates GDPR, that when we come to talk about the powers of the Information Commissioner, we don't have a situation where the ICO simply can't get into Cambridge Analytica, you know, for five days, while another regulator can more easily do the equivalent in competition matters, for instance.
So, I think it's very important that our regulators are given the powers they need, so that the public can be confident that their data, for instance, is being properly guarded and used.
Given that AI isn’t a new development, but its importance is just becoming better understood, I questioned whether or not it’s too late to start thinking about ethics now, given many of the large technology companies out there have already built systems and hold the data. Lord Clement-Jones disagreed. He said:
No, I absolutely don't [think that's true]. And I think that you'll find that there is a taste amongst tech companies for an overarching framework of ethics. It's very interesting - I was talking to somebody at the Digital Catapult today and they said, "This is exactly what we need when we talk to people about funding the projects and so on and so forth. We can now directly ask them the question: does your research, do your applications, do your development, basically conform to some of these principles? And if so, how?"
It gives them a great lever. And I think that's exactly what people will use. And after all in the medical field people have been used to making sure that things are ethically developed for a very long time.
However, what Lord Clement-Jones doesn’t want to see is a new regulator being formed. What is needed is a central framework, which can be applied across the board. He said:
Because, you know, we're very keen not to create new regulators to have this ethical framework. The CMA, the ICO, the FCA - those sorts of regulators should be informed by the ethical framework and regulate accordingly in their particular sector.
Lord Clement-Jones said that the main priority for government was to ensure that all of its AI efforts - which include the Government Office for AI, the AI Council and the Centre for Data Ethics and Innovation - are coordinated. However, he said two key priorities need to be the financing for AI start-ups, to ensure that they can scale, and innovation.
We need to make sure that our growth companies can get the finances they need because we've got a pretty good context for our research and innovation in AI and our start-ups seem to be doing pretty well. We've just got to make sure that, you know, we can develop unicorns, so to speak.
Secondly, we've got to make sure that our education is in good shape and our re-training facilities in terms of our national retraining scheme are in good shape so that we make sure that if there are job losses, and there are bound to be a number, that our people in those jobs can retrain - whatever age they find themselves having to look for a different occupation.
And we need to make sure that young people particularly are basically getting the right education to make sure they can make the best use of AI - and that doesn't just mean computing and machine learning skills. It also means creative skills, because using AI will be seen as a creative occupation.
Lord Clement-Jones was also keen to highlight that the government has a responsibility to ensure that the public trusts AI and its development. If it doesn’t, then all positive efforts could be undermined. He said:
The big risk - and I think it underlies everything in the report - is failing public trust. So, if the public doesn't trust the technology, and they sort of build up antibodies against it, so to speak, in the way that they did with GM foods, then that would be very poor for the development of AI in this country. And a lot of what we are doing is designed to make sure that the public does continue to trust AI in its development and application.
Finally, the report also warns about the potential for a select few companies to monopolise data in the private sector, arguing that it would not be a good thing for so few organisations to hold so much power in AI. Lord Clement-Jones would like to see the Competition and Markets Authority conduct an investigation into whether the likes of Facebook and Google are already monopolising public data. He said:
Well, of course they have to build the evidence base first, and that's what we've said. We said we are rather concerned this could lead to a situation where there is data monopolisation, and of course that then leads on to there only being a few AI systems, which is dangerous in itself.
We want to basically have quite a strong diversity of AI systems across the board. So what we suggested is that the CMA should undertake a review and see if there is abuse of a dominant position in the data field. That may or may not be the case, but certainly the fact is that Facebook and Google and so on have accumulated vast amounts of data - and we don't know whether they share them or not. We don't know whether they're entitled to share them or not. But it is a danger that they can have all these platforms.