While the UK Government is making the right noises about Artificial Intelligence and some of the ethical and societal impact issues associated with it, individual departments aren’t showing enough openness and urgency to address these.
That was the somewhat downbeat conclusion from Lord Clement-Jones during a House of Lords debate yesterday on the recent AI in the UK: ready, willing and able? report on AI and its economic, ethical and social implications.
The report itself is a well-balanced and pragmatic one, with 74 recommendations for action. Clement-Jones, who led the select committee that produced it, noted:
The context for our report was very much a media background of lurid forecasts of doom and destruction on the one hand and some rather blind optimism on the other. In our conclusions we were certainly not of the school of Elon Musk. On the other hand, we were not of the blind optimist camp.
But the big question is whether those recommendations are translating into action and policy, not just good words from the top of government. According to Clement-Jones, the result so far is a “mixed scorecard” at best:
On the plus side, there is acceptance of the need to retain and develop public trust through an ethical approach, both nationally and internationally. A new chair has been appointed to the Centre for Data Ethics and Innovation and a consultation started on its role and objectives, including the exploration of governance arrangements for data trusts and access to public datasets, and the centre is now starting two studies on bias and micro-targeting. Support for data portability is now being established. There is recognition by the CMA of competition issues around data monopoly. There is recognition of the need for “multiple perspectives and insights ... during the development, deployment and operation of algorithms” — that is, recognition of the need for diversity in the AI workforce. And there is commitment to a national retraining scheme.
On the other side, the recent AI sector deal is a good start, but only a start towards a national policy framework. Greater ambition is needed. Will the new government Office for AI deliver this in co-ordination with the new council for AI? I welcome Tabitha Goldstaub’s appointment as chair, but when will it be up and running? Will the Centre for Data Ethics and Innovation have the resources it needs, and will it deliver a national ethical framework?
And there’s not much sign of enthusiastic adoption of the recommendations at ground level, he added:
There was only qualified acceptance by the Department of Health of the need for transparency, particularly in healthcare applications. In the context of the recent DeepMind announcement that its Streams project is to be subsumed by Google and, moreover, that it is winding up its independent review panel, what implications does that have for the health service, especially in the light of previous issues over NHS data sharing?
The Department for Education was defensive on apprenticeships and skills shortages and appears to have limited understanding of the need for creative and critical thinking skills as well as computer skills.
The MoD in its response sought to rely on a definition of lethal autonomous weapons distinguishing between automated and autonomous weapons which no other country shares. This is deeply worrying, especially as it appears that we are developing autonomous drone weaponry.
So the conclusion from Lord Clement-Jones is:
Some omens from the Government are good; others are less so. We accepted that AI policy is in its infancy in the UK and that the Government have made a good start in policy-making. Our report was intended to be helpful in developing that policy to ensure that it is comprehensive and co-ordinated between all its different component parts.
For the Government, Lord Henley, Parliamentary Under-Secretary of State for Business, Energy and Industrial Strategy, made the usual welcoming, but cautious noises in response:
The report has been a very useful part of the general discussion that we have had in this area… However, as the report makes clear in its title, AI in the UK: Ready, Willing and Able?, it is important that we get ourselves in a position to be ready, not for exactly what is going to happen but for a whole range of possibilities as to how things will develop over the next 20, 30 or whatever years.
Long-term thinking is critical, he argued, and that takes time:
In a rapidly changing industry and world, one must be aware of the danger of getting these things wrong. One is reminded of the introduction of the motor car, when Governments felt that they ought to regulate, thinking it best to put a man with a red flag walking in front of the motor car. Governments rapidly realised that that did not work and was rather impeding the development of that industry, and removed the man with the red flag. I hope that we can get the regulation, the ethics and everything else right.
But at least there was a flash of self-awareness on show when he added:
I think that the Government got five or six out of 10—or perhaps a little more, because the noble Lord, Lord Clement-Jones, is fairly generous—for our response to the report.
For his part, Lord Clement-Jones concluded:
The mantra that I repeat to myself, pretty much daily, is that AI should be our servant not our master. I am convinced that design, whether of ethics, accountability or intelligibility, is absolutely crucial. That is the way forward and I hope that, by having that design, we can maintain public trust. We are in a race against time and we have to make sure we are taking the right steps to retain that trust… This is only the first chapter; there is a long road to come.
Get your act together! Fine words abound about AI and the UK taking global leadership positions, etc, etc. Tell that to Sir Humphrey at departmental level and make action and policies the priorities, not pious pledges. Six out of ten is nothing to be proud of.