There’s an interesting note of snark about media representation of AI in today’s report from the UK’s House of Lords on that subject. It seems that there’s “extensive and important coverage of artificial intelligence, which occasionally can be sensationalist”.
Today’s mainstream media headlines covering the publication of AI in the UK – ready, willing and able? have a heavy bias towards the ethics angle, hardly surprising in the wake of Facebook, Cambridge Analytica et al. As the report notes:
There are many social and political impacts which AI may have, quite aside from people’s lives as workers and consumers. AI makes the processing and manipulating of all forms of digital data substantially easier, and given that digital data permeates so many aspects of modern life, this presents both opportunities and unprecedented challenges…there is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself. The manipulation of data in particular will be a key area for public understanding and discussion in the coming months and years.
That said, it’s important to remember that all the work on this report and its conclusions took place before all of the recent data sharing scandals turned the public spotlight onto abuse of algorithms and the ethics of data privacy. The report observes that things were changing even before the current unpleasantness:
Beyond a general awareness of AI in the abstract, the average citizen is, and will be increasingly, exposed to AI-enabled products and services in their day-to-day existence, often with little to no knowledge that this is the case…while consumers often had relatively few AI-specific concerns for now, they were gradually becoming more aware of the algorithmic nature of particular products and services, and the role of data in powering them.
Some background context – last summer, the House of Lords, the upper house of the UK legislature, appointed a Select Committee to examine Artificial Intelligence and its potential impact, positive and negative, on the UK. This included societal, economic, regulatory and ethical considerations.
The cross-party Committee did its homework. Over 22 evidence sessions, it gathered commentary from industry, academia and beyond. Fifty-seven pieces of oral evidence were heard, along with 223 pieces of written evidence. While the remit for the Committee was UK-centric, the conclusions and recommendations have applicability internationally as well as nationally.
Today the Committee published the fruits of its labours: a report which argues that there is still a lack of clarity as to how AI can best be used to benefit individuals and society, and which proposes five principles that could become the basis for a shared ethical AI framework:
- AI should be developed for the common good and benefit of humanity
- AI should operate on principles of intelligibility and fairness
- AI should not be used to diminish the data rights or privacy of individuals, families and communities
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI
- AI should never be vested with the autonomous power to hurt, destroy or deceive human beings
The Committee report argues that the UK should take an international lead here:
While AI-specific regulation is not appropriate at this stage, such a framework provides clarity in the short term, and could underpin regulation, should it prove to be necessary, in the future. Existing regulators are best placed to regulate AI in their respective sectors. They must be provided with adequate resources and powers to do so. By establishing these principles, the UK can lead by example in the international community.
There is an opportunity for the UK to shape the development and use of AI worldwide…We recommend that the Government convene a global summit in London by the end of 2019, in close conjunction with all interested nations and governments, industry (large and small), academia, and civil society, on as equal a footing as possible. The purpose of the global summit should be to develop a common framework for the ethical development and deployment of artificial intelligence systems. Such a framework should be aligned with existing international governance structures.
Specific concerns include a need for greater clarity around principles for accountability and intelligibility related to AI. The Committee states:
In our opinion, it is possible to foresee a scenario where AI systems may malfunction, underperform or otherwise make erroneous decisions which cause harm. In particular, this might happen when an algorithm learns and evolves of its own accord. It was not clear to us, nor to our witnesses, whether new mechanisms for legal liability and redress in such situations are required, or whether existing mechanisms are sufficient.
And on the topic of potential misuse of AI, the argument is:
The potential for well-meaning AI research to be used by others to cause harm is significant. AI researchers and developers must be alive to the potential ethical implications of their work…further research should be conducted into methods for protecting public and private data sets against any attempts at data sabotage, and the results of this research should be turned into relevant guidance.
On the subject of public data sets, the Committee has further data concerns, including around the push for more open data sets across the public sector which carry information on citizens. Such openness could be exploited by larger U.S. tech firms, it warns:
Access to data is essential to the present surge in AI technology, and there are many arguments to be made for opening up data sources, especially in the public sector, in a fair and ethical way. Although a ‘one-size-fits-all’ approach to the handling of public sector data is not appropriate, many SMEs in particular are struggling to gain access to large, high-quality datasets, making it extremely difficult for them to compete with the large, mostly US-owned technology companies, who can purchase data more easily and are also large enough to generate their own.
As for U.S. firms, there’s an uncomfortable recommendation from the Committee for a Brexit-focused government intent on wooing as many overseas tech investors to the UK as possible:
While we welcome the investments made by large overseas technology companies in the UK economy, and the benefits they bring, the increasing consolidation of power and influence by a select few risks damaging the continuation, and development, of the UK’s thriving home-grown AI start-up sector. The monopolisation of data demonstrates the need for strong ethical, data protection and competition frameworks in the UK, and for continued vigilance from the regulators. We urge the Government, and the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by the big technology companies operating in the UK.
There was a positive response from the UK tech industry to the report. Sue Daley, Head of Cloud, Data Analytics and AI for trade association techUK, called it “very detailed”, but cautioned that there’s still a need to “make AI & ethics real to biz leaders”, something she said techUK is working on.
She’s quite right, of course. I feel there’s a lot of good stuff here. This is a wide-ranging report that’s had a lot of work put into it. Some of the recommendations are very sound; some are, I fear, rooted in idealism rather than pragmatism, but nonetheless interesting for that. Now, let’s see what government decides to do about them, nationally and internationally.
That global summit idea is going to be one to watch, especially given that it would supposedly be implemented by a Brexit-focused government. I can’t see France or Germany letting the UK take center-stage on that, let alone the U.S.