Make businesses accountable for AI ethics, says UK lawmaker

By Phil Wainewright September 20, 2017
Chair of the UK parliament's Select Committee on AI, Lord Tim Clement-Jones, argues the case for AI ethics advisory boards in business at Lib Dem conference

Lord Tim Clement-Jones speaking in the House of Lords © BBC

Businesses should be obliged to set up ethics boards to advise on the hidden impact of AI algorithms, according to the chair of the British parliamentary committee currently examining the regulation of Artificial Intelligence.

The potential for hidden bias is a "huge concern," says Lord Tim Clement-Jones, who chairs the Select Committee currently looking into the economic, ethical and social implications of artificial intelligence (AI). He is also the Liberal Democrats' shadow spokesperson on the digital economy, and was speaking at a meeting during the Lib Dem conference earlier this week in Bournemouth.

AI ethics boards

While emphasizing that he does not speak on behalf of the committee as a whole, which is due to publish its findings in spring next year, he argues the case for businesses that use AI to set up ethics advisory boards:

How do we know in the future, when a mortgage or a grant of an insurance policy is refused, that there is no bias in the system? There must be adequate assurance, not only about the collection and use of big data, but in particular about the use of AI and algorithms. It must be transparent and explainable, precisely because of the likelihood of autonomous behavior. There must be standards of accountability, which are readily understood.

I'm going to be arguing for this accountability to be reflected in the setting up of AI Ethics Advisory Boards and the adoption of guiding principles by businesses that have strong AI algorithm components, so that the implications of the adoption of particular forms of AI, et cetera, are fully understood and are considered in terms of the impact that they will have on employment, inequality, and so on within those businesses.

I do think that is an issue coming thundering down the track. They'll need to draw lines in terms of what they think is appropriate to be done by AI within a business — because change, quite frankly, can be as rapid or infinite as we want and the impact can be as assistive to or in substitution for human employment and skills as desired.

I believe that companies generally will need to demonstrate a greater sense of purpose. That will be the implication of having an ethics advisory board.

This view will not necessarily prevail. Clement-Jones notes that he has some outspoken colleagues on the committee, singling out filmmaker Lord David Puttnam, broadcaster Baroness Joan Bakewell, former LSE director Lord Tony Giddens and science and economics writer Viscount Matt Ridley, and says he's expecting some robust debate:

There will be some tensions, particularly between the optimists and the pessimists, and between, if you like, the voluntarists and the regulators. So we're going to have some interesting debates as we go on.

The Lib Dem peer singled out four issues to highlight in his talk: data ownership, the impact of AI, corporate accountability for its actions, and the need to raise public awareness.

Data capitalism

Clement-Jones began by discussing the concentration of data held by global Internet brands and whether this leads to an unfair competitive advantage.

We do need to understand just the power of big data and what is known as 'data capitalism' — that's becoming a phrase that is increasingly used. What's being collected, and when, and what it's being used for, and who it's being shared with, how long it's being retained, and when can it be expunged.

We do need to look beneath the outer layer of what are called the tech majors, the big Internet platforms, to see what the consequences are, of signing up to their standard terms. What redress do we have for misuse or breach of cyber security, or for identity theft? What data are they collecting and sharing?

Ownership of data is increasingly concentrated in the hands of the Internet brands. And the question is, are competition laws adequate to protect the consumer and encourage innovation and market entry from newer, small competitors?

Impact of AI

The Lib Dem peer, who is a partner in global law firm DLA Piper, says the potential impact of AI, particularly on those who are vulnerable or disadvantaged, is an area of concern.

We need to [understand] the impact, sometimes beneficial, but also sometimes prejudicial, of AI, machine learning and the impact of algorithms, which are employed on the big data that is collected. Chatbots too are a growing feature of our lives — semi-autonomous, interactive computer programs that mimic conversation with people using artificial intelligence.

Anybody who's been reading Cathy O'Neil's recent book over the summer, Weapons of Math Destruction, will be only too aware of the impact of algorithms on our lives already, and the implications in particular for vulnerable and disadvantaged individuals and communities.

This was the point in Clement-Jones's talk where he introduced the idea of businesses appointing their own ethics board on AI, while noting that self-regulation is unlikely to be sufficient:

But of course, there will be legal implications, regulatory aspects, and I think we'll need to determine to what extent corporate bodies or individual actors are actually legally liable. It won't purely be a question of voluntary governance.

Accountability, responsibility and remediation

This leads on to Lord Clement-Jones's third point, which concerns the need for an independent arbiter, known in the UK as an ombudsman, to resolve complaints and disputes related to the impact of AI. Clement-Jones pointed out that he currently serves as chair of Ombudsman Services, the body which provides this service in the UK. An AI ombudsman would have to be alert to the effect of data collection and algorithms, particularly in view of the rise of the Internet of Things, he warns.

The traditional role of an ombudsman in helping to create fairer markets, which work for consumers as well as businesses, does need to be rethought.

If ombudsman schemes are to continue to be effective in improving business practice and in tackling consumer detriment, then their role and capabilities must change. These schemes must understand and engage with these issues of fairness in this emerging digital and AI world. And they need to take a much more preventative view in terms of providing redress for the consumer.

Raising public awareness

The final issue is the importance of raising public awareness of the impact of AI.

Perhaps the greatest priority of all, is the need to ensure public understanding of the issues relating to data and the application of algorithms and AI.

We've got to basically get out there to the general public and we've got to talk to them about what all this means. That understanding isn't simply guaranteed by the increasing prevalence of AI and algorithm-based functions. And they appear in everyday form, don't they? From search engines and online recommender systems to voice recognition, translation, and fraud detection.

In fact, public awareness of AI and machine learning is very low, even if what it delivers is very well recognized. It's clear that where there is awareness, there is no great evidence that concerns expressed — such as the fear they could cause harm, replace people and skew the choices available — are being publicly addressed and communicated. So public engagement is absolutely crucial. And of course, that is all about building trust, and that brings us straight back to ethical governance and regulation issues.

An optimistic outlook

Despite highlighting these four areas of concern, Lord Clement-Jones ended on an optimistic note:

It's vital that we treat AI as a tool and not as a technology that controls us. With software that has been described as 'learned, not crafted,' it will be increasingly important to know that machine learning in all its forms in particular has a clear purpose and direction for the benefit of mankind.

I'm an optimist. I don't subscribe to the Stephen Hawking school of philosophy. But it's clear that what we've got to do is create a virtuous circle of trust and communication, aligned with ethical behavior and transparency of AI use and algorithm construction. And our job is to make sure that governments, business, and academia in close partnership with the public, come to some rather important conclusions on this rather soon.