AI and ethics - challenging questions from the UK with global applicability

Stuart Lauchlan, November 19, 2018
Summary:
AI and ethics - more questions in search of societal and industrial answers.

The debate around Artificial Intelligence and ethics remains far from resolved, and it has escalated in volume during 2018, not just within the tech sector but across national and international legislative fora.

The quality of debate varies of course. For every considered work-in-progress, such as Salesforce’s formation of an Office of Ethical and Humane Use of Technology, there’s Elon Musk rising to his feet to scream ‘We’re all doomed!’ as he flees the advancing killer robots at the bottom of his garden.

This week saw UK legislators in the House of Lords, the upper house of Parliament, debate the conclusions of a recent report on AI and its economic, ethical and social implications. While the report is UK-centric in its recommendations, its conclusions raise global questions and can be applied to governments, legislators and businesses around the world.

The report itself is a well-balanced and pragmatic one, with 74 recommendations for action. Lord Clement-Jones, who led the committee of legislators in its production, noted:

The context for our report was very much a media background of lurid forecasts of doom and destruction on the one hand and some rather blind optimism on the other. In our conclusions we were certainly not of the school of Elon Musk. On the other hand, we were not of the blind optimist camp.

But that said, the big question was whether the members of the House of Lords would approach the subject in a similar frame of mind, or whether the debate would descend into the all-too-common Terminator/Daleks/mad computer analogies that politicians seem to find so amusing. Or into a further validation of the proudly-worn tech ignorance of the great and the good?

The Lord Bishop of Oxford gave some cause for initial alarm when he opened his remarks with:

At the beginning of my engagement with AI, what kept me awake at night was the prospect of what AI might mean for the distant future: the advent of conscious machines, killer robots and artificial general intelligence.

But wait - this has been a learning experience for the Lord Bishop:

But what kept me awake as the inquiry got under way—it really did—were the possibilities and risks of AI in the present. AI is already reshaping political life, employment, education, healthcare, the economy, entertainment, the professions and commerce. AI is now routinely used to drive micro-advertising in political debate, referenda and elections across the world, reshaping political discourse and perceptions of truth.

And in the event, the debate in the House of Lords did indeed grapple with some challenging ethical and regulatory questions that have universal applicability. There weren’t many answers, of course, but questions are often more interesting than answers anyway.

Business questions

For business, the questions around AI are straightforward, said Lord Holmes of Richmond:

What data do you have and what do you want to do with it? AI offers such potential, but as with all the other elements of the Fourth Industrial Revolution, it should never be a solution in search of a problem, but rather the potential to solve some of the most intractable problems for business, government and individuals…perhaps the most significant point to consider is that we may hold our smartphone in our hands, but it is the size of our data footprint that we should think most about.

Societal exclusion is a key risk factor, he suggested, with a new variant on the digital divide looming:

Public engagement is the real key. The massive success—or not—of AI will rest upon it. Do people feel part of the AI revolution? Have the public been engaged and do they understand that this is not for a few people in large organisations or government? Everybody has to understand and be positively part of this. If not, AI is likely to be the latest GM in our society…It will not be enough for a few people in the tech majors or government to believe that the public will just accept AI because they have decided that there are benefits, when there has been no explanation of where those may be felt and, crucially, where the risks may fall.

Lord Hollick picked up on the theme of a fractured society as well, coming at the issue from an economic perspective:

In a recent speech at the Royal Society, [US economist] Professor Stiglitz examined the impact of the adoption of automation on income and wealth distribution and highlighted the increasing polarisation in the workforce between the skilled and the unskilled. Citing US figures, Stiglitz noted that the real wages of the unskilled and semi-skilled worker have declined over the last 35 years, with male workers experiencing a 42-year decline. He warned that, in the absence of a new policy framework, this trend will continue, but across a wider section of the workforce, as AI is deployed to carry out both routine and complex tasks…With the right policies, AI could usher in a period of prosperity, but without the right policies it could further polarise society and undermine social cohesion.

Ethical issues

But it was Lord Reid of Cardowan who raised some of the most far-reaching questions around ethics and regulation, outlining some big challenges ahead:

In reaping the benefits of these new systems and ceding control as our infrastructure comes to depend upon them, I believe that we need to mark a watershed in how we think about and treat software…As humans, we have law-based obligations as part of our social contract within a civilised society. We have promise-based obligations as part of contracts that we form with others and we have societal moral principles that are the core of what we regard as ethics, whether derived from rational reason or from religion. Responsible humans are aware of these ethical, moral and legal obligations. We feel empathy towards our fellows and responsibility for our children, employees and society. Those who do not do so are called sociopaths at best and psychopaths in the worst case. Ethics, morality, principles and values are not a function solely of intelligence; they are dynamic, context-dependent social constructs.

But now there are complications, he argued:

The commercial value of displaying empathy means that AI entities will emulate emotion long before they are able to feel it. When a railway announcement says, “We are sorry to announce that your train is late”, the voice is not sorry. The corporation that employs and uses that voice is not sorry either. However, the company sees value in appeasing its customers by offering an apology and an automated announcement is the cheapest way of providing that apparent apology. If it is not capable of experiencing pain and suffering, can it be truly empathetic?

Furthermore, as a machine cannot be punished or incarcerated in any meaningful sense—although it might be rehabilitated through reprogramming—the notion of consequence of actions has little meaning to it. If a machine apologises, serves a prison sentence or is put in solitary confinement, has it been punished? The basis of responsibility built on an understanding of ethics and morality does not exist. It is certainly not the sole by-product of the intelligence level of the machine.

Lord Reid also picked up on particular responsibilities that the tech industry needs to get to grips with. It’s a theme that the likes of Salesforce CEO Marc Benioff and Apple’s Tim Cook have aired of late - the ethical responsibilities of business. It’s also one that requires a lot of soul-searching, warned His Lordship, and a big change of tack by IT business leaders:

Most software today is sold with an explicit disclaimer of fitness for purpose and it is virtually impossible to answer the questions: by whom, against what specification, why and when was this code generated, tested or deployed? In the event of a problem with software, who is responsible? The human owner? The company that supplied the software? The programmer? The Chief Executive of the company that supplied it? I would therefore argue that machine intelligence needs to be subordinate in responsibility to a human controller and therefore cannot in itself be legally responsible as an adult human, although it might in future have the legal status of a corporation or of a minor—that is, intelligent, but below the age of responsibility.

For his part, Lord Clement-Jones closed the debate with his own worldview:

The mantra that I repeat to myself, pretty much daily, is that AI should be our servant not our master. I am convinced that design, whether of ethics, accountability or intelligibility, is absolutely crucial. That is the way forward and I hope that, by having that design, we can maintain public trust. We are in a race against time and we have to make sure we are taking the right steps to retain that trust…This is only the first chapter; there is a long road to come.

My take

A remarkably thought-provoking debate - although, as our sister site diginomica/government notes, there's little sign of it turning into deliverable policies at government level, in the UK at least.
