Why are we failing at the ethics of AI? A critical review

Neil Raden, May 17, 2022
Summary:
Sometimes the AI ethics conversation appears to be a morass of well-meaning platitudes, without forward direction. A notable article from the Carnegie Council for Ethics in International Affairs brings these issues to a head.


Anja Kaspersen and Wendell Wallach are senior fellows at Carnegie Council for Ethics in International Affairs. In November 2021, they published an article that changed the AI ethics conversation: Why Are We Failing at the Ethics of AI?

Six months later, the questions the article raised are no closer to resolution. Theirs was a pull-no-punches review of the state of AI ethics, with which I am in almost complete agreement. If we want to advance the AI conversation, it is still a good place to start.

I’ve quoted a portion of their article, with my comments interspersed:

While it is clear that AI systems offer opportunities across various areas of life, what amounts to a responsible perspective on their ethics and governance is yet to be realized. This should be setting off alarm bells across society.

It hasn't set off alarm bells; the response has been more like a whimper from parties fixated on the word "ethics" without a broader understanding of the complexity of current AI technology. The authors continue:

The current inability of actors to meaningfully address AI ethics has created a perfect storm: one in which AI is exacerbating existing inequalities…

I've repeatedly pointed out that AI didn't invent bias. We did. That doesn't mean AI doesn't own it now, but remedies derived exclusively from AI technology are insufficient. To frame solutions, we have to be realistic about the long history of bias in quantitative and statistical decision systems, a history that endures:

…while simultaneously creating new systemic issues at a rapid pace. But why hasn't this issue been effectively addressed? Ironically, the lack of progress on AI governance cannot simply be explained by a lack of effort.

Unfortunately, the effort has been driven by a focus on ethics, which may help inform what is right and what isn't, but does little to guide the discovery of the sources of bias in AI, or of solutions to it.

In fact, the last few years have seen a proliferation of initiatives on ethics and AI. Whether formal or informal, led by companies, governments, international and non-profit organizations, these initiatives have developed a plethora of principles and guidance to support the responsible use of AI systems and algorithmic technologies. Despite these efforts, few have managed to make any real impact in modulating the effects of AI.

A case in point: I reviewed a submission to the Stanford Human-Centered Artificial Intelligence conference on Shared Prosperity Targets for the AI Industry, and followed up on other material from the submitter's site, the AI and Shared Prosperity Initiative.

The premise, briefly, is that “when AI companies develop new technologies, they should be required to perform a distributive impact assessment to ensure that inventions enhance human job opportunities rather than solely displacing human workers" - sort of like an Environmental Impact Statement. 

Right off the top, why limit it to workers? Are we not concerned with the welfare of the young, the old, the infirm, oppressed minorities? Wouldn't a scheme like this, even if it didn't displace human workers (presumably those making $7.25/hour), merely reinforce the status quo?

I'd put this in the class of aspirational proposals without any foundation for success. More importantly, it seems to overlook the baseline we have at present, and why we haven't fixed that.

Have a look at these key findings from the Robert Wood Johnson Foundation Survey:

Leading technology companies now have effective control of many public services and digital infrastructures through procurement or outsourcing schemes. For example, governments and health care providers have deployed AI systems and algorithmic technologies at an unprecedented scale in applications such as proximity tracking, tracing, and bioinformatic responses, triggering a new economic sector in the flow of biodata.

Extremely troubling is the fact that the people who are most vulnerable to negative impacts from such rapid expansions of AI systems are often the least likely to be able to join the conversation about these systems, either because they have no or restricted digital access or their lack of digital literacy makes them ripe for exploitation.

That's exactly right. If you push something out that is blatantly harmful, the group most affected is often the group least likely to have any influence on changing it. Still, they are often like canaries in a coal mine: how they are affected can bubble up and get addressed. See, for example, the Ofqual exam-grading mess in England. Back to the AI ethics failure post:

Society should be deeply concerned that nowhere near enough substantive progress is being made to develop and scale actionable legal, ethical oversight while simultaneously addressing existing inequalities.

So, why hasn't more been done? There are three main issues at play:

First, many of the existing dialogues around the ethics of AI and governance are too narrow and fail to understand the subtleties and life cycles of AI systems and their impacts.

First of all, these dialogues occur mainly in Europe and, to a lesser extent, the United States. In addition, the participants’ agenda focuses on “the good” for AI from their academic or post-academic perspective. The EU initiatives, UNESCO, and the like take, as the authors argue, too narrow a view.

Often, these efforts focus only on the development and deployment stages of the technology life cycle, when many of the problems occur during the earlier stages of conceptualization, research, and design. Or they fail to comprehend when and if an AI system operates at a level of maturity required to avoid failure in complex adaptive systems.

Or they focus on some aspects of ethics while ignoring other aspects that are more fundamental and challenging. This is the problem known as "ethics washing" – creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns.

I feel that ethics washing does not stem from too narrow an approach to the technology (though that is a factor). I don't believe you can engage in ethics washing unless you are corrupt, ignorant, or insensitive:

The second major issue is that to date all the talk about ethics is simply that: talk.

There is a lot of talk. A few years ago, Harvard and the other premier MBA programs introduced ethics courses into their curricula. I think everyone pretty much treated it as a joke, essentially a form of virtue signaling without actual substance:

Major areas of concern include the power of AI systems to enable surveillance, pollution of public discourse by social media bots, and algorithmic bias: in a variety of sensitive areas, from health care to employment to justice, various actors are rolling out AI systems that may be brilliant at identifying correlations but do not understand causation or consequences.

I wouldn't fault any organization for not grasping causation. ML doesn't address it. Consequences are a different matter.
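
To make that concrete, here is a minimal, hypothetical sketch (my own, not from the article) of how a machine learning model leans on a feature that merely correlates with an outcome through a hidden confounder. The feature names and data are invented for illustration:

```python
# Hypothetical sketch: a model puts weight on a feature that merely correlates
# with the outcome through a hidden confounder. Names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Unobserved confounder: household wealth drives both the zip-code group
# and actual repayment. Zip code itself has no causal effect on repayment.
wealth = rng.normal(size=n)
zip_group = (wealth + rng.normal(scale=0.5, size=n) > 0).astype(float)
reported_income = wealth + rng.normal(scale=1.0, size=n)   # noisy measurement
repaid = (wealth + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(
    np.column_stack([zip_group, reported_income]), repaid
)

# The model assigns real weight to zip_group because it is a strong correlate
# of the hidden confounder; changing someone's zip code would not change
# whether they repay, but the model has no way to know that.
print(dict(zip(["zip_group", "reported_income"], model.coef_[0].round(2))))
```

The model never asks whether changing the proxy would change the outcome; it only asks whether the proxy helps it predict.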

There are also questions unaddressed around potential downstream consequences such as the environmental impact of the resources required to build, train, and run an AI system, interoperability and the feasibility of safely and securely interrupting an AI system.

Organizations can be victims of their own biased systems: exposure to fines, loss of prestige, damage to their brands from a poor customer experience, disrupted distribution from faulty models or faulty interpretations of them. There is also what I call subsequential bias: the secondary and tertiary unintended effects of your model. As your model operates, no matter how thoroughly you scrubbed the unethical aspects, its results can create opportunities for unethical secondary and tertiary effects. I call these subsequential phenomena.
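
As a rough, hypothetical illustration of what I mean (my own sketch, not drawn from the article): a lending model's approval decisions determine which repayment outcomes the organization ever observes, and thresholds retuned on that selected data quietly open a gap between two groups with identical underlying ability. All group names, thresholds, and rates below are invented.

```python
# Hypothetical sketch of subsequential effects: the model's own decisions
# reshape the data the organization observes next, and the secondary effect
# compounds over time. Groups, thresholds, and rates are invented.
import numpy as np

rng = np.random.default_rng(1)

def simulate(rounds=5, n=5_000, target_default=0.20):
    # Two groups with identical true ability to repay; group B's credit
    # scores are simply a noisier signal of that ability.
    thresholds = {"A": 0.5, "B": 0.5}
    noise = {"A": 0.3, "B": 0.8}
    for r in range(rounds):
        for g in ("A", "B"):
            ability = rng.normal(size=n)
            score = ability + rng.normal(scale=noise[g], size=n)
            approved = score > thresholds[g]
            repaid = ability[approved] > 0      # outcomes seen only for approvals
            observed_default = 1 - repaid.mean()
            # Secondary effect: the lender retunes each group's threshold
            # using the default rate in its *selected* sample of approvals.
            thresholds[g] += 0.5 * (observed_default - target_default)
        print(f"round {r}: threshold A={thresholds['A']:.2f}, B={thresholds['B']:.2f}")

simulate()  # a gap between A and B opens up even though ability is identical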

A third issue at play is that discussions on AI and ethics are still largely confined to the ivory tower.

The last place I want to go for advice on creating good AI models is a Professor of Philosophy. 

There is an urgent need for more informed public discourse and serious investment in civic education around the societal impact of the bio-digital revolution. This could help address the first two problems, but most of what the general public currently perceives about AI comes from sci-fi tropes and blockbuster movies.

Now, as the dialogue progresses, it is true that the ivory tower gets far more ink on this subject than other thoughtful people, partly because early concerns about AI were mainly about the Singularity and robot empowerment. I believe that's why ethics burst onto the scene with AI but not with its predecessors, which were just as problematic.

Concepts such as ethics, equality, and governance can be viewed as lofty and abstract. There is a critical need to translate these concepts into concrete, relatable explanations of how AI systems impact people today. Non-technical people wrongly assume that AI systems are apolitical by nature, not comprehending that structural inequalities will occur, particularly when such systems encounter situations outside the context in which they were created and trained.

How AI systems “impact people” is only part of the problem.

Language is rooted in culture—the new is only understood by analogy to the familiar—and finding the right metaphors or tools is particularly difficult when so much about AI is unlike anything that has gone before.

Large-scale technological transformations have always led to deep societal, economic, and political change, and it has always taken time to figure out how best to respond to protect people's wellbeing. 

My article, AI and Human Rights, delves into this: each burst of technology has often had devastating effects on human rights.

We must work to build on existing expertise and networks to expedite and scale ethics-focused AI initiatives to strengthen anthropological and scientific intelligence.

I don’t know. “Anthropological” is as academic as ethics. While both may inform your decisions, neither provides an answer, and both run the risk of a dialectical morass. This seems to invite more ivory-tower input, and to encourage those with ethics and moral-philosophy backgrounds to keep it up. The assumption behind lecturing people about ethics is that they don’t know right from wrong. Most do. They just don’t know what to do about it.

Establish a new dialogue, empower all relevant stakeholders to meaningfully engage, unpack practical and participatory ways to ensure transparency, ascribe responsibility, and prevent AI from driving inequality in ways that potentially create serious social harms.

I am impressed by this article but a little disappointed by the wishy-washy conclusion.
