In a previous diginomica article, I reviewed a report, “A Review of Why Are We Failing at the Ethics of AI?” by Anja Kaspersen and Wendell Wallach, senior fellows at Carnegie Council for Ethics in International Affairs. The authors' premise is:
The last few years have seen a proliferation of initiatives on ethics and AI. Whether formal or informal, led by companies, governments, international and non-profit organizations, these initiatives have developed a plethora of principles and guidance to support the responsible use of AI systems and algorithmic technologies. Despite these efforts, few have managed to make any real impact in modulating the effects of AI.
They see three primary reasons for this:
First, many of the existing dialogues around the ethics of AI and governance are too narrow and fail to understand the subtleties and life cycles of AI systems and their impacts.
The second major issue is that, to date, all the talk about ethics is simply that: talk.
The third issue is that discussions on AI and ethics are still largely confined to the ivory tower.
I couldn’t agree more with the first two, but I’ve come across an “ivory tower” paper, The Forgotten Margins of AI Ethics, that begins with this:
How has recent AI Ethics literature addressed topics such as fairness and justice in the context of continued social and structural power asymmetries?...We note that although the goals of the majority of... papers were often commendable, their consideration of the negative impacts of AI on traditionally marginalized groups remained shallow... the field would benefit from an increased focus on ethical analysis grounded in concrete use-cases, people’s experiences, and applications as well as from approaches that are sensitive to structural and historical power asymmetries.
If that isn’t enough, they describe the Western canon of ethics:
Ambition for universal principles in ethics has been dominated by a particular mode of being human, one that takes a straight, white ontology as a foundation and recognizes the privileged Western white male as the standard, quintessential representative of humanity
Translation: the AI Ethics literature and conferences they survey are grounded in this Western ethics, which leads to the exclusion of marginalized perspectives from AI Ethics. This is evident, and not much of a surprise, as white Western ontologies are the bedrock of AI Ethics as we know it.
Ruha Benjamin (disclosure: I am highly impressed with her work) makes a useful connection to Jim Crow in coining the term the “New Jim Code.” She argues that we might consider race and racialization as, itself, a technology to “sort, organize, and design a social structure.” Thus, AI Ethics work addressing fairness, bias, and discrimination must consider how a drive for efficiency, and even inclusion, might not actually move toward a “social good,” but instead accelerate and strengthen technologies that further entrench conservative social structures and institutions.
This is a pretty powerful indictment of the AI ethics field. It desperately needs to turn its attention away from the current echo chamber of traditional ethics and the Western canon and look to broader social issues, with specific emphasis on harms. Ethicists need to get their arms around the situation on the ground: pay more than lip service to existing and historical injustices and power asymmetries, envision an alternative future, and work toward it to bring about material change for those most negatively impacted by technology.
Unfortunately, most “ethicists” are unprepared, by both education and training, to engage in this kind of activist research. That will cull the herd, which is overpopulated, in my opinion, and will allow the field to move forward by recognizing that:
Examining the broader social, cultural, and economic context as well as explicitly emphasizing concrete and potential harms is crucial when approaching questions of AI ethics, social impact of AI, and justice.
Efforts to “de-bias” data, conduct algorithmic audits, and run toothless fairness tests like disparate impact analysis may achieve the opposite effect.
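To see how mechanical that kind of testing can be, here is a minimal Python sketch of the common “four-fifths rule” disparate impact check. All function names, group labels, and numbers are my own illustration, not from the paper:

```python
# Minimal sketch of the "four-fifths rule" disparate impact check.
# All names and figures below are illustrative assumptions.

def disparate_impact_ratio(selected_protected, total_protected,
                           selected_reference, total_reference):
    """Ratio of selection rates: protected group vs. reference group."""
    rate_protected = selected_protected / total_protected
    rate_reference = selected_reference / total_reference
    return rate_protected / rate_reference

# Hypothetical example: 30 of 100 protected-group applicants approved,
# versus 60 of 100 reference-group applicants.
ratio = disparate_impact_ratio(30, 100, 60, 100)
print(round(ratio, 2))   # 0.5
print(ratio >= 0.8)      # False: fails the four-fifths threshold
```

The sketch also illustrates why such a test can be toothless: a system can clear the 0.8 threshold on this single ratio while the structural harms the paper describes remain completely untouched.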
This is a long paper, so let me wrap this up and add my thoughts with a summary of their conclusions, beautifully put:
If AI Ethics is to protect the welfare and well-being of the most negatively impacted stakeholders, it needs to be guided by the lived experiences and perspective of such stakeholders. The field also needs to treat AI Ethics as non-divorcible from day-to-day life and something that can’t emerge in a historical, social, cultural, and contextual vacuum. A review of the field via analysis of the two most prominent conferences shows that there is a great tendency for abstract discussion of such topics devoid of structural and social factors and specific and potential harms.
Given that the most marginalized in society are the most impacted when algorithmic systems fail, we contend that all AI ethics work, from research, to policy, to governance should pay attention to structural factors and actual existing harms. In this paper, we have argued that given oppressive social structures, power asymmetries, and the uneven harm and benefit distributions work in AI Ethics should pay more attention to these broad social, historical, and structural factors in order to bring about actual change that benefits the least powerful in society.
Missing from this paper is that “harm” is not limited to people. Elections affect people, but institutions are harmed as their own entities: bias can have a devastating effect on government itself. Biased assumptions can produce dysfunctional supply chains, or create delivery routes that disadvantage some businesses unfairly.
The forces lined up against actual progress on fairness in AI are strong. Organizations may not care about fairness. Companies will resist any efforts that affect revenue. Governments will oppose any actions that could cost them elections (disinformation, gerrymandering, voter suppression), law enforcement will turn a blind eye to discrimination with a herd mentality, and international organizations, like UN agencies, generally lack the teeth to impose their guidelines.
All of this makes the guidance and recommendations of this commendable paper crucial. I’ve been writing about AI ethics for five years, and when I reviewed my early articles, I was guilty of the same naivete as the ethicist cabal. Currently, AI ethics is emerging as a lush commercial field with explainability, AIOps, and bias detection (if that’s even possible). The authors of this paper use nice words for things that I have been a little more direct about, as I’ve said many times: AI didn’t create bias. We did, so we have to fix “we,” not just algorithms.