Can fairness be automated with AI? A deeper look at an essential debate

Neil Raden, January 6, 2021
Summary:
I've addressed whether fairness can be measured - but can it be automated? These are central questions as we contend with the real-world consequences of algorithmic bias.


In part one, I examined some noted ethicists' opinions about fairness measurement - and found some reasonable, and some incomplete (Can we measure fairness? A fresh look at a critical AI debate).

In this article, I will begin with an example that was in dire need of fairness assessment. I will also introduce another method for fairness assessment. And finally, I'll try to reconcile some differing opinions among Reid Blackman, some Oxford scholars, and myself.

I want to start with an example where the fairness measurement described in Part 1 could have avoided nearly catastrophic results. We often talk about A.I. ethics in terms of how it affects specific subgroups, and even then, one person at a time. We neglect how careless A.I. affects whole populations, communities, or even the economy, the climate, and the ecosystem, or even the breakdown of civil society (a topic for the next article).

Ofqual: In the U.K., the A-level college entrance exams (roughly the counterpart of the College Board's exams in the U.S.) are overseen by Ofqual, and its A.I. adventure landed it in serious hot water. Because of COVID-19, the exams could not be given on-site. Instead, Ofqual carelessly, almost nonsensically, put together an algorithm that adjusted scores partly on the basis of each school's previous results. Because "the average is as close to the bottom as to the top," the hue and cry was furious: artificially deflated scores penalized students in disadvantaged schools, and their A-list university futures were trashed.

What can we learn from the Ofqual case? Because Ofqual was transparent about how the algorithm was developed, the problem was quickly discovered. The resolution was that grades for the year would be teacher-assessed rather than based on tests the students never took. It's safe to say that grave damage was done to some students, so Ofqual's transparency and haste in correcting the mistake is a good lesson in how much damage ill-advised A.I. can do. Algorithms must be subject to quantitative fairness testing before being released to the public; otherwise, the continuing reflection and reinforcement of prejudices holds society and business back. It is also a well-known practice for coders to simply drop open source code into what they are developing with no clear understanding of what it contains. Proprietary algorithms should not be embedded without understanding how they operate, or used to obscure sub-par code from inspection.

Before we can construct fairness metrics, we have to understand what the model did. How can you evaluate a fairness metric or modify the model if you don't have a grip on its internal "thinking"? The prevailing method for reverse-engineering the output of a predictive algorithm for explainability is SHAP (SHapley Additive exPlanations), introduced by Lundberg and Lee in a 2017 paper. SHAP values are used with complex models (such as neural networks or gradient boosting) to understand why the model makes the predictions it does.

The term SHAP derives from Shapley values, a concept from game theory, which is the study of how and why people and programs make decisions. Formally, game theory uses mathematical models of conflict and cooperation between intelligent, rational decision-makers, human or otherwise. Game theory needs two things: a game and some players. For a predictive model, the game is reproducing the model's output, and the players are the features. The Shapley value keeps track of what each player contributes to the game, and SHAP aggregates how each feature affects the predictions. The gap between the model's prediction with a given feature included and its prediction without that feature can be construed as the effect of that feature. This is called the "marginal contribution" of a feature.
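For readers who do want to see one formula, the classical Shapley value of a feature i (this is standard game theory, not anything specific to Lundberg and Lee's implementation) averages that marginal contribution over every possible coalition of the other features:

$$\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl[v(S \cup \{i\}) - v(S)\bigr]$$

Here N is the set of all features (the players) and v(S) is the model's prediction when only the features in S participate; the bracketed term is the marginal contribution described above.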

Skipping a few dozen steps, from there you can calculate the SHAP value of a feature, such as age, and summing the SHAP values across all features yields the difference between the model's prediction and that of the null model (the average prediction). Hopefully, without the need to see a dozen formulas, you can see how this applies to understanding fairly complex relationships between features. It is an excellent first step in understanding where to look for bias. There is an excellent and thorough explanation of this, without much mathematical notation, at Explaining Learning to Rank Models with Tree Shap.
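To make that sum-to-prediction property concrete, here is a minimal sketch using the open-source shap package with a scikit-learn gradient boosting model; the synthetic data and choice of model are my own illustrative assumptions, not anything from the sources above.

```python
# Minimal sketch: SHAP values for a single prediction, via the shap package.
# The synthetic data and gradient-boosting model are illustrative assumptions.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)        # one value per feature, per row

row = 0
base = float(np.ravel(explainer.expected_value)[0])   # the "null model" (average) prediction
prediction = model.predict(X[[row]])[0]

# For any row, the SHAP values sum to the gap between that row's prediction
# and the base value -- which is what makes them useful for attribution.
print(prediction, base + shap_values[row].sum())      # approximately equal
```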

The whole field of explainability is brimming with new ideas. SHAP is just one. The sequence one might follow is: 

  1. Test the model
  2. Understand what happened (using SHAP, for example)
  3. Evaluate the output with your fairness metrics (a minimal sketch follows this list)
  4. Run and repeat.
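As one concrete illustration of step 3, here is a minimal sketch that checks a model's decisions against a simple group fairness metric, the demographic parity difference; the predictions, group labels, and threshold are illustrative assumptions.

```python
# Minimal sketch of step 3: compare positive-decision rates across groups.
# The predictions and group labels below are illustrative assumptions.
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap between the highest and lowest positive-prediction rates across groups."""
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])                     # model decisions
sensitive = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # protected attribute
print(demographic_parity_difference(y_pred, sensitive))         # 0.5: a large gap
```

If the gap exceeds whatever threshold your policy sets, you loop back to steps 1 and 2 and use the SHAP values to see which features are driving it.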

A new wrinkle in the "automating fairness" debate

Consider Why Fairness Cannot Be Automated: Bridging the Gap between EU Non-discrimination Law and AI, by Sandra Wachter, Brent Mittelstadt, and Chris Russell. The Oxford Internet Institute (OII) is a multidisciplinary research and teaching department of the University of Oxford, dedicated to the social science of the Internet. But a press release published 14 December 2020 by the OII, "AI modeling tool developed by Oxford academics incorporated into Amazon anti-bias software," seems to contradict the body of their paper:

A new method to help better detect discrimination in AI and machine learning systems created by academics at the Oxford Internet Institute, University of Oxford, has been implemented by Amazon in their new bias toolkit, ‘Amazon SageMaker Clarify', for use by Amazon Web Services customers.

From the title, one might assume the paper contradicts these approaches to measuring fairness, but on closer examination, the authors claim fairness can be measured; it just can't be solved computationally. They point out: "Due to the disparate nature of algorithmic and human discrimination, the EU's current requirements are too contextual, reliant on intuition, and open to judicial interpretation to be automated." They don't claim fairness can't be automated in principle, but rather that government regulations (the EU's in particular) are too imprecise for automation.

They argue why fairness cannot and should not be automated, and propose Conditional Demographic Disparity (CDD) as an evidential baseline to ensure a consistent procedure for assessment (but not interpretation) across cases involving potential discrimination caused by automated systems. First, let's define CDD. It's relatively simple: demographic disparity (DD) asks whether a protected group receives a larger share of the negative outcomes (rejections, say) than of the positive ones; CDD averages that disparity within strata defined by a legitimate conditioning attribute, such as qualification level, weighting each stratum by its size.
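As a rough illustration (a minimal sketch under my own assumptions about column names and a toy dataset, not the authors' or Amazon's code), CDD can be computed like this:

```python
# Minimal sketch of Conditional Demographic Disparity (CDD).
# Column names and the toy loan data are illustrative assumptions.
import pandas as pd

def demographic_disparity(df, group_col, group, outcome_col):
    """DD = the group's share of rejections minus its share of acceptances."""
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    p_rejected = (rejected[group_col] == group).mean() if len(rejected) else 0.0
    p_accepted = (accepted[group_col] == group).mean() if len(accepted) else 0.0
    return p_rejected - p_accepted

def conditional_demographic_disparity(df, group_col, group, outcome_col, strata_col):
    """CDD = average of DD within each stratum, weighted by stratum size."""
    total = len(df)
    return sum(
        (len(stratum) / total)
        * demographic_disparity(stratum, group_col, group, outcome_col)
        for _, stratum in df.groupby(strata_col)
    )

# Toy example: loan decisions stratified by a (hypothetical) credit-score band.
df = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "A", "B", "A", "B"],
    "approved": [ 1,   0,   1,   1,   0,   1,   1,   0 ],
    "band":     ["hi", "hi", "hi", "hi", "lo", "lo", "lo", "lo"],
})

# A positive value would suggest group B bears a disproportionate share of
# rejections even after conditioning on the credit-score band.
print(conditional_demographic_disparity(df, "group", "B", "approved", "band"))
```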

If you recall, in Part 1, Reid Blackman offered an example of why he feels that reducing fairness to this kind of statistical parity is ridiculous. Here is what he said:

Suppose I'm torturing you and you scream, ‘How dare you?! This is unjust! This is unfair!' I reply, ‘Look, it's perfectly fair because I'm torturing relevant subpopulations at equal rates. White, black, Asian, Latinx, gay, trans, and more - it's 10% of each population.' Now that's absurd. That's because fairness and justice cannot only be captured by notions of statistical parity.

The answer here is that we need tools to ferret out bias, and we need tools to evaluate fairness. However, we don't have tools that do either completely. That is, I believe, what Reid meant when he said, "There is no mathematical formula for fairness and there never will be." Math can't solve fairness, but it can measure it, and it can discover, at least to some extent, what its causes are.

The OII paper admitted that CDD is no cure for bias, but it does provide a baseline to investigate models that exhibit bias. 

My take

I've seen narrow increases in "fairness" celebrated as progress, such as loosening a credit-approval algorithm for borrowers who don't meet every criterion. Bravo, it's good to see some progress. But buried in the rationalization is the fact that the requirements were biased and discriminatory in the first place. That isn't fair. These tepid efforts are baby steps toward addressing long-standing discrimination.

It's clear that ferreting out bias is a necessary step toward improving the fairness of models:

  1. Bias cannot be solved (completely) computationally, yet computational tools to help are sorely needed. Unfortunately, bias detection itself relies on machine learning and neural nets, both of which are likely trained on biased data.
  2. Bias elimination has to be a process of bias detection and remediation, governed by clearer laws and regulations, education, and ethical frameworks.
  3. Fairness can and must be measured but cannot be solved computationally.