The problem with 'Ethics by Design' - why this WEF report gets AI Ethics wrong, and 25 techniques for producing trustworthy AI

Neil Raden, February 26, 2021
Summary:
A recent WEF report called "Ethics by Design" reinforces problematic misconceptions about AI ethics. Here's where I think the report went wrong - and my own personal 25 point list for avoiding AI project speed bumps.


I recently delved into a December 2020 WEF report, Ethics by Design: An organizational approach to responsible use of technology. Unfortunately, this white paper underscores the numerous flaws in the current approach to AI ethics that I've been critiquing on diginomica.

It essentially applies the theories of Richard Thaler, Daniel Kahneman, and Amos Tversky, and especially the paper Treating Ethics as a Design Problem by Nicholas Epley and David Tannenbaum. There is no counter-argument; the paper seems to assume that these theses are entirely applicable.

The entire paper is constructed around these psychological concepts - Epley's pillars of Attention, Construal, and Motivation - and how to implement them. They underpin the narrative as a whole, which aims to get people, primarily implementers, to think about AI ethics.

How did the authors validate the approach? We have to take it on faith, it seems. I've read that paper, and it never occurred to me that those theories could be the basis of solving the manifest problems we have with A.I. today.

A colleague of mine described it this way: "It is what we call a 'peanut butter' report -- it is trying to cover a whole lot of different organizations on a whole lot of topics, and identify common themes. So that is what this is -- a report of themes with examples along the way. For companies or individuals new to this area, I am sure this is a helpful start, but for folks with any expertise, it's not useful. I believe this doc is aimed at all of the C-suite & companies that haven't started thinking about this, to get them to start."

She is too kind, though I do like the peanut butter description. It feels to me that the report authors said to themselves: "Let's wrap this around some construct from psychology and the premise that AI Ethics is a matter of the right people (executives) jawboning employees and creating collaboration and empathy." Think about reducing traffic fatalities. Do you go on a program of sensitizing your engineers to traffic safety ethics, or do you have practices and programs to ensure you are producing the safest vehicles? 

So how do genuinely maleficent AI applications get into the mix? Take the example of COMPAS, which was (and still is) widely used by judicial systems to predict recidivism and inform sentencing. Its outputs were so vastly skewed against people of color that it was clear something was wrong. The system developers were motivated to be the first to rush something to market and did not take the time to evaluate just how bad their system was.

What could have been done? Could we have sent them to a three-day class on AI ethics? Probably not. Did the AI engineers gleefully create a system they knew would put black people in prison three times longer than white people? Hopefully not. What was lacking was an understanding of techniques to test models for fairness, which I'll explain below, and the executives' business model left no room for it. That is my major criticism of this report: it assumes the executives are the ones to imbue ethical practices. In this case, it was the executives who caused the problem.

From the report:

Leaders must prepare their people to be aware of the ethical risks posed by emerging tools, equip them to make ethical choices even in situations in which information is imperfect or ambiguous, and motivate them to act upon that judgment in ways that advance prosocial goals.

This raises two questions for me about appointing leaders to be the ethical compass. First, where did these leaders develop their ethical sensibilities about AI? Second, leaders are not typically concerned with these kinds of issues; they have strategy and results to worry about. And don't forget, one of the most corrupt companies in history, Enron, had a 75-page ethics manual.

Ethics can't be taught in a class or a seminar, or even in a book. You learn ethics as you mature. Or you don't. How much is there to know when it comes to developing systems in the social context? What is there to think about? Don't unfairly discriminate, don't engage in disinformation, don't invade people's privacy, don't conduct unfair computational classification and surveillance. Just don't do this stuff. But it's gotten obvious that thinking about ethics is not the solution. You need countermeasures.

But some pressures come into play:

  • When your organization pressures you to do things that may not seem ethical
  • You adopt an "it's only the math" excuse or "that's how we do it." You engage in fairwashing (described below)
  • You don't know that you're doing these things
  • The whole process is so complicated that there is opacity in the operation, and the result
  • Your organization is not arranged for introspection before you embark on a solution
  • There is an "aching desire" to do something cool that obscures your judgment
  • Four fundamental tensions 
    1. Deploying models for customer efficiency versus preserving their privacy 
    2. Significantly improving the precision of predictions versus preserving fairness and non-discrimination 
    3. Pressing the boundaries of personalization versus preserving community and citizenship
    4. Using automation to make life more convenient versus de-humanizing interactions

None of these things requires deep psychological manipulations on the part of management. And the whole narrative gives me a creepy feeling that implies a paternalistic approach - those above you engage in activities to make you ethical. I think it's the other way around. In fact, I know it. This sort of thinking is endemic among "strategy" consultants who only see the C-suite perspective.

A quick word count from this report illuminates how unhelpful it is:

Touchy-feely stuff

  • Attention: 50
  • Construal: 41
  • Motivation: 37
  • Thaler: 12
  • Epley: 23
  • Executive: 27
  • Culture: 47

Pertinent to the subject of A.I. Ethics

  • Fairness: 4
  • Bias: 11
  • Privacy: 16 
  • Discrimination: 0
  • Explainability: 1
  • Transparency: 4
  • Developer: 7

Another quote from the report: "The behavioural scientist Iris Bohnet states: 'Instead of trying to de-bias mindsets, the evidence suggests that we should focus on de-biasing systems.'" And that is precisely what we should do.

Avoiding AI speed bumps - a 25-point list

What organizations need are tools to help them understand whether their AI initiatives are unfair or even illegal. We don't need to take a course in deontological versus consequentialist ethics to see that. If you want to put out trustworthy AI applications, you need tools. Here is a list of things you should acquaint yourself with to avoid AI speed bumps:

XAI (explainable AI) - explainability to:

  • Justify
  • Control
  • Improve
  • Discover

Opacity

  • Deliberate opacity by corporations, governments, and, increasingly, data brokers
  • Opacity when the investigator is not qualified to understand the process
  • Opacity that is inevitable because of machine learning algorithms' scale and the lack of tools to discern their operation

Shapley Additive Explanations

A game-theory approach to understanding which features affect the result.
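As an illustration only, here is a minimal sketch of computing SHAP values with the open-source shap package and a scikit-learn gradient-boosting model; the data and model are made up for the example, not drawn from any system discussed here.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Toy data standing in for a real risk or credit dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes each individual prediction to the input features
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature gives a rough global importance ranking
print(np.abs(shap_values).mean(axis=0))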

Confidence intervals and contrastive explanation

"Why this output (the fact) instead of the other" is a natural way of limiting an explanation to its fundamental causes.

Users prefer contrastive explanations over non-contrastive descriptions in terms of understandability, informativeness, and alignment with their decision-making. This preference increases the perceived importance of the explanation and the willingness to act upon the decision.

Differential Privacy

Carefully calibrated statistical noise can mask individual contributions in data before it is shared or published. See my article The Fragility of Privacy.
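A minimal sketch of the idea, using the Laplace mechanism for a simple counting query; the epsilon value and the data are illustrative, and a real deployment would use a vetted differential-privacy library rather than hand-rolled noise.

import numpy as np

def dp_count(values, predicate, epsilon=0.5):
    # A counting query has sensitivity 1 (adding or removing one person
    # changes the count by at most 1), so Laplace noise with scale
    # 1/epsilon gives epsilon-differential privacy.
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 51, 29, 42, 67, 38, 45]
# Noisy answer to "how many people are over 40?" - differs on every run
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))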

Digital Phenotyping

Defined by Jukka-Pekka Onnela in 2015 as the "moment-by-moment quantification of the individual-level human phenotype in situ" using data from personal digital devices, in particular smartphones (Digital phenotyping - Wikipedia).

Slope Bias

Slope bias can occur when the validity coefficients for a predictor differ for different groups.
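One way to check for it, sketched below with synthetic data and a hypothetical test-score/job-performance example, is to fit the predictor-outcome regression separately for each group and compare the slopes; the numbers here are made up for illustration.

import numpy as np

def ols_slope(x, y):
    # Ordinary least-squares slope of y on x
    slope, _intercept = np.polyfit(x, y, deg=1)
    return slope

rng = np.random.default_rng(1)
# Hypothetical test scores (predictor) and job performance (outcome), by group
x_a = rng.normal(50, 10, 200)
y_a = 0.8 * x_a + rng.normal(0, 5, 200)
x_b = rng.normal(50, 10, 200)
y_b = 0.5 * x_b + rng.normal(0, 5, 200)

print("slope, group A:", round(ols_slope(x_a, y_a), 2))
print("slope, group B:", round(ols_slope(x_b, y_b), 2))
# A large gap means the predictor relates to the outcome differently across
# groups - the regression-based symptom of slope bias.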

Moral Licensing

See my article Moral Licensing.

Quantitative Fairness Testing

For the moment, there is no way to adequately describe the many emerging suggestions for measuring fairness without a bundle of equations. I'll deal with this in a future article.

Disparate Impact

Practices in employment, housing, and other areas that adversely affect one group of people with a protected characteristic more than another, even though the rules applied by employers or landlords are formally neutral (Disparate impact - Wikipedia).
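A minimal sketch of the usual screening check: compare selection rates between groups and flag ratios below the common four-fifths heuristic. The decisions here are made up; a real audit would use production data and add statistical significance testing.

def selection_rate(decisions):
    # Fraction of favourable decisions (1 = approved)
    return sum(decisions) / len(decisions)

# Illustrative outcomes grouped by a protected attribute
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # reference group
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # protected group

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("below the four-fifths threshold - investigate further")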

Anthropomorphizing/Assignment of Responsibility

The danger of anthropomorphizing AI. Does AI make a decision? Does AI learn? 

These digital isomorphisms of the human brain are everywhere in the terminology, alongside jargon like lasso penalties, bagging (a portmanteau of bootstrap aggregation), and boosting. The net effect of all of this is to assign responsibility to the AI model itself. That's not acceptable.

Wishful Mnemonics 

Get a grip on what your AI is capable of telling you.

Using words like "Understand," "Decision," and "Learn" is anthropomorphizing. AI does none of that.

Counterfactuals for Explainability

Counterfactuals provide dynamic or ad hoc explanations of AI output in ways that are intelligible to humans.

They enable the user to consider contrastive explanations and counterfactual analyses, such as why one decision was made instead of another, and to predict how a change to a feature will affect the system's output.
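A minimal sketch of the idea, using a toy logistic-regression "approval" model and a brute-force search over a single feature; the model, the feature names (income_score, debt_score), and the step size are all illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy approval model on two made-up features: income_score and debt_score
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # 1 = approve
model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.2, 0.4]])        # a rejected applicant
print("current decision:", model.predict(applicant)[0])

# Nudge income_score upward until the decision flips (or we give up)
candidate = applicant.copy()
steps = 0
while model.predict(candidate)[0] == 0 and steps < 200:
    candidate[0, 0] += 0.05
    steps += 1

print("counterfactual income_score:", round(candidate[0, 0], 2),
      "-> decision:", model.predict(candidate)[0])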

Conditional Demographic Disparity

The CDD metric gives a single measure of the disparities across all of the subgroups defined by an attribute of a dataset, by taking their weighted average. The formula for conditional demographic disparity is:

CDD = (1/n) · Σᵢ nᵢ · DDᵢ

where Σᵢ nᵢ = n is the total number of observations and DDᵢ is the demographic disparity for the i-th subgroup.
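A minimal sketch of the calculation, assuming DDᵢ is the protected group's share of rejections minus its share of acceptances within each subgroup (the convention used by AWS SageMaker Clarify, for example); the data frame and the conditioning attribute are illustrative.

import pandas as pd

# Illustrative data: outcome (1 = accepted), protected-group membership,
# and a conditioning attribute (e.g. the department applied to)
df = pd.DataFrame({
    "accepted":  [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
    "protected": [0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0],
    "stratum":   ["A", "A", "A", "A", "B", "B", "B", "B", "C", "C", "C", "C"],
})

def demographic_disparity(sub):
    # Protected group's share of rejections minus its share of acceptances
    rejected = sub[sub["accepted"] == 0]
    accepted = sub[sub["accepted"] == 1]
    if len(rejected) == 0 or len(accepted) == 0:
        return 0.0
    return rejected["protected"].mean() - accepted["protected"].mean()

n = len(df)
cdd = sum(len(sub) * demographic_disparity(sub)
          for _, sub in df.groupby("stratum")) / n
print(f"CDD = {cdd:.3f}")   # positive values indicate disparity against the protected group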

A.I. Inevitability Narrative

Rather than dealing with bias, some AI developers embrace it as a strength, in service of an inevitability narrative in which AI will solve all problems.

Methods and practices of teams

AI development teams need to develop a working dynamic that surfaces and avoids these problems.

The primary role of engineers, not executives, in ethics

This is not the role of executives. Engineers have the understanding and skill and can use tools to identify potential errors.

Incentives and punishments are awkward, after-the-fact measures. A single person does not develop AI; sensitivity to ethical practice lies in the group.

Variable Ethics by Application 

Ethicists see AI ethics as a problem of individual ethics, but the developer of a radiology application works from a different set of principles than the designer of a guidance system for a Hellfire missile.

Bias as a Business Process

Credit, insurance, and lending all have bias burned into their processes. An individual applicant's status is a crucial determinant of whether an application is accepted or rejected.

Certain attributes, such as race, age, and gender, are illegal to use in many contexts, yet in life and health insurance they are among the most significant indicators of risk. These kinds of businesses have to be extremely careful that their models work within established boundaries. Elements of fairness to their total constituency are also a consideration.

Legacy effect of NLP training

Training large NLP models is a compute-intensive process. Training sets, often scraped from social media and Wikipedia, have been shown to be biased against people of color. The sources improve regularly, but deployed models may not have been trained on the most current data; if they were trained a few years ago, they may be inaccurate and carry over unfair biases.

Fairwashing

Society requires AI systems to be ethically aligned, which implies fair decisions and explainable results. There are possible pitfalls behind this. Specifically, there is a risk of fairwashing: malicious decision-makers giving fake explanations that make their unfair decisions appear fair.

Immutability

A severe ethical gaffe would be putting an AI application into production that a third party can alter. Microsoft's Tay chatbot, which users quickly manipulated into producing offensive output, was a prime example.

Integrated Gradients

A technique for attributing a classification model's prediction to its input features. It is a model-interpretability technique that visualizes the relationship between input features and model predictions.
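A minimal sketch of the computation for a hand-written logistic unit, approximating the path integral with a Riemann sum as described by Sundararajan et al. (2017); the weights, baseline, and input are illustrative, and in practice you would use a framework implementation such as Captum.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A hand-written logistic unit standing in for a differentiable model
w = np.array([1.5, -2.0, 0.3])
b = 0.1

def model(x):
    return sigmoid(w @ x + b)

def grad(x):
    # d sigmoid(w.x + b) / dx = sigmoid'(z) * w
    p = model(x)
    return p * (1.0 - p) * w

def integrated_gradients(x, baseline, steps=100):
    # Average the gradient along the straight path from baseline to x,
    # then scale by (x - baseline)
    alphas = np.linspace(0.0, 1.0, steps)
    avg_grad = np.mean([grad(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

x = np.array([0.8, 0.2, -0.5])
baseline = np.zeros_like(x)
attributions = integrated_gradients(x, baseline)
print("per-feature attributions:", attributions)
# Completeness check: attributions should sum (approximately) to F(x) - F(baseline)
print(attributions.sum(), "~", model(x) - model(baseline))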

Adversarial Robustness

Machine learning models are vulnerable to adversarial attacks: perturbations added to inputs, designed to fool the model.

Imperceptible to humans

Adversarial examples are inputs to machine learning models designed to intentionally fool them or to cause mispredictions. 
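A minimal sketch of crafting one with the fast gradient sign method (FGSM) against a toy logistic-regression classifier; the data, model, and epsilon values are illustrative, and for real models you would reach for a dedicated library such as the Adversarial Robustness Toolbox.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 5))
y = (X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) > 0).astype(int)
model = LogisticRegression().fit(X, y)

x, label = X[0], y[0]
w = model.coef_[0]

# For logistic loss, the gradient of the loss w.r.t. the input is (p - y) * w,
# where p is the predicted probability of class 1
p = model.predict_proba(x.reshape(1, -1))[0, 1]
grad_wrt_input = (p - label) * w

print("clean prediction:", model.predict(x.reshape(1, -1))[0], "(true label:", label, ")")
for epsilon in (0.1, 0.25, 0.5, 1.0, 2.0):
    x_adv = x + epsilon * np.sign(grad_wrt_input)
    if model.predict(x_adv.reshape(1, -1))[0] != label:
        print(f"epsilon={epsilon}: prediction flipped by an FGSM perturbation")
        break
else:
    print("no flip within the tested epsilon values")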

My take

This World Economic Forum report is not to be taken seriously. It is a curious combination of psychology and organizational change management for AI. Three years of discussion about AI ethics has revealed only that we need tools to do better, not ethical theory and moral philosophy handed down from paternalistic executives to solve this problem.
