Are we prepared to solve for algorithmic bias?

By Den Howlett, September 9, 2018
Summary:
Solving for algorithmic bias is one of the great challenges in technology. Here are examples and methods for examining the problem at a first principles level.

Brian Sommer's opus (see here and here) details the problems with recruiting technology and the biases that are endemic among recruiters, or rather recruiter practices. Taken together, recruiting is but one example of a process where suboptimal outcomes are inevitable in the man-machine interface we call AI-enhanced activities.

Dead on arrival?

Sommer's approach is interesting because he attempts to solve the war for talent from first principles, starting with the processes that identify potential job seekers. Most coverage - including our own on occasion - has chosen to approach the problem from the retention point of view. Sommer appears to advocate for job seekers as career seekers. I like that idea as one approach, but as Sommer points out, there are significant issues in ATS technology, characterized by this opening statement:

The systems used by employers to manage job openings across their enterprises and screen incoming resumes from job seekers, kill 75 percent of candidates' chances of landing an interview as soon as they submit their resumes, according to a recent survey from a job search services provider.

Sommer is not specific, but it strikes me that the ATS systems in use suffer from two problems: inadequate NLP training and poor pattern matching across taxonomies. These are solvable problems, albeit ones that require effort. But then I started to think about the ongoing discussion of how bias remains a common problem among AI-infused systems, and that in turn led me to explore bias as a first principles problem.
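To make the pattern matching point concrete, here is a minimal, hypothetical sketch - the requisition terms, the tiny skills taxonomy and the resume text are all invented for illustration - of how a keyword-only screen rejects a candidate that a taxonomy-aware screen would pass:

```python
# Hypothetical sketch: why literal keyword screening drops qualified candidates.
# A resume that says "statistical modelling" never matches a requisition that
# asks for "machine learning" unless the matcher knows the terms are related.

NAIVE_REQUIRED_TERMS = {"machine learning", "python"}

# A tiny, made-up skills taxonomy mapping surface phrases to canonical skills.
SKILL_TAXONOMY = {
    "machine learning": "machine learning",
    "statistical modelling": "machine learning",
    "predictive analytics": "machine learning",
    "python": "python",
    "pandas": "python",
}

def naive_screen(resume_text: str) -> bool:
    """Pass only if the literal required phrases appear in the resume."""
    text = resume_text.lower()
    return all(term in text for term in NAIVE_REQUIRED_TERMS)

def taxonomy_screen(resume_text: str) -> bool:
    """Map resume phrases to canonical skills before comparing."""
    text = resume_text.lower()
    found = {skill for phrase, skill in SKILL_TAXONOMY.items() if phrase in text}
    required = {SKILL_TAXONOMY[term] for term in NAIVE_REQUIRED_TERMS}
    return required.issubset(found)

resume = "Ten years of statistical modelling and predictive analytics; daily pandas user."
print(naive_screen(resume))     # False - the candidate is screened out
print(taxonomy_screen(resume))  # True  - the same candidate gets through
```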

Understanding human biases

Here's the context: diginomica makes no bones about the fact that it takes positions. These are biases. Most of the time we like to think of them as 'strong opinions, loosely held,' an idea I picked up in 2006 from Bob Sutton, he of the No Asshole Rule. Since then, I've modified my thinking about how strong/loose should be applied: as more information emerges about a situation, we should be prepared to challenge the assumptions that got us to where we are at the moment the next piece of information impacts our decision making.

That approach helps me stay constantly aware of confirmation bias, along with what I like to call 'the bias of past wins' and the 'bias of acquired prejudice.' For an up-to-date view on confirmation bias, I enjoyed listening to Ben Thompson, he of Stratechery, on a Farnam Street podcast where he talks extensively about the need to ask the question: am I simply confirming what I already believe to be true? Check it out below:

On the bias of past wins, I tend to the view that success makes us blind. Technology is changing so rapidly, and introducing new possibilities at such a clip, that we can all too easily assume there's plenty of time to work through any fresh issues. That may be true in mature markets where there is a hegemony of large competitors that succeed by virtue of scale, but that's not to say things are frozen in time. Think how Tesla, while remaining a minnow in the automotive industry, has triggered a wave of invention among competitors.

Then there is the bias of acquired prejudice. This one sits alongside the bias of past wins and confirmation bias, but it acts to shield us from alternative ways to solve problems and actively makes AI-infused code behave in unacceptable ways. At its most insidious, it blocks out alternatives altogether. Check the following video for just one example:

Solving algorithmic bias from first principles

While noodling on this topic, I stumbled across What does it really mean for an algorithm to be biased? from earlier in the year. The article is an academic look by Eric Wang of Stanford at algorithmic bias as a first principles problem. It makes the argument that most people trying to solve for bias fall into two broad camps:

  • There are the optimists, who think algorithmic reasoning is always rational and objective, regardless of the situation. They might even believe that uncomfortable or undesirable results of the data simply reflect “politically incorrect” truths in the data.
  • There are also the pessimists, who are more numerous. The pessimists think algorithmic reasoning is fundamentally flawed, and that all “truly important” decisions should be left to humans. The EU’s General Data Protection Regulation (GDPR), for instance, includes a blanket ban on fully automated decision-making in situations that “significantly affect” users.

Wang posits that neither position is tenable: on the one hand, optimists tend to gloss over existing bias that has led to poor outcomes, while pessimists are not living in the real world. Both positions are arguable, but the more important assertion is that we lack a conceptual framework for the nature of bias - a theory of bias that can be applied to building algorithms.

Wang goes on to distill this into two formal models that have arisen in the last few years. First is an epistemic approach:

In an article published last year in Science, Caliskan, Bryson, and Narayanan provided one of the most famous and striking examples of algorithmic bias: biased word embeddings.

A word embedding is a model that maps English words to high-dimensional vectors of numbers. The model is trained on large bodies of text to correlate semantic similarity with spatial proximity—words with similar meanings should be closer in the embedding space. This property makes word embeddings immensely useful for a number of techniques in natural language processing.
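To make the geometry concrete, here is a toy sketch. The three-dimensional vectors below are invented for illustration (real embeddings such as word2vec or GloVe have hundreds of dimensions), but the association test is a miniature version of the kind of measurement Caliskan et al. describe:

```python
# Toy illustration of bias surfacing in word embedding geometry.
# The vectors are made up for the example; real embeddings are learned from text.
import numpy as np

embeddings = {
    "man":    np.array([0.9, 0.1, 0.0]),
    "woman":  np.array([0.1, 0.9, 0.0]),
    "doctor": np.array([0.8, 0.2, 0.3]),
    "nurse":  np.array([0.2, 0.8, 0.3]),
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means 'pointing the same way'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Which gendered word does each occupation sit closer to?
for job in ("doctor", "nurse"):
    to_man = cosine(embeddings[job], embeddings["man"])
    to_woman = cosine(embeddings[job], embeddings["woman"])
    print(f"{job}: man={to_man:.2f}, woman={to_woman:.2f}")

# With vectors learned from ordinary text corpora, 'doctor' tends to land nearer
# 'man' and 'nurse' nearer 'woman' - the association Caliskan et al. measured.
```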

This model is aligned with some of the issues that Sommer describes - associations such as doctor = male, nurse = female. There are promising solutions that focus on 'debiasing' (sketched further below), the details of which are included in the article and which need time to absorb. Check it out for yourself. But such are the nuances of algorithmic outcomes that the author is careful to note that:

What constitutes a terrible bias or prejudice in one application might actually end up being exactly the meaning you want to get out of the data in another application.

In addition, there are other considerations to take into account, such as the concept of 'fairness' and how it is situated in a specific decision process.
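For a flavour of what 'debiasing' can mean, here is a rough sketch in the spirit of one widely cited technique (Bolukbasi et al., 2016) - which may or may not be the approach Wang's sources have in mind - applied to the toy vectors from the earlier sketch: estimate a gender direction, then remove that component from words that ought to be gender-neutral.

```python
# Rough, illustrative debiasing sketch using the toy vectors from the earlier example.
import numpy as np

man    = np.array([0.9, 0.1, 0.0])
woman  = np.array([0.1, 0.9, 0.0])
doctor = np.array([0.8, 0.2, 0.3])
nurse  = np.array([0.2, 0.8, 0.3])

# Estimate a 'gender direction' from the gendered anchor words.
g = man - woman
g = g / np.linalg.norm(g)

def neutralize(v, direction):
    """Remove the component of v that lies along the bias direction."""
    return v - np.dot(v, direction) * direction

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for name, vec in (("doctor", doctor), ("nurse", nurse)):
    fixed = neutralize(vec, g)
    print(name, round(cosine(fixed, man), 2), round(cosine(fixed, woman), 2))

# In this toy example, each occupation ends up equidistant from 'man' and 'woman' -
# which, per the quote above, may be exactly the meaning you do NOT want to erase
# in some other application.
```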

The second model is what the author calls 'utilitarian.'

Corbett-Davies et al. (2017) highlight an important aspect of machine learning algorithms that the “three-space” model generally ignores: more often than not, algorithms are part of a larger system, with a very specific task to accomplish.

Sure, university admissions departments might be making grand philosophical decisions about which personal qualities they want to use to select students. But for every university admissions department, there are a hundred corporate data scientists who have been tasked with maximizing advertising clicks or minimizing retraining costs, regardless of what data is used to make that conclusion.

Quoting from the Corbett-Davies paper, Wang says:

Here we reformulate algorithmic fairness as constrained optimization … Policymakers wishing to satisfy a particular definition of fairness are necessarily restricted in the set of decision rules that they can apply. In general, however, multiple rules satisfy any given fairness criterion, and so one must still decide which rule to adopt from among those satisfying the constraint. In making this choice, we assume policymakers seek to maximize a specific notion of utility.

Put into terms we can readily understand:

The policymaker designing our algorithm must balance two considerations. The first is the immediate utility of the algorithm’s choices: the benefits of preventing violent crime, balanced by the social and economic costs of jailing. The second is the long-term fairness of the algorithm’s choices: if maximizing immediate utility results in jailing one group significantly more than another, then the algorithm will only exacerbate social inequalities through its effects, ultimately causing a net harm to society. From a utilitarian perspective, fairness can therefore be boiled down to preventing the social harm caused by worsening inequality.
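As a rough illustration of that reformulation, here is a minimal sketch of fairness as constrained optimization. Everything in it is an assumption made for illustration - the simulated risk scores, the utility weights and the use of a detention-rate gap as the fairness constraint - rather than the paper's actual data or its preferred criterion:

```python
# Minimal, made-up sketch of 'fairness as constrained optimization':
# pick the decision threshold that maximizes utility, restricted to
# thresholds that satisfy a fairness constraint.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk scores (probability of reoffending) for two groups.
risk_a = rng.beta(2, 5, size=1000)   # group A skews lower risk
risk_b = rng.beta(3, 4, size=1000)   # group B skews higher risk

BENEFIT_PREVENTED = 1.0   # utility of detaining someone who would have reoffended
COST_DETENTION = 0.6      # social and economic cost of each detention

def utility(scores, threshold):
    detained = scores >= threshold
    # Expected benefit of prevention minus the cost of every detention.
    return BENEFIT_PREVENTED * scores[detained].sum() - COST_DETENTION * detained.sum()

def detention_rate(scores, threshold):
    return float((scores >= threshold).mean())

best = None
for t in np.linspace(0.0, 1.0, 101):
    total = utility(risk_a, t) + utility(risk_b, t)
    gap = abs(detention_rate(risk_a, t) - detention_rate(risk_b, t))
    if gap <= 0.10 and (best is None or total > best[1]):   # the fairness constraint
        best = (t, total, gap)

print("chosen threshold, total utility, detention-rate gap:", best)
```

The constraint shrinks the set of admissible decision rules; utility then picks one rule from what remains - which is exactly the structure the quoted passage describes.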

The same thinking can readily be applied to many workplace situations, among which Sommer's questions around ageism, for example, resonate loudly.

My take

Wang correctly points out that:

Formal theories are necessary if we want to enjoy the benefits of algorithms without the drawbacks of algorithmic bias. But the conceptual frameworks that have been proposed—one a framework of bias as biased belief, and the other of bias as biased action—are almost completely opposite. Each has its benefits and drawbacks, and it isn’t clear at the moment whether a coherent synthesis of them is possible.

But regardless of which framework (if either) prevails, they are only a first step. Definitions of fairness are only useful if someone actually wants to introduce fairness to their algorithm. And if fairness is costly, then we have no reason to expect that any of the techniques we develop to address bias will actually be used.

Wang's final comment is something I am sure Sommer will hate, and something he will press vendors about at this week's HR Tech conference. As should you, if you want to work towards avoiding the kind of job-seeking funnel leakage Sommer so eloquently described.

And this is where the proverbial rubber hits the road. Can those who are building the applications of tomorrow access, collaborate on, and synthesize the academic work that underpins going back to first principles? As Wang asks - are they even willing to do so?