Are hiring decisions ready for AI? How repeatable algorithms can harm people

By Neil Raden · July 31, 2019
Summary:
AI marketing literature extols the benefits of algorithmic hiring. But the problem of algorithmic bias and hiring fairness raises serious questions.


In a November 30, 2018 Forbes article, Using Machine Learning To Find Employees Who Can Scale With Your Business, Louis Columbus posits that "Artificial Intelligence (AI) and machine learning are proving adept at discovering candidates' innate capabilities to unlearn, learn and reinvent themselves throughout their careers."

Traditional methods have proven inadequate for identifying internal candidates for new positions or for retraining. Columbus continues: "Add to this dynamic the fact that machine learning is making resumes obsolete by enabling employers to find candidates with precisely the right balance of capabilities needed and its unbiased data-driven approach selecting candidates works."

I have a problem with this premise.

The repeatability of an algorithm can work against people. For the most part, algorithms in production have no post-deployment evaluation in place, a gap that stems from the misconception that algorithms are accurate and will not make mistakes. But algorithms "fire" at a much higher cadence than people do, so they repeat any bias at scale (part of their appeal, after all, is how cheap they are to run).

Cathy O'Neil, in her bestseller Weapons of Math Destruction, gives an example:

A college student with bipolar disorder wanted a summer job bagging groceries. Every store he applied to was using the same psychometric evaluation software to screen candidates, and he was rejected by every one of them. This captures another danger of AI: even though humans often have similar biases, not all humans will make the same decision. Given the same inputs, however, an inference algorithm will always make the same decision. That consistency sounds desirable, but it removes any chance for humans to judge for themselves. Perhaps that college student would have found one place willing to hire him, even if some of the people making the decisions had biases about mental health.
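To see why that repeatability matters, here is a minimal sketch in Python. The psychometric score, the cutoff, and the reviewer model are all hypothetical, invented for illustration; no vendor's actual screening system works from this code.

```python
import random

# One candidate, one borderline psychometric score (hypothetical numbers).
CANDIDATE = {"psychometric_score": 0.55}

def algorithmic_screen(applicant):
    # Deterministic rule: identical inputs always yield the identical decision.
    return applicant["psychometric_score"] >= 0.60   # the same fixed cutoff, everywhere

def human_screen(applicant, rng):
    # Human reviewers share a similar bias, but they apply it inconsistently.
    personal_cutoff = rng.gauss(0.60, 0.08)          # each reviewer draws the line differently
    return applicant["psychometric_score"] >= personal_cutoff

stores = range(7)                                    # the same candidate applies to seven stores
algo_decisions = [algorithmic_screen(CANDIDATE) for _ in stores]

rng = random.Random()
human_decisions = [human_screen(CANDIDATE, rng) for _ in stores]

print("Algorithmic screen:", algo_decisions)   # the same rejection at every store
print("Human reviewers:   ", human_decisions)  # varies by reviewer; an offer is at least possible
```

The deterministic screen rejects this candidate everywhere, every time; the inconsistent human reviewers at least leave the outcome open.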

The Canadian startup Knockri developed an AI system, an iffy one in my opinion, that scans videos of job applicants and evaluates their speech and facial expressions. Based on that analysis, the application measures collaboration, direct communication, persuasion, and empathy. The company claims the tool is effective for screening candidates for jobs where communication skills are critical, "such as consultants."

My feelings about this? First, I've hired dozens of consultants over the years, many of whom were somewhat lacking in most or all of these categories, yet they performed. Second, just as with the college student with bipolar disorder, I would not trust a hiring decision to a tool that measures such soft attributes. My guess is that it would create as many false positives as false negatives.

AI is also being deployed to examine a candidate's social media. Surely you've been advised that anything you put on social media is there to be analyzed. The company Frrole, for example, developed DeepSense, which compiles a candidate's social media activity to establish a "personality" profile.

Megan Farokhmanesh writes in The Verge:

Frrole's DeepSense AI doesn't have kind things to say about me. My stability potential - a person's willingness to "give it their all" before they quit - ranks as 4.6, a "medium" assessment marked by an ominous red bar. Other traits, like my learning ability and need for autonomy rank only slightly higher, while a short personality assessment is kinder: an optimistic attitude, a sunny disposition, a good listener.

Because the people consuming algorithmically driven outputs do not, ordinarily, understand probabilities or confidence intervals (and even when these are provided, the internal workings of the model are not transparent), they tend to accept the algorithm's output as given, assuming they even have the option to question it. The obvious solution is to learn how to develop better, less biased decision-making tools by combining the capabilities of humans and AI.
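One way to combine the two, sketched below, is to let the model act only where it is confident and route everything else, including every potential rejection, to a person. The function names and thresholds here are made up for illustration and are not drawn from any real screening product.

```python
from dataclasses import dataclass

@dataclass
class ScreeningOutcome:
    action: str             # "advance" or "human_review" -- the model never auto-rejects
    fit_probability: float  # the model's estimated probability that the candidate is a fit

def triage(fit_probability, advance_above=0.85):
    # Act automatically only when the model is confident the candidate is a fit;
    # everything else, including likely rejections, is deferred to a human reviewer.
    if fit_probability >= advance_above:
        return ScreeningOutcome("advance", fit_probability)
    return ScreeningOutcome("human_review", fit_probability)

for p in (0.91, 0.55, 0.12):
    print(f"model p(fit)={p:.2f} -> {triage(p).action}")
```

The asymmetry is deliberate: a confident "advance" costs little to automate, while a rejection carries real human consequences, so it always gets a human look.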

Moving from manual methods to unattended decision-making systems can expose an organization to risks around bias, privacy intrusion, security, and fairness. The dividing line between prudent risk evaluation in hiring practices and what is plainly ethically wrong gets fuzzier with AI.

Ethics in AI is an issue when it involves a social context. Social context refers to the immediate physical and social setting in which people live or in which something happens or develops. It includes the culture that the individual was educated or lived in and the people and institutions with whom they interact. Almost everything that has to do with HR involves a social context - people. When there is a social context in AI, there are ethical questions.

In April 2019, the European Commission published a set of seven AI ethics guidelines for Trustworthy AI. The very first one was Human Agency:

AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes.

At this point, the state of the art is capable of acting only as a thresher that surfaces insight, not as a decision-maker. Leaving hiring and career decisions solely to a device, without "Human Agency," is a mistake.

Consider this: have you ever been part of a group interviewing a candidate? Typically, there isn't unanimous agreement about the candidate. That disagreement leads to discussion, shared impressions, and possibly follow-up interviews to address the concerns raised. Nothing like this happens in an algorithmically driven evaluation. Being declined for a position can be devastating: loss of income, stress, perhaps having to relocate to find a suitable spot. These are not trivial decisions.

Sophie Searcy, a senior data scientist at Metis, claims that eighty-five percent of AI projects over the next several years will deliver flawed outcomes because of bias in the underlying data and algorithms.

The discriminatory effect of good intentions raises the obvious ethical question: "I can build it, but should I?"
