“You’re not our kind of people” - why analytics and HR fail many good people
- Summary:
- HR analytics tools are incredibly powerful. But deferring to them without human thought can be dangerous.
We’re entering especially dangerous territory right now when it comes to HR decisions and the use of analytics, machine learning and algorithmic technology. Sadly, many of the firms and people about to use these new HR power tools may be well-meaning, but they’re wielding very powerful instruments with little or no training in how they work or what they can actually do.
One of the big analytic draws for HR software vendors today is a big-data application that can identify the characteristics of recruits (not just employees) who are likely to stay a long time with an employer. It sounds great: it spots the attributes shared by long-tenured employees. The technology looks at a large pool of former and current employees and seeks out the characteristics common to long-retention and short-retention workers. Once the model is built, it is run against a holdout (control) data set to see how well it predicts.
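To make the pattern concrete, here is a minimal sketch of that workflow in Python. Everything here is hypothetical: the file name, the feature columns and the retention label are invented for illustration, not drawn from any vendor's product.

```python
# A minimal sketch of the retention-modeling workflow described above.
# The CSV file, column names and label are all hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

employees = pd.read_csv("employee_history.csv")  # hypothetical historical records

# Features the model "sees" and the retention outcome it learns to predict
X = pd.get_dummies(employees[["degree", "distance_from_office", "prior_tenure_months"]])
y = employees["stayed_24_months"]  # 1 = long retention, 0 = short retention

# Build the model on one pool of workers, then check it against a holdout
# ("control") set, exactly as described above.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("Holdout AUC:", roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1]))
```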
This sounds benign, but here’s where science, good intentions and unskilled business users can badly bork things.
Correlation is NOT Causation
Just because a statistical model found a 92% correlation between certain factors and retention doesn’t mean that everyone with those factors will actually stay with the employer a long time. Likewise, someone who does stay a long time might not have any of those factors. In and of themselves, these correlations may be interesting, but they are not predictors or guarantors of future performance.
Anecdotally, we all know people who changed jobs simply because their spouse or significant other got a career opportunity in another city. Few algorithms check for that right now, as their data sources have no access to a worker’s spouse’s future employment plans. These correlations are not perfect, do not factor in all variables and can’t really predict future actions. Yet too many business people immediately jump to the conclusion that a high correlation equates to causation. That’s a common mistake.
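A toy simulation makes the point. In the sketch below (all numbers invented), a hidden life circumstance drives both a visible attribute and retention, so the attribute correlates strongly with tenure without causing it, and hiring on the attribute would accomplish nothing.

```python
# Toy demonstration: a hidden factor drives both the attribute and tenure.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
hidden = rng.normal(size=n)                         # unmeasured life circumstance
attribute = hidden + rng.normal(scale=0.5, size=n)  # what the model observes
tenure = hidden + rng.normal(scale=0.5, size=n)     # what the model predicts

print("corr(attribute, tenure):", np.corrcoef(attribute, tenure)[0, 1])  # ~0.8

# Selecting candidates on the attribute does nothing to the hidden factor,
# so the strong correlation says nothing about what intervening would do.
```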
I’ve heard from a lot of HR people in high-turnover industries (e.g., food service and retail) that they should be able to target recruits with highly probable retention characteristics. What they’re really saying is that it’s okay for them to discriminate if a math model told them it’s okay to do so.
Here's food for thought. Early in my career, I knew that people with certain college degrees were very likely to leave the company within two years. I assumed that just the presence of one of those degrees in a recruit was reason enough to pass on them for employment. As a business person, I thought I was making a sound business decision by avoiding a large portion of the available workforce.
Before I could really act on this information, I interviewed one of these recruits. I grilled her on the applicability of her college background and on my concern about her potential longevity with the firm. I was so moved by her intense eloquence in answering my questions that I hired her. I’m pleased to say that she’s still with the company. I, on the other hand, only lasted 18 years there!
What I learned is that hiring is an individual decision, not a machine-driven decision. The best people are the ones a hiring manager or executive takes the time to understand. If the person possesses the right skills, behaviors and cultural fit for the company and the role, then great.
Too many HR and operational people are trying to fob off the hard work of a great employment assessment onto a math formula. If an executive is that bad at interviewing, either get a better executive to screen applicants or just get a different executive altogether.
Algorithms and their results favor large populations
The current crop of big-data HR tools I’ve seen often utilizes datasets from large-ish employers and/or employers with lots of employee records. The larger the dataset, the greater the opportunity to find lots of correlations and to produce a statistically meaningful set of findings.
We need more people to consider the groups who aren’t found in abundance in these datasets and to ask how the analytic tools, the algorithms and, most importantly, hiring managers will “see” people who aren’t well represented. For example, suppose you are from a large, rural, devoutly religious, mixed-race family that recently immigrated to the United States from Tonga, and you’re applying for a position at a fast-food chain in midtown Manhattan.
Chances are you would be the only applicant from Tonga this firm has ever seen. Moreover, the recruit-retention algorithm will not score you well: your values, education and background don’t correlate with the parameters found among the firm’s longer-tenured restaurant workers, who may have different values, different education and so on.
Should this restaurant chain pass on your application? I would hope not, but a low retention-correlation score could keep you from even being considered. You could be a great worker who would stay a very long time, yet your lack of similarity to other local hires would, incorrectly, exclude you from consideration, let alone hiring. Similarity is NOT a predictor of future success.
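A crude sketch shows how this plays out. The scoring function below is not any vendor's actual method; it simply rates candidates by their similarity to past long-tenured hires, which is enough to reproduce the problem: the unusual applicant scores far worse even though nothing in the data measures whether they would actually stay.

```python
# Toy similarity scorer: rate a candidate by closeness to past hires.
import numpy as np

rng = np.random.default_rng(7)

# 500 past long-tenured hires clustered around one local profile
# (five invented numeric features per person).
past_hires = rng.normal(loc=0.0, scale=1.0, size=(500, 5))

def similarity_score(candidate: np.ndarray) -> float:
    """Mean negative Euclidean distance to past hires (higher looks 'safer')."""
    return -np.linalg.norm(past_hires - candidate, axis=1).mean()

typical_applicant = rng.normal(loc=0.0, scale=1.0, size=5)  # fits the cluster
unusual_applicant = rng.normal(loc=4.0, scale=1.0, size=5)  # rare background

print("typical:", round(similarity_score(typical_applicant), 2))
print("unusual:", round(similarity_score(unusual_applicant), 2))  # much lower
```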
For companies seriously wanting to expand the diversity of their workforce, these algorithms could trigger hiring managers to do just the opposite: hire more of the same kind of people that they’re already chock full of today. It’s these unintended consequences of trusting the algorithm that can create problems for people and employers.
In the wrong hands, these tools could create Stepford corporations. As in the film The Stepford Wives, an employer could build a workforce of always-perfect, almost identical workers who fit a pre-determined mold. God help the poor soul who is different.
Don’t assume vendors know the science or its limitations
A colleague of mine and I once tag-teamed an interview with an HR vendor. This executive told us about a Wall Street banking client of theirs that liked to hire finance majors who rowed crew at Harvard or Yale. The client thought these people made the best investment bankers. The vendor built a solution to find exactly these people for this client.
I did not react well to this. I pointed out that there might be better candidates out there who went to B-schools elsewhere. I don’t think the Ivy League schools have an absolute monopoly on great business people. Just look at the successful IT business CEOs who didn’t even graduate college and you’ll see that a degree from a prestigious university is not necessarily a predictor for future success.
I wonder how many non-standard people wouldn’t have had the careers they experienced if these algorithms (or the people who will use them) existed in prior years. Take the case of Chester Nimitz:
Despite being reared well away from any ocean in the hills of South Central Texas, he would go on to lead a great naval armada to victory, and become this country’s first five-star, fleet admiral.
I don’t think Admiral Nimitz rowed crew at Harvard.
HR analytics vendors will likely disagree with me, saying that they look at the behavioral attributes of recruits, not just demographic factors. But I’ve got issues with those, too.
For example, when sensors on a device report a heating issue, an analytic tool can predict when the machine is likely to fail. The business takes preventive maintenance action and avoids a lot of expensive, unplanned downtime. When the behaviors of machines are captured and analyzed, no one gets fired or passed over for a job. No one’s family is adversely affected. And, most importantly, the data is very accurate and not subject to a host of other variables, like how a person happens to feel on a given day.
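For contrast, here is roughly what that kind of sensor-based prediction looks like (the readings and failure threshold below are invented): fit a trend to the temperature data and estimate when the machine will cross the line. The inputs are objective measurements, which is exactly what the behavioral tests discussed next lack.

```python
# Sketch of trend-based failure prediction from sensor readings.
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(48)                                      # hourly readings
temps = 60 + 0.4 * hours + rng.normal(scale=0.5, size=48)  # drifting upward

slope, intercept = np.polyfit(hours, temps, 1)  # fit the temperature trend
FAILURE_TEMP = 95.0                             # hypothetical failure threshold

hours_to_failure = (FAILURE_TEMP - intercept) / slope
print(f"Predicted failure around hour {hours_to_failure:.0f}; schedule maintenance first.")
```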
The behavioral or attitude tests given to job seekers ask people about hypothetical choices they might make, or how they feel about something on the day they take the assessment. What the employer gets is subject to a lot of interpretation by both the job seeker and the hiring manager, and to a lot of variation. An example: today I feel lousy, but if you had asked me the same question yesterday, I would have been much more optimistic.
When I see these tests that want to know whether I’m more decisive or more indecisive, I find myself leaning toward the latter the longer the excruciating test continues. By the end of these highly subjective exercises, I’m not sure what I believe anymore. If the data going into these tests is so variable and fraught with ambiguity, how can meaningful results come out?
People are complex creatures. I reject the idea that a person’s entire existence and future potential for an employer can be summed up into a single descriptor like: Initiator, Facilitator, Open or Explorer.
Conclusion
In recruiting, the mere availability of an analytic tool is insufficient for a ‘solution’. Until vendors provide substantial, material training to hiring executives about what these tools really report and how they should/shouldn’t be used, it’s not a solution. Putting such powerful tools into the hands of the untrained is simply irresponsible.
Businesses need to rediscover HOW to properly interview recruits and how to identify the right cultural, behavioral and other cues that people naturally give off. If your interviews frequently consist of testing a candidate’s knowledge of their own resume, you’re a poor interviewer. At the end of an interview, you should know what makes the person tick, how they handle unpleasant situations, how they motivate others, their propensity to be a team player, and so on.
Businesses also need to look hard at how these solutions might over- or under-represent certain pools of people. Is that the outcome your firm really desires?
I highly recommend everyone read the cover story of the June 22 issue of Time. It could make you rethink what you know about yourself and others.