I, like a number of HR industry analysts, have been uncomfortable with some of the uses of algorithms, artificial intelligence, machine learning, and other technologies that have exploded onto the HR scene in recent years.
The single greatest concern is that individuals with no training in statistics, in the distinction between correlation and causation, or in the legal risks of machine-generated recommendations might use these new technologies, expose their company to great risk, and adversely impact the livelihoods of many qualified job seekers.
In a nutshell, many of the new, cool, supercharged recruiting solutions can quickly and mechanically identify job seekers whose characteristics resemble those of current employee groups that have shown some measure of success or retention with the company.
While on its surface that sounds admirable and cost-effective, the problem is that these tools rely on a training data set that includes only existing employees. If a company has failed to hire many women or minorities in the past, very few of them will appear in the solution's data population, far too few to establish a statistically meaningful pattern.
The result is that a company perpetuates its existing hiring biases and ignores vast numbers of potentially great employees. Managers might rely on a tool that automatically and mechanically weeds out otherwise valuable candidates.
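The mechanism is easy to demonstrate. Here is a minimal sketch, using entirely hypothetical data and a deliberately crude similarity score (not any vendor's actual method), of how a matcher trained only on a company's existing employees penalizes an equally qualified candidate from a historically underrepresented group:

```python
# Hypothetical sketch: a "find people like our current employees" matcher.
from collections import Counter

# Historical hires: the only data the tool ever sees.
# Note the skew: the company rarely hired from group "B".
current_employees = [
    {"group": "A", "degree": "CS"},
    {"group": "A", "degree": "CS"},
    {"group": "A", "degree": "EE"},
    {"group": "B", "degree": "CS"},
]

def similarity_score(candidate, employees):
    """Score a candidate by how often each of their attribute values
    appears among existing employees -- a crude profile matcher."""
    score = 0.0
    for attr, value in candidate.items():
        freq = Counter(e[attr] for e in employees)
        score += freq[value] / len(employees)
    return score

# Two candidates with identical qualifications, different group membership.
equally_qualified_a = {"group": "A", "degree": "CS"}
equally_qualified_b = {"group": "B", "degree": "CS"}

print(similarity_score(equally_qualified_a, current_employees))  # 1.5
print(similarity_score(equally_qualified_b, current_employees))  # 1.0
```

The second candidate scores lower purely because group "B" is scarce in the historical data, which is exactly how past hiring bias becomes an automated filter on future hiring.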
Imagine my surprise when more than one HR vendor expressed concern to me that some of these technologies, especially when put in the hands of the wrong or untrained individual, could create litigation issues for the employer and/or the software vendor. That's a very real concern, and some day, I suspect, a software vendor CEO will wake up and realize the true harm some of these products could cause.
But is this problem limited to recruiting software? No. Some tools recommend which candidates should move through the recruiting process; others rank candidates based on criteria scored by a machine rather than by a recruiter or hiring manager. This raises the question: can an algorithm discriminate?
Since a human being provided the general framework, the data sets, and the other inputs to the algorithm and its environment, the tool is capable of discrimination simply because it acts on the data and biases of its creators. The classic GIGO (Garbage In, Garbage Out) problem is ever present.
There are other HR products using this kind of advanced technology. Some vendors will show you tools that match different people and personality types to different teams or organizational units.
For decades, I have understood something the management guru Peter Drucker explained years ago in his book The Effective Executive. Drucker argued that performance reviews and training programs aimed at turning every manager or executive into a multi-disciplined, interchangeable Renaissance person are doomed to fail.
Instead, he argued that we should embrace people's strengths and find other team members to backfill for their weaknesses. Long before I read Drucker's book, I knew that some people are brilliant thinkers but painfully shy in public speaking settings. Some people are great at selling; others are great at implementing. And so on. I often paired people strong in one area with people strong in the complementary skills the first person lacked. Together they make a great combination.
These pairings work because the strengths of one lift the weaknesses of the other, and vice versa, without diminishing the power those strengths give each individual's performance.
As a side note, it is amusing to hear so-called modern-day gurus talk about this as though it were a newfound science. It's not.
Team construction technology does not bother me as much as some of the AI and machine learning technology focused on recruiting. These tools identify which of your existing employees work best when combined with a particular manager or other team members. However, I have heard some industry analysts suggest to technology vendors that they extend this personality-matching technology into the recruiting process, so that job candidates are evaluated on how their personality matches or clashes with that of their potential supervisor.
It is at this point that the science will need to be bulletproof to withstand legal challenges, and it could present public relations problems for an employer. If the technology creates even the appearance of discrimination, the damage to a company's employment and product brands could be significant.
But for now, some vendors are starting to question whether it is right to use these new technologies, and they are focusing on identifying the situations in which customers can safely use these powerful capabilities. That introspection is definitely needed.
Editor's note: There is one major caveat to these selection techniques. The current state of the art should not eliminate face-to-face candidate assessment. Any vendor who even remotely suggests that is asking for trouble.