Would you want to be hired by a bot? Why AI's role in recruitment needs checks and balances

By Cath Everett, December 17, 2019
Summary:
Can - and should - AI tech be used in the recruitment process? And if so, what checks and balances need to be in place?


In an HR context, the highly manual nature of recruitment meant it was originally hailed as one of the first and most obvious areas in which automation, in the form of artificial intelligence (AI) software, would make its mark.

As a result, lots of vendors piled into the market or, in some instances, simply repositioned their offerings and rewrote their marketing materials to give them a prominent AI spin. A few years on though, as the sector hits peak hype, just where are things really at, and are we ever likely to see a fully automated hiring process?

In the view of Katrina Collier, author of ‘The Robot-Proof Recruiter’ and founder of The Searchologist, a consultancy that aims to help employers improve the candidate experience:

There’s still a lot of hot air and a lot of AI that isn’t AI. So there’s a lot of rah-rah and it’s overwhelming for many recruiters, which means that they’re just a bit confused by it all.

James Wright, a consultant at executive search firm Carmichael Fisher (which has just gone into administration), concurs that the market is “very much in its infancy” and that adoption levels are currently low. As he says:

There’s some great technology out there, but a lot of it isn’t very user-friendly as it’s been developed by technologists rather than HR professionals. It’s also typically very expensive to adopt unless you have the ability to introduce it at scale, and a lot of HR departments are very new to this so don’t know how to go about implementing it internally.

As to which elements of the recruitment process are currently being automated, a key focus is on sifting and shortlisting CVs in order to save time and money. Dean Sadler, chief executive of recruitment software provider Tribepad, explains:

People apply for more jobs when they’re online, so recruiters are receiving more and more CVs for every role – it’s not uncommon to see 400 or 500 for one job. So if you spend a minute looking at each one, that’s potentially 500 minutes, or about a day and a half – and if you’re dealing with 30 roles, it becomes impossible. Therefore, automating the process using AI enables recruiters to take back control.

Going down this route, it is claimed, helps improve consistency and reduce discrimination against candidates, as the software searches for certain pre-determined criteria based on keywords, thereby offering the potential to broaden out the typical talent pool. This means, so the theory goes, that the likelihood of shortlists being influenced by the unconscious bias of recruiters should lessen.

Unconscious bias in AI

But there are certain large caveats to such claims as evidenced by Amazon hitting the headlines in October 2018 following revelations that it had scrapped the development of a secret AI-based recruitment tool because it demonstrated bias against women. The initiative was intended to automate the sifting of job-seekers’ CVs and to rate candidates in terms of suitability by assigning them a score of one to five stars.

But a year after the project was launched, the Amazon team realised there was a problem with applicants for technical roles, such as software development. The 10 years' worth of CVs that had been fed into the AI system reflected the industry's male dominance, which meant the system had taught itself to prefer male candidates and to penalise CVs that included female-related words. It also downgraded applicants from two all-female US colleges.

Although the online shopping giant tried to edit the software to ensure it responded neutrally to female terms, it could not guarantee that gender discrimination would not be introduced in other ways. Therefore, the project was canned in early 2017 without ever having been used in anger.

Adding to the complications, the US-based Electronic Privacy Information Center (EPIC) has also just filed an official complaint against recruitment software supplier HireVue with the Federal Trade Commission. According to the Washington Post, EPIC contends that the use of “AI-driven assessments” to evaluate job applicants’ skills and personality characteristics, such as their emotional stability and ability to learn, constitutes a wide-ranging threat to US workers.

The assessments are based on video interviews, which are used to analyse thousands of data points relating to a candidate’s voice, word selection and facial movements. While individuals are not told their scores, these ratings are meant to help guide employers as to which job-seekers they should hire.

The complaint therefore seeks to stop HireVue from automatically scoring applicants, and to make public the algorithms and criteria it uses to analyse people’s behaviour, with the aim of protecting job-seekers from unknown and unacknowledged bias.

The technology is also coming under increasing scrutiny from other quarters. In January 2020, the US State of Illinois will bring into force legislation passed in August that requires employers to inform both applicants and regulators how the decisions made by their AI-based video-interview systems were arrived at. The aim is to prevent job candidates from being discriminated against due to hidden biases in the definition of a ‘model employee’.

Where to automate the recruitment process?

A key challenge here, though, says Sadler, is that AI comprises two key elements – algorithms and logic/data. He explains:

The algorithm has to be written by someone and they’ll inevitably have their own views on things, which they’ll programme in. So when you come to use that algorithm, there are inherent biases within it. Even if the algorithms are quality-assured and all conscious and unconscious bias is removed, employers still need to feed data into the AI engine about the situation in their own organisation, which means the AI will be biased towards a certain type of CV. But the issue is that if you have an inherently-biased algorithm and inherently-biased data, there are risks involved.

In other words, it is vital for employers to focus on ensuring that their data is clean. “Huge amounts” of data are also required to “make sure the algorithm is as accurate as possible”, which is not always easy to get hold of, Sadler adds.

To make matters worse, a study undertaken by Wright on behalf of Carmichael Fisher revealed that the vast majority of job-seekers were turned off by automation. A huge 86% of those questioned said they would prefer their CV to be evaluated by a human rather than a robot.

Nine out of 10 indicated they would be happier with a human rather than a machine conducting their job interview, while just under three quarters confirmed their perception of an employer would be damaged if the recruitment process was fully automated. Wright takes a similar line, saying:

I’m strongly against automating the entire recruitment process - but that’s not to say AI can’t be used in a positive way to validate or challenge recruiters’ thoughts. Overall though, people are very anti engaging with software alone because ultimately ‘people buy from people’. Otherwise it’s too impersonal and cold and takes the excitement out of the process, even for digital natives like Generation Z. So although AI has many positives, I’d worry if people interpreted it as a way of fixing everything.

Nonetheless, Wright believes the technology definitely has potential in areas that help to “augment human intelligence”. These include gamifying psychological assessments and using tools, such as Textio, to remove unconsciously discriminatory words from job descriptions in order to help create more diverse shortlists.

Collier, on the other hand, sees the software as having a useful role to play in providing “clarity and certainty” in certain more transactional elements of the recruitment process. For example, chatbots could be used to answer basic queries, machine learning software employed to book interviews, follow up with candidates and generally keep in touch. She explains:

It’s about using AI for the boring stuff to free up recruiters’ time to focus on more valuable activities. Today, the technology isn’t mature enough to remove bias and data is often poor, so if you’re using it for assessments, I’m not convinced it would do a better job than a human. Certainly as long as the market remains candidate-driven, I can’t see AI taking over. While over time, that might happen in the volume market, it’s less likely with highly-skilled people where the market is subject to shortages.

My take

Despite the hype surrounding the use of AI software in the recruitment process, employers would do well to handle the technology with care for the time being, not least until some of the legal issues are clarified.