AI use in recruitment is growing – but users are ignoring big risks

Chris Middleton – October 26, 2022
Summary:
The latest report from a multinational law firm finds that few organizations are considering the risks of using AI to automate the recruitment process

(Image by Mohamed Hassan from Pixabay)

A new report from multinational law firm Littler reveals that the use of artificial intelligence (AI) and automated tools is growing in recruitment, as companies look for ways to reduce the time, labour, and admin involved in sifting through hundreds of job applications. 

According to the Littler 2022 European Employer Survey Report – which also covers a wider range of issues, including pay equality, flexible working, and employee well-being – 28% of European companies are already using AI and automated tools in support of hiring activities, while 19% plan to do so over the next year. A further 17% are considering adoption over the same timescale, leaving just over one-third of respondents (36%) with no plans to do so at all.

Strategic adoption is higher among small and medium-sized businesses than in large enterprises, because these organizations often lack the staff to deal with high volumes of enquiries, CVs, and covering letters, many of which may come from similarly qualified applicants.

However, the situation is more complex and nuanced than that, the report explains:

The differences between small and large company respondents are particularly telling: Whereas substantially more small company respondents have developed a plan around the use of AI in recruitment (68%, versus 42% of large employers), large companies are out ahead when it comes to data privacy compliance (50%, versus 23% of small employers) and coordinating with vendors (40%, versus 26% of small employers). 

This makes sense given the operational challenges a large company must navigate when it comes to deciding who will drive such an initiative, as well as the heightened compliance risks bigger corporations face (and the additional resources and expertise available to address them).

But the big question is: why is an international law firm concerned about the rise of AI and automation in recruitment in the first place? 

One reason is the risk of bias against some candidates or groups becoming automated via AI and other tools, which are designed to seek out the applicants most likely to succeed – or, to put it another way, to screen out those most likely to fail. 

If, in the past, senior managers rejected applicants on grounds of gender, age, faith, sexuality, or race, or turned away jobseekers from specific groups, locations, or postcodes/zipcodes, then a pattern-based algorithm trained on that history might conclude that such candidates don’t succeed. In this way, the bias becomes automated.
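To make that mechanism concrete, here is a minimal, hypothetical sketch of how a model trained on biased historical hiring decisions reproduces them. The data is synthetic, and the feature names (such as postcode_group) are illustrative assumptions, not anything drawn from the Littler report:

```python
# Hypothetical sketch: a classifier trained on biased past decisions.
# All data is synthetic; 'postcode_group' and the bias penalty are
# illustrative assumptions, not fields from any real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

years_experience = rng.integers(0, 20, n)
postcode_group = rng.integers(0, 2, n)  # 1 = historically disfavoured area

# Simulated past decisions: skill matters, but managers also rejected
# candidates from the disfavoured postcode group far more often.
skill_signal = years_experience / 20
bias_penalty = 0.5 * postcode_group
hired = (rng.random(n) < np.clip(skill_signal - bias_penalty, 0, 1)).astype(int)

X = np.column_stack([years_experience, postcode_group])
model = LogisticRegression().fit(X, hired)

# Two identical candidates, differing only in postcode group:
candidates = np.array([[10, 0], [10, 1]])
print(model.predict_proba(candidates)[:, 1])
# Markedly lower predicted 'hire' probability for postcode_group=1.
```

The point is that nothing in the code 'decides' to discriminate; the skew is inherited wholesale from the training data.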

Laura Jousselin is Partner at Littler’s Paris office and helped produce the report. She says:

There are human criteria that you cannot really assess with AI. Plus, there is a risk with AI that it's not fully compliant with the law regarding recruitment.

There was a tool used by [redacted corporation] that helped pick résumés for hiring. They realized that they chose mainly male résumés and not female, because they were analyzing data from the previous 10 years, which was discriminatory. 

AI sometimes doesn't take into account the legal rules that we need to comply with.

You are still responsible 

So, what do decision-makers need to consider – especially those who may be adopting software tools primarily to save time and money? She says:

Companies need to be aware that they remain legally responsible. So, even if it's actually the vendor’s fault, because they did not correct the bias in their software, the [client] company is the one that remains responsible. 

So, if you want to avoid these kinds of risks, you really need to make sure in your agreement with the vendor that all of these issues are covered, and that you mitigate the risk of your liability.

Of course, historic bias – which may be deliberate, unconscious, accidental, or rooted in key individuals – is more likely to reveal itself in years of hiring data than in the software itself. This is why steps need to be taken in both the selection process and the tool’s design to counterbalance the risks. 
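What might such a counterbalancing step look like in practice? One common technique – offered here as an illustrative sketch rather than anything prescribed by Littler – is to audit selection rates across groups before and after deployment, for instance against the 'four-fifths' disparate-impact heuristic used in US employment practice:

```python
# Hypothetical audit sketch: compare selection rates across groups using
# the four-fifths (80%) rule of thumb. The data and threshold usage are
# illustrative; real compliance checks depend on jurisdiction and counsel.
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs, selected in {0, 1}."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    rates = selection_rates(decisions)
    best = max(rates.values())
    # Flag any group whose selection rate falls below 80% of the best rate.
    return {g: (r / best >= 0.8) for g, r in rates.items()}

decisions = [("A", 1)] * 40 + [("A", 0)] * 60 + [("B", 1)] * 15 + [("B", 0)] * 85
print(selection_rates(decisions))    # {'A': 0.4, 'B': 0.15}
print(four_fifths_check(decisions))  # {'A': True, 'B': False} – potential adverse impact
```

A failing check is not proof of unlawful discrimination, but it is the kind of early warning signal that the vendor reviews discussed later in the survey are meant to surface.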

For unsuccessful applicants, however, it may be hard to prove that automated bias in an AI system denied them equality of opportunity. On this point, Jousselin offers some reassurance – at least, under current EU law. She adds: 

Especially in France, when you bring a discrimination claim, you don't need to prove that you've been discriminated against. You just need to give the judge the first element that makes you think you have been. Then it's the company that has to provide evidence that they did not discriminate.

What’s the true cost?

Good news for applicants. But this shift in the burden of proof demands both transparency in the data and explainability in the AI system, she explains. 

The knock-on effect of this is important for decision-makers to grasp: any money saved in deploying AI and automated tools in the recruitment process might be lost several times over to successful legal challenges.

On this point, the survey has worrying news: 

While more than half (54%) of those using AI/technology solutions in recruiting said they have developed a plan that identifies specific goals and tests outcomes, less than one-third have conducted an assessment to ensure data privacy compliance (31%) or coordinated with vendors to conduct reviews of AI algorithms and identify potential biases (28%).

In other words, most organizations are focused on the efficiency metrics associated with AI/automation – time, labour, and money saved – but comparatively few are considering the risks of bias and discrimination that, unintentionally or otherwise, may be enabled by it.

Just as alarmingly, Littler finds that even fewer organizations – just 27% of respondents – have tested a range of different AI tools to determine the best fit for their needs. 

My take

All of this suggests that most users remain motivated by efficiency gains and cost savings – which are significant considerations – but risk falling prey to marketing spiels when they should go with the best strategic and operational fit instead. In the meantime, many are blind to real legal risks.

In short: most companies could do much better.
