You’re fired! Can we trust algorithms to decide who gets sacked and who doesn’t?

Cath Everett, September 22, 2022
Summary:
Sacking employees by algorithm may not be widespread as yet, but it undoubtedly has legal, ethical and reputational ramifications that need to be evaluated.

An image of a hand flicking away an outline of a person
(Image by Gerd Altmann from Pixabay)

In a new twist on the now infamous 2013 Oxford University study, which forecast nearly half of all US jobs would be automated away over the next two decades, 60 Facebook contractors learned last month that they had been laid off at random by an algorithm.

Reports claimed that an executive at Accenture, which provides staffing services to Meta, informed the Austin-based workers of the situation via a video call.

Russian firm Xsolla, which provides payment processing for the gaming industry, also laid off 150 staff in August 2021, reportedly as a result of slowing growth, after an algorithm identified them as being ‘unengaged and unproductive’.

Estee Lauder, meanwhile, reached an out-of-court settlement with three make-up artists in March this year after it made them redundant following a video interview to reapply for their positions. An algorithm assessed the video and ruled the women’s performance did not compare favourably with that of other employees.

So just how common is this kind of behaviour among employers and could it be a sign of things to come? According to Kate Benefer, Partner in the Employment team at legal firm RWK Goodman:

Using technology to dismiss people is a relatively new idea and it’s not common. This kind of approach is still at a very early stage, with AI tending to be used more commonly on the recruitment side to filter out candidates.

Understanding the legal risks

But employing tech in this way does carry a number of legal risks. The first, well-documented one relates to the potential for discrimination based on unconscious bias programmed into systems by developers and data scientists. A classic example was the AI-based internal recruitment tool built by Amazon, which irredeemably discriminated against female candidates and was subsequently canned.

Another legal risk, in Europe at least, centers on the idea of fairness. In this context, holding meaningful, two-way consultations during a redundancy process and being clear about which criteria are used for decision-making are key. Benefer explains:

If disciplinary allegations are put to people, it means they can respond appropriately. But if, as in the Estee Lauder case, decisions are based on your reactions and facial expression, it’s much harder to consult on that. So it becomes more one-sided, which is problematic from a fair dismissal perspective.

This means it is important for employers to clearly lay out their rationale for making staff redundant as doing so makes such decisions “more difficult to challenge”. However, Benefer adds:

If you say you’re sacking someone because the computer said to, it’s difficult to defend claims on the basis of meaningful consultation or fairness, especially in a pooling situation where some people are selected for dismissal and some are not - and especially if you end up getting rid of more women than men, for example. The question there is how can you say ‘I’m not biased’ if you don’t know whether the technology is or not?

A third risk, again in a European context, is that of conforming to the European Union’s General Data Protection Regulation. As Benefer points out:

Individuals have the right to object if they don’t want decisions made about them without human input and they’d prefer their data was processed by a person. So you could find individuals starting to say to employers they don’t want their data interpreted by machines. 

But it is just as imperative that organizations are open with staff about whether they have chosen to automate their decision-making or not. Benefer explains:

I expect when many companies drafted their privacy policies, they said they wouldn’t process personal data using algorithms, which means if they do so now, they’d be acting contrary to those policies. It’s a relatively easy situation to rectify – you just have to be transparent about the changes, but I suspect many organizations haven’t focused on the data angle yet.

Ethical and reputational challenges

Sacking workers by algorithm also creates a range of ethical and reputational challenges. At the crux of the matter is how the technology is used and what it is used for, believes Bill Mitchell, director of policy at BCS – The Chartered Institute for IT. He says:

You have to check the provenance and integrity of the data, and that the system is measuring what should be measured. From an ethical perspective, there are two ways AI systems are used. The first is treating employees in a way that means they feel helped and supported to do their job better – coaching apps are a great example of that. The second is measuring how good people are at their job - for instance, whether they’re typing or answering calls quickly enough. That’s taking more of a micro-management approach and is about exercising greater control over them. 

The point here is that automating poor management practice will not improve it or make employees feel any more valued. As a result, he says, it is important to:

Really think before you use AI and don’t make the mistake of simply chucking IT at the problem. For example, if you’re thinking of using it to monitor if staff are at their desks, it might make sense to evaluate your management approach.

But there are also the optics of the situation to consider. As Benefer points out:

Putting legal issues aside, it doesn’t sound nice to be sacked by computer without apparent human involvement. Dismissal law is important, but looking at it from an individual’s perspective, a lack of human input makes it feel like things weren’t handled properly. So there are more likely to be complaints and upset as it feels totally impersonal, and it’s not good for the reputation of companies known to be treating their employees in this way. If you’re focusing on engaging, attracting and keeping people, this kind of treatment will put them off joining, so it’s important to factor that in when deciding how much this kind of approach should be used.

But Alistair Sergeant, Chief Executive of digital transformation consultancy Equantiis, believes that using algorithms for decision-making in this context has both pros and cons. On the upside, he says:

It’s quite common practice for organizations to be asked to give a blanket percentage back to the budget, which includes headcount. It’s not a moral review that questions the impact on people. The focus is on what can be saved from the bottom line. So you could argue that layoffs would be much cruder without the use of algorithms and that organizations can use data more effectively to make informed decisions.

The importance of context

On the downside though, Sergeant acknowledges that:

There are always three parts to an opinion – my view, your view and the bit in the middle bringing things together. Xsolla, for example, made its decisions based on how engaged people were. But if they’re not engaged, who’s to blame for that? Is it the fault of the staff member or the manager? This kind of context isn’t considered by algorithms, but there are huge moral implications if decisions are made based on data without getting all sides of the story. It breaks trust quite considerably without understanding why, so it comes back to the human values you want to demonstrate in your organization.

A key consideration here is looking at the root causes of any challenges, performance or otherwise, rather than simply at the symptoms. But it is also about using data “to paint a picture rather than decide the future”, Sergeant says. Benefer agrees: 

AI might sit behind a decision, but if you ran the program to make the provisional selection and then had people make the final decision, it would help with the employee engagement side of things and wouldn’t sound quite as negative. It’s certainly less risky.

Unfortunately, there is currently no non-technical guidance or ethical best practice available to help organizations introducing AI systems of this kind. In legal terms, there have also been no test cases to establish what should, or should not, be considered appropriate behaviour. And as Benefer says:

The issue won’t go away. Everyone’s looking for technology to make things easier so people won’t stop using it - although I feel it’ll remain quite rare for a while due to the competitive employment market, which means companies won’t want to be seen to treat people badly. But the more normal this approach becomes, the more the risks start to change as it just becomes the way things are done.

My take

Ethical, moral and reputational issues aside, this kind of situation is something that the AI industry needs to evaluate, explore and start taking some responsibility for, for its own good as much as that of wider society. As Mitchell puts it:

The thing I worry about is that if this kind of practice becomes more widespread, what will it do to public trust? If AI starts to be seen as a hostile micro-managing device and there’s a situation of distrust around using it, it will undermine any attempts at adoption.
