AI and employment – the regulatory challenges laid bare

Chris Middleton, May 13, 2024
Summary:
AI has severe human rights implications in the workplace. But proving that breaches have taken place will be a challenge. The UK’s human rights regulator explains why.


(Image: a finger pointing to a digital scale, by herbinisaac from Pixabay)

The problem of protecting human rights in an AI-centric world will be a flashpoint for the foreseeable future. Among the challenges are the risks of historic human biases – about gender, ethnicity, age, religion, sexuality, disability, postcode, and more – being automated in employment and recruitment processes. Plus, some companies may be tempted to use AI to monitor employees, micro-manage them, and/or force them to work harder and faster, with less and less human agency.

However, while organizations’ absorption of AI into their decision-making processes is not covered by specific legislation that protects employee rights – yet – workers remain protected by exactly the same rules and regulations that applied before AI was introduced.

The problem therefore becomes proving that a breach has taken place. This is why employers will need unambiguous insights into algorithms’ decision-making processes, should an employee or unsuccessful applicant believe they were discriminated against, or that their rights were violated in other ways. ‘The AI did it’, ‘the computer said no’, and ‘it’s an AI, so it must be right’ will be no defence.

The Equality and Human Rights Commission (EHRC) is the UK’s equalities regulator. Last month, it updated its guidance on AI, citing the Post Office Horizon IT scandal as an example of the harm that can arise when an organization trusts its software more than its employees.

For anyone not familiar with that case, undisclosed faults in Fujitsu’s Horizon accounting system led to over 900 subpostmasters being convicted of fraud, theft, and false accounting over a 15-year period, with thousands of others pursued, fired, or forced to cover the supposed shortfalls – which did not exist – out of their own pockets.

With suicides, and countless lives and careers ruined, the scandal is, arguably, the UK’s worst miscarriage of justice. Its subtext was an organization trusting an IT system, and its wealthy supplier, far beyond the point at which common sense should have intervened.

The risk is that AI may introduce this type of problem on an even larger scale, unless these issues are tackled in advance.

Focus

AI has certainly been a strategic focus for the EHRC since 2022’s explosion of interest in Large Language Models and generative tools. Accordingly, the regulator is partnering with the Responsible Technology Adoption Unit (formerly the Centre for Data, Ethics and Innovation), Innovate UK, and data protection regulator the Information Commissioner’s Office (ICO), on the Fairness Innovation Challenge, working with AI innovators to identify solutions to fairness challenges in AI systems.

The EHRC is also examining technologies such as real-time facial recognition, irresponsible use of which could become “ingrained and normalized”, it warns.

The organization's work on AI is also in line with the UK Government’s five ‘pro-innovation’ principles of responsible AI adoption. Those are:

  • a system’s safety, security, and robustness must be ensured
  • there should be appropriate transparency and ‘explainability’ in its usage
  • fairness in deployment must be maintained
  • accountability and good governance must be in place
  • decisions should be contestable and open to redress.

Each of these principles underlines the point about organizations needing detailed insight into their systems’ workings and design – assuming vendors have disclosed them, of course.

Unfortunately, IP protection does not figure in the UK’s benchmarks for responsible AI deployment. This is because the government is waiting for the courts to decide whether vendors’ scraping of copyrighted content to train their systems amounted to theft. However, common sense suggests that questions of data provenance and ownership may become an issue for any employers using those systems.

As AI Policy Lead at the EHRC, Robert Bancroft is the man in the hot seat for protecting employee rights and, more generally, human rights in the AI age. Speaking at a Westminster policy eForum on AI in employment, he said:

Our primary aim is to help people understand their rights when it comes to AI – and especially how both the Human Rights Act and the Equality Act apply. Ultimately, we want people to be able to identify and challenge discrimination caused by AI. But we also want to make sure that the law is updated in line with new technologies to protect people from breaches of their rights.

Good stuff. However, this is no knee-jerk reaction to unforeseen events; the EHRC has long been observing broader trends happening in the workplace. Bancroft explained:

Last August, we published a research report exploring the major drivers of change in the world of work, and analyzing the equality and human rights implications. This identified the increased use of automation as a long-term trend. That impact will fall disproportionately on particular protected-characteristic groups, such as older people and those with disabilities. We also found that there is a risk of pre-existing inequalities being embedded in decision-making, given the use of historic data.

For example, when CV-screening algorithms are trained on predominantly male CVs, they may discriminate against women. Meanwhile, targeted online job adverts have been shown to have gender and ethnicity biases – for example, young women not seeing STEM-related job adverts, or Asian men being shown ads about being taxi drivers.
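To make that CV-screening example concrete, a common first-pass audit is to compare selection rates across groups – the so-called ‘four-fifths rule’ used in disparate-impact analysis. The short Python sketch below is purely illustrative: the numbers are hypothetical and the function names are my own, not a description of how the EHRC or any particular vendor tests for bias.

    # Minimal, illustrative four-fifths-rule check on hypothetical
    # shortlisting outcomes from a CV-screening tool. All data is made up.
    from collections import defaultdict

    def selection_rates(outcomes):
        """outcomes: list of (group, was_shortlisted) pairs -> selection rate per group."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, shortlisted in outcomes:
            totals[group] += 1
            if shortlisted:
                selected[group] += 1
        return {g: selected[g] / totals[g] for g in totals}

    def adverse_impact_ratio(rates, reference_group):
        """Each group's selection rate divided by the reference group's rate.
        Values below 0.8 are commonly treated as a red flag for disparate impact."""
        ref = rates[reference_group]
        return {g: r / ref for g, r in rates.items()}

    # Hypothetical outcomes: 100 male applicants (40 shortlisted),
    # 100 female applicants (22 shortlisted)
    outcomes = [("men", True)] * 40 + [("men", False)] * 60 \
             + [("women", True)] * 22 + [("women", False)] * 78

    rates = selection_rates(outcomes)
    print(rates)                               # {'men': 0.4, 'women': 0.22}
    print(adverse_impact_ratio(rates, "men"))  # women: 0.55 -> below the 0.8 threshold

If a screening tool shortlists women at barely half the rate of men, as in this made-up example, that disparity is exactly the kind of signal an employer would need to be able to detect – and explain – long before a tribunal asks them to.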

So, what has the EHRC done, beyond urging greater visibility and action? It turns out that it has a key role to play in minimizing the damage caused by the legislative attempts to make technology safe in other ways. Bancroft explained:

The Data Protection and Digital Information Bill, which is currently going through Parliament, would – if it passes as the Government intends – make some pretty big changes to UK GDPR. And not in a good way.

For example, it would leave people's individual data and privacy rights in a worse place, potentially leaving them open to discrimination. And it makes changes to the rules around automated decision-making, meaning it will be allowed in a much wider set of circumstances.

When thinking about that from an AI perspective, it's really concerning.

Why is that the case? Bancroft said:

Because AI often relies on our personal data, that affects us in the workplace as well as in our private lives – for example, if an employer sought to use biometric technology to monitor their staff. We've given evidence to the Joint Committee on Human Rights’ inquiry into human rights in the workplace, drawing out the risks from surveillance in particular, whether that's email monitoring, keystroke analysis, video surveillance, or, more worryingly, the increased use of biometric surveillance.

We highlighted that there is a gap in the protections available to any workers who are taking claims related to privacy rights to an employment tribunal.

On that point, he added:

That brings me to a legal case we have supported. This is a case where Uber Eats used biometric verification technology and facial recognition to ensure that drivers were who they said they were. In this case, [an Asian] driver was removed from Uber and effectively sacked because the technology failed to verify his identity.

He brought a discrimination claim on the basis that the technology contained a racial bias and was less accurate in verifying people from an ethnic minority. The case was settled before it went to the final hearing, which was a great success for him.

But it's worth mentioning that we aren't seeing many legal cases with an AI element – yet. This could be down to a number of things, including that people don't know how AI is being used, and don't know how to challenge such decisions.

So, it may be that a flurry of legal cases will arise in the near future, as more and more people get up to speed on their rights and on employers’ use of the technology.

Bancroft continued:

At the EHRC, we have a number of statutory, hard-edged regulatory tools that we can use. But ultimately, they rely on a breach of the law [having been committed]. [But proving that can be] complex and prohibitively expensive. And that complexity slows down compliance and enforcement action and adds layers of expense.

Many AI companies are hugely profitable and able to spend a lot of money on lawyers – something that we, as a small, publicly funded regulator, cannot compete against. To put it bluntly, we've had no increase to our budget in nearly 10 years, not even in line with inflation.

My take

Ouch. Bancroft’s last point echoes matters raised in the House of Lords’ inquiry into Large Language Models last year [see diginomica, passim]. Giving evidence to the Communications and Digital Committee, regulators made the same basic point: they can rise to the challenge of regulating in the AI age, but only if government gives them the tools and the funding to do it. Without that support, they will always be working in vendors’ shadows.
