AI ethics - how do we put theory into practice when international approaches vary?

Chris Middleton, April 22, 2022
Summary:
Many of the issues around AI and ethics are understood, but what practical steps should we take to protect citizens?


Many governments around the world have rightly put ethical development and deployment at the heart of their AI thinking. At the core of this complex issue is a set of interconnected problems: AI systems that can automate and entrench existing societal problems, whether because of a systemic lack of diversity in development teams, the use of training data that contains historic or structural biases, or flaws in the design of the systems themselves.

The result may be the algorithmic exclusion of individuals or groups because of their ethnicity, gender, sexuality, religion, or socioeconomic background. For example, facial recognition systems that misidentify black or Asian people because of a lack of representative training data; or CV-scanning applications that reject applicants from certain postcodes/zip codes because, historically, human employers actively excluded those jobseekers.
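To make that second mechanism concrete, here is a deliberately simplified Python sketch of how a screening tool that merely imitates historical hiring decisions ends up reproducing their exclusions. The postcode areas, figures, and threshold are invented for illustration; no real product works this crudely, but the feedback loop is the same.

```python
# A deliberately simplified sketch of how a CV-screening tool that imitates
# historical hiring decisions inherits their biases. Postcode areas and
# figures are invented for illustration only.

# Historical outcomes: (postcode_area, was_interviewed)
history = [
    ("AB1", True), ("AB1", True), ("AB1", False),
    ("CD2", False), ("CD2", False), ("CD2", True),
    ("CD2", False), ("AB1", True),
]

# "Training": learn the historical interview rate per postcode area.
rates = {}
for postcode, interviewed in history:
    seen, hits = rates.get(postcode, (0, 0))
    rates[postcode] = (seen + 1, hits + int(interviewed))

def screen(postcode: str, threshold: float = 0.5) -> bool:
    """Shortlist a candidate only if their postcode's historical rate clears the bar."""
    seen, hits = rates.get(postcode, (0, 0))
    return seen > 0 and hits / seen >= threshold

print(screen("AB1"))  # True  - the historically favoured area keeps being favoured
print(screen("CD2"))  # False - historically excluded applicants stay excluded
```

Note that nothing in the code mentions ethnicity or class; the bias arrives entirely through the historical labels the system is asked to imitate.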

But as mentioned in my previous report on the UK’s AI Strategy, barely 15% of companies cover AI ethics when training employees in the technology.

So, what does the industry make of this tricky subject? Louise Sheridan is Deputy Director at the UK’s new Centre for Data Ethics and Innovation (CDEI), the first organization of its kind in the world. Its recent work on algorithmic transparency has directly influenced UK government policy: a good thing in the wake of the 2020 A-Levels grading fiasco.

Speaking at a Westminster eForum on AI strategy in London this week, she explained why she believes AI ethics are so critically important to success in this field:

We all know that concerns about risk – for example, financial risk – are a major block to innovation in industry. The government already helps organisations overcome financial risk with support such as grants or tax credits. But concerns about ethical risks still hold organisations back. So, we cannot innovate using data and AI without addressing ethical risks and developing data governance that is worthy of citizens’ trust.

The impact of efforts to address unfair bias in decision-making has often gone unmeasured, or has been painfully slow to take effect. Data gives us a powerful tool to see where bias is occurring and to measure whether our efforts to combat it are effective. If an organisation has hard data about differences in the way it treats people, it can build insight into what is driving those differences and seek to address them.

But of course, data can also make things worse. New forms of decision-making have surfaced numerous examples where algorithms have either entrenched or amplified historic biases, or indeed created new forms of bias or unfairness. Active steps to anticipate risks and measure outcomes are needed to avoid this. But organisations often lack the information and ability to innovate with AI in ways that will win and retain public trust. This means that organisations often forego innovations that would otherwise be economically or socially beneficial.
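To make Sheridan’s point about measurement concrete, here is a minimal, purely illustrative Python sketch of the kind of check she describes: computing selection rates per group from decision data and flagging large gaps. The groups, outcomes, and the 0.8 benchmark (borrowed from the US "four-fifths rule" used in employment contexts) are assumptions for illustration only.

```python
# A minimal sketch of using outcome data to see where bias may be occurring.
# Groups, outcomes, and the 0.8 benchmark are illustrative assumptions.

decisions = [
    {"group": "A", "selected": True},  {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]

def selection_rates(records):
    """Return the share of people selected within each group."""
    totals, selected = {}, {}
    for r in records:
        totals[r["group"]] = totals.get(r["group"], 0) + 1
        selected[r["group"]] = selected.get(r["group"], 0) + int(r["selected"])
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
benchmark = max(rates.values())
for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rate:.2f}, ratio vs best={ratio:.2f} -> {flag}")
```

The output is not a verdict on whether a process is fair, but it is exactly the kind of hard data Sheridan argues organisations need before they can investigate what is driving a gap.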

Uncertainty

According to the CDEI, 25% of medium-to-large businesses cite uncertainty about how to introduce ethical governance as a barrier to innovation. Industry is often unwilling to invest in AI over fears of regulatory risk and reputational damage, she explained.

Around 11% of vendors say that issues around ethics, governance, and public trust are the greatest barriers to AI adoption – the same percentage that cites a lack of skills as a barrier.

But frequently, when organisations do innovate, they do so in ways that undermine trust. If consumers don't trust that their data will be used responsibly, or that the right protections will be in place, they will be less likely to consent to sharing the data needed to build AI systems. And if consumers don't trust that the data used to train AI systems is fair, representative, or accurate, they are unlikely to trust the AI.

So, what can be done about it? David Frank is Government Affairs Manager at Microsoft. He outlined how regulation could help:

Regulation is important to ensure that systems are used in a trustworthy way, and obviously this is key for AI, because the way you demonstrate and realise the benefits of an AI innovation is by people having sufficient trust to use it.

Microsoft thinks regulation should be risk-based and reflective of the societal and technical nature of AI. What that means in practice is that there should be responsibilities on both those who develop an AI system and those who deploy it.

There's an international perspective here, he added: 

The UK can play an important role via the National AI Strategy and the work that will flow from it in shaping regulation. Starting by developing a robust framework at home, and then taking that forward internationally as part of the UK’s historic position on rules-based regulation... Trade agreements can also play a part in this. We note that the UK/New Zealand trade deal includes obligations to build out governance and policy frameworks around AI. We would hope that these could be incorporated into future trade deals.

So, internationally, where does Microsoft’s needle point in the debate about how to regulate AI - towards the EU’s horizontal approach, the UK’s more sector-led stance, or the US’ laissez-faire policy, which is focused more on individual risk assessments? Frank said:

There are definitely some uses of AI where a horizontal approach, which may be focused on a particular type of deployment, is appropriate, because of the potential for harm. And by harm, I mean discrimination against ethnic minorities, genders, and other communities.

The challenge of a more sectoral approach is that there are already a bunch of rules in a sector like insurance, for example, to prevent discrimination, or to enable redress if people feel they have been discriminated against. We might see a way for those regulations to be widened to include decisions made by algorithm.

However, it’s incredibly important, whichever approach is taken, that there is clarity. People need to build systems with a level of accountability, and people who buy and deploy them need auditability. It’s about being able to challenge decisions, and ‘explainability’ will be key to ensuring trust – so that people understand where an AI system was used, and why that solution was considered appropriate.
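As an editorial aside, here is a minimal sketch of the kind of decision record that could support the accountability, auditability, and explainability Frank describes. The field names and format are my own assumptions, not Microsoft’s or any regulator’s specification.

```python
# A minimal sketch of a decision record for auditability. The fields and
# format are illustrative assumptions, not any vendor's or regulator's spec.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    model_name: str     # which system produced the decision
    model_version: str  # exact version, so behaviour can be reproduced later
    inputs: dict        # the features the model actually saw
    output: str         # the decision that was made
    reason: str         # human-readable explanation for the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    model_name="loan_screening",        # hypothetical system name
    model_version="2024.03.1",
    inputs={"income_band": "C", "existing_credit": 2},
    output="refer_to_human_review",
    reason="Income band below automated-approval threshold",
)

# Persisting records like this gives deployers something to audit and gives
# affected individuals a concrete decision to challenge.
print(json.dumps(asdict(record), indent=2))
```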

New problems

Matt Hervey is Partner and Head of AI at international law firm Gowling. He warned that AI can contribute to a broad range of new ethical problems:

There are general concerns about AI. I don't think those concerns are unique to AI, but the technology certainly accelerates these changes. Things like sophisticated fraud through deepfakes, the rise of political extremism, algorithms recommending ever more extreme material, and of course, crystallising discrimination via an AI.

But on the other hand, he said, there are key sectors that deserve a special focus, such as finance, automotive, transport, and healthcare. Hervey argued that it currently suits a country like the UK to focus on sector-based regulation, for several reasons:

First, there is a real lack of bandwidth among people who understand AI. And that’s not just a matter for regulators, but for private companies as well. There is a real shortage of people who could develop regulation. Of course, we await the Alan Turing Institute's findings on that.

We should focus on those sectors that are most likely to drive innovation, because those innovations will have spill-over effects into other sectors. For example, the development of autonomous vehicles in this country has shown the benefits of a rolling programme of legislative change to enable R&D. I think a fundamental part of law and regulation will be how we assess the safety of AI systems, and that will be very much determined by their specific application within certain sectors. How do you prove a self-driving car will be safe? How do you prove a new medicine is safe? And so on.

International

But there is a more general and international dimension too, he acknowledged:

When it comes to regulating AI at a more general level, the AI Strategy does warn that international activity may overtake a national effort to build a consistent approach.

Frankly, I think that horse has already left the stable. We are well behind the EU when it comes to a central approach to AI regulation. I think, as GDPR has shown, the Brussels effect is strong, and UK companies will end up complying with EU law if it is at a higher level than our own. The final point on central regulation is the lack of technical solutions for transparency and bias. What the EU is really doing is codifying reasonable behaviour.

Adrian Weller is Programme Director for AI at the Alan Turing Institute. He said that the UK can – and will – lead this debate:

Building the right sort of safe and ethical AI will be a boost for the UK as a centre for leading AI research, and for the economy. The UK’s strength in innovation, technology, law, and regulation puts us in pole position to lead thoughtfully on key issues of AI governance. We want to establish the right guardrails and encourage the right kind of innovation – innovation we should require to be trustworthy: good for individuals and society. Getting that right will boost the right sort of economic growth, and it will avoid the potential for a backlash against AI and algorithmic systems that cause harm. Indeed, it will help customers embrace new technologies for good reasons, since these technologies will be good for them.

A key point I want to get across is that doing this will provide concrete stepping stones to moonshot or big-bet AI progress. Another key area I want to mention is robustness, by which I mean helping to ensure that an AI system trained in one environment will reliably work just as well in another. Working on that is really, in the long term, about moving towards a Holy Grail: AI systems that can employ generalisable concepts [aka artificial general intelligence]. Concrete solutions to problems will help us get there – for example, in medical image diagnosis systems. In ideal conditions, we have systems that can diagnose from a radiology scan just as well as human experts can, but they often fail in real-world settings.

Partly that's because they're not sufficiently robust. We need these systems to be robust across different lighting conditions, different machines, different users, and also different population subgroups.
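As an illustration of Weller’s robustness point, the sketch below computes a model’s accuracy per subgroup (here, per scanner site). The sites and results are invented, but a large gap between groups is exactly the failure he describes: a system that works in one environment and not another.

```python
# A minimal sketch of a per-subgroup robustness check: does accuracy hold up
# across scanners, sites, or population groups? The predictions and group
# labels here are invented for illustration.

results = [
    {"scanner": "site_A", "correct": True},  {"scanner": "site_A", "correct": True},
    {"scanner": "site_A", "correct": True},  {"scanner": "site_A", "correct": False},
    {"scanner": "site_B", "correct": True},  {"scanner": "site_B", "correct": False},
    {"scanner": "site_B", "correct": False}, {"scanner": "site_B", "correct": False},
]

def accuracy_by_group(records, key):
    """Return accuracy broken down by the grouping field `key`."""
    totals, correct = {}, {}
    for r in records:
        g = r[key]
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + int(r["correct"])
    return {g: correct[g] / totals[g] for g in totals}

for group, acc in accuracy_by_group(results, "scanner").items():
    print(f"{group}: accuracy={acc:.2f}")
# A large gap (here 0.75 vs 0.25) means the system is not robust across
# environments, even if its headline accuracy looks respectable.
```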

My take

Quite. At the core of this debate is a welcome dose of reality: general AI doesn’t yet exist, and many systems described as AI are really just tools that crunch Big Data to recognize hidden patterns – things that no human being could do alone without vast amounts of time and resources.

So, the key point is that AI systems, in a general sense, rely on human beings to gather and curate their training data – though simulated and synthetic data play a growing role in this space too.

But because many large databases have been gathered in ways that express human frailty, flaws, and prejudices – for example, data from the US legal system that dates back to before the civil rights movement – that data is sometimes inherently biased.

AI systems may then automate societal problems, as the COMPAS risk-assessment tool used in US courts was found to do a few years ago, when analysis showed it flagged black defendants as high risk far more often than white defendants with comparable outcomes – based on years of data that, essentially, encoded racial prejudice in American law enforcement.
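For the curious, the arithmetic that exposes this kind of disparity is simple. The sketch below compares false positive rates between groups – the form the COMPAS finding took – using invented data purely to illustrate the calculation, not drawn from any real dataset.

```python
# A minimal sketch of the kind of analysis that surfaces disparities like the
# COMPAS case: comparing false positive rates between groups. The data here
# is invented purely to illustrate the calculation.

cases = [
    # (group, predicted_high_risk, actually_reoffended)
    ("group_1", True,  False), ("group_1", True,  False), ("group_1", True,  True),
    ("group_1", False, False), ("group_2", True,  False), ("group_2", False, False),
    ("group_2", False, False), ("group_2", False, True),
]

def false_positive_rate(records, group):
    """Share of people in `group` who did NOT reoffend but were flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for g in ("group_1", "group_2"):
    print(f"{g}: false positive rate = {false_positive_rate(cases, g):.2f}")
# A persistent gap in false positive rates between groups is how the
# automation of historic prejudice shows up in the numbers.
```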

These are problems that must be ironed out before AI is adopted at scale and in ways that may have a broad social impact. The good news is that government recognises this. But by not pushing ethical training to the top of the skills agenda, organisations are failing to take active steps to solve these problems.

Never forget, they will be the ones blamed in the court of public opinion if they fail.
