AstraZeneca’s rules for implementing responsible, ethical AI

Chris Middleton, June 21, 2022
Summary:
AstraZeneca delivers a candid presentation, acknowledging that easy answers to ethical AI are hard to come by.

(Image of a human hand reaching out to a digital hand, by Gerd Altmann from Pixabay)

Multinational pharma and biotech giant AstraZeneca has shared some of its rules for implementing responsible and ethical artificial intelligence (AI) within its business.

For the British-Swedish behemoth, there are four foundations necessary for AI governance: an inventory (a log of the AI technologies in use); a definition (what counts as an AI application); the governance framework and controls themselves; and the overarching AI standards and policies.
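
To make the relationship between those foundations concrete, here is a minimal sketch of what a single inventory record might look like in code. Every class and field name below is a hypothetical illustration; AstraZeneca has not published its schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class RiskTier(Enum):
    # Illustrative tiers only; the presentation mentions "high" and
    # "very high" risk projects, but the full taxonomy is not public.
    LOW = "low"
    HIGH = "high"
    VERY_HIGH = "very_high"


@dataclass
class AIInventoryEntry:
    """One logged AI system, tying the four foundations together:
    the inventory record, the definition check, the controls applied,
    and the standard the system is assessed against."""
    system_name: str
    business_area: str         # owning unit in the federated structure
    meets_ai_definition: bool  # outcome of the 'definition' foundation
    risk_tier: RiskTier        # drives which controls must apply
    controls_applied: List[str] = field(default_factory=list)
    governing_standard: str = "Global AI Standard"  # hypothetical name
```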

But along the way, there are several specific challenges. These are ongoing and include:

  • getting the right scope on defining AI – e.g. is it a definition that would be recognized globally and for every application?

  • the existence of draft, rather than mature, AI regulations in many territories

  • limited examples of success from early adopters to compare with and benchmark against

  • the problem of balancing innovation with good, responsible governance – both locally and globally

  • the organization’s structure – a mix of global and federated, making it harder to achieve universal governance standards (should those even be desirable)

  • plus the difficulty of governing third-party AI solutions: can they conform to the organization’s centralized AI standards and governance policy?

Keep asking questions

Speaking at the AI Summit 2022 at London Tech Week, Wale Alimi, AstraZeneca’s Lead in Artificial Intelligence Governance, told delegates that the company is still exploring these issues and asking questions in a global marketplace. There is no simple, monolithic solution to the challenges, he suggested. Alimi said:

We're a global organization, so which regulations do we need to be cognizant of? And how do we ensure that what we're rolling out would not conflict with those regulations?

We've got a huge business in China, and China has rolled out its own regulations. So, we need to be checking and confirming that things are aligning with those. Plus, we expect European regulations to come through, so are they conflicting? And how do we manage that?

Alimi’s presentation came as the issues of data protection, AI regulation, and the impact of related systems, such as autonomous technologies and real-time facial recognition, are the subject of intense global, regional, and national debate.

As previously reported, last week the UK government signalled its intention to depart, in some respects, from European data protection rules. But it has also ‘baked in’ the principles of ethical AI development and implementation as part of its national AI strategy.

Meanwhile, there is intense debate in the US about whether to follow the EU’s lead in protecting citizens from Big Tech’s data intrusions, and whether to regulate technologies such as AI and real-time facial recognition at federal level.

Should the onus be on protecting citizens and consumers, as Europe believes? Or on enabling private-sector innovation until harms can be addressed with light-touch regulation, the policy long pursued by the US? But can businesses really be trusted to act responsibly, post Cambridge Analytica et al?

At state and federal level, the US is inching towards a more European approach to reining in the power of Big Tech. Meanwhile, the UK has signalled its intention to strike out alone – away from Europe, and perhaps towards Asia.

Multinational problems

This politicking, and these cultural differences, pose severe challenges for any multinational, forcing it to decide whether to create a global solution that attempts to please everyone, to pick a side and maintain higher standards than necessary in some territories, or to adopt a piecemeal, local approach to governance and standards.

Alimi explained that AstraZeneca’s response has been to create a global discussion center, so that the organization can work collaboratively towards the right answers: a process of constant organizational learning. He said:

We have an active consultancy office where people come from across the globe to ask questions around ‘What should I be doing or not?’

The federated structure of the organization means deciding how much we can centralize, and how much we can standardize the things we are rolling out. Do we put in place guardrails and leave each business area to determine what they can put in place?

Well, that's the approach that we've taken, with some level of oversight from us as a data office.
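
In software terms, that "guardrails with central oversight" model resembles a thin global policy layer that each business area extends with its own checks. The sketch below is purely illustrative; the rules and names are assumptions, not AstraZeneca's implementation:

```python
# Hypothetical guardrail model: a central baseline every project must
# pass, plus extra checks each business area defines for itself.

GLOBAL_GUARDRAILS = [
    lambda project: project.get("risk_assessed", False),
    lambda project: project.get("privacy_reviewed", False),
]


def passes_governance(project: dict, local_checks=()) -> bool:
    """Run the central guardrails first, then any business-area rules.
    The data office's oversight is modelled simply: the global list
    cannot be bypassed by local teams."""
    checks = list(GLOBAL_GUARDRAILS) + list(local_checks)
    return all(check(project) for check in checks)


# Example: an R&D unit adds a stricter requirement of its own.
rd_checks = [lambda p: p.get("scientific_review_signed_off", False)]
project = {
    "risk_assessed": True,
    "privacy_reviewed": True,
    "scientific_review_signed_off": True,
}
print(passes_governance(project, rd_checks))  # True
```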

Meanwhile, the organization is asking itself whether it needs specific governance and standards structures for AI, or whether AI can “piggyback” on existing rules, rather than exist in a policy ivory tower.

Governance of third-party solutions is a particular challenge. He said:

When we procure AI solutions, or when we procure IT systems that have got some AI capability in there, are we expecting them to live up to our principles? And if so, how can we demonstrate that they're doing so?

And when we collaborate as an R&D organization – we do a lot of scientific collaboration – how are we ensuring that our third parties are living up to our principles?

These are some of those challenges we have to go through. I wouldn't say we are there yet. We are still dealing with them as time goes on.

So, what lessons have Alimi and his team learned from grappling with these challenges locally and globally? He said:

We went down the route of implementing an AI global standard, but then on the back of that, deciding what policies and operating procedures to put in place locally.

What we have done in AstraZeneca with all our high- and very-high-risk projects is we expect, at the point of deployment, that the lead scientist or lead project manager will certify that they have lived up to our ethical AI certification.
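
As a rough illustration of that deployment gate, the sketch below blocks release of a high- or very-high-risk project until a named lead has certified it. Again, the structure and names are assumptions for illustration, not AstraZeneca's actual tooling:

```python
class CertificationError(Exception):
    """Raised when a risky project reaches deployment uncertified."""


def deploy(project: dict) -> None:
    # Hypothetical gate: high- and very-high-risk projects need a named
    # certifier (lead scientist or lead project manager) before release.
    risky = project["risk_tier"] in ("high", "very_high")
    if risky and not project.get("certified_by"):
        raise CertificationError(
            f"{project['name']}: no ethical AI certification on record"
        )
    print(f"Deploying {project['name']}...")


# A certified high-risk project deploys; an uncertified one would raise.
deploy({"name": "trial-screening-model", "risk_tier": "high",
        "certified_by": "lead.scientist@example.com"})
```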

My take

A fascinating presentation, not least because it was transparent about the real-world difficulties of governing a fast-emerging technology that may, at some point, make decisions about human lives.

Alimi should be commended for being open about the local and global complexities that organizations face, especially multinationals. AI, it seems, is a long and difficult journey, not the simple destination of marketers’ sales pitches.
