The NHS has 86 live AI projects, says NHS England AI Director

Chris Middleton, July 11, 2023
Summary:
The National Health Service (NHS) potentially has an AI advantage compared to private healthcare organizations, given its national scale. But there are plenty of risks to consider too.


With decades of patient records on over 65 million citizens, the NHS’ database – which includes datasets on diagnostic imaging, maternity services, mental health, and emergency care, among others – is comprehensive and lacks the fragmentation of private healthcare systems. It is also highly attractive to suppliers, including software vendors.

The NHS has numerous live AI projects, according to Dominic Cushnan, Director of AI, Imaging and Deployment for NHS England – note the national, rather than UK-wide focus. Speaking at a Westminster Health Forum policy conference on AI in health and social care, he said:

We currently have over 86 live AI projects through multiple guises, from traditional, interesting innovations that need to be tested, right through to those that have got medical-device clearance and need further support: generating the evidence that the product is efficacious so it can be used in a clinical setting.

Those 86 live projects represent a significant uptick from the 30+ that were ongoing in 2019, when government data suggested that 52% of NHS Trusts had begun AI research programmes. It seems likely that even more projects would be ongoing today had the pandemic not dominated the work of local care providers.

However, the question of smart medical device clearance is a tough one, competitively, for the NHS. In the US, the Texas Medical Center (TMC) in Houston – the world’s largest single life-sciences centre, with 61 institutions and over 100,000 employees – has a huge, dedicated tech accelerator, TMCx, designed to speed innovations into clinical settings. By contrast, the NHS’ Transformation Directorate, into which NHSX was folded last year, has a broader, more general digital transformation remit.

That said, the UK is certainly a world leader in AI innovation and investment, said Cushnan:

Current UK investment in AI, across government, is about £2.5 billion. And we know that the UK is at the forefront of progress, third in the world for research and development, only beaten by the Chinese and the Americans. 

We are home to one-third of Europe's total AI companies, twice as many as any other European country. And our world-leading status is tied to a thriving research base and pipeline of expertise, graduating through our universities, and the ingenuity of innovators and policymakers, and the government’s long-term commitment to invest in AI.

Good news. Then Cushnan added a note of pragmatism and caution amidst the flag-waving: 

Artificial intelligence has the potential to transform healthcare delivery, both for the clinician, and for patient outcomes. But the role of artificial intelligence on the clinical side is very much to augment, and not to replace, human expertise. We need to be clear that these types of technologies are there to support clinicians in making their decisions.

Innovations include virtual wards, plus AI systems for analyzing, segmenting, and categorizing a variety of medical scans, images, and other data. The strategic aim is to move away from treating sick patients after the fact and into the realm of predictive health. 

He continued:

A lot of the technologies are still maturing, and we're seeing a very buoyant and mature market in diagnostic products. So, the entire raison d'être of the research agenda has been around generating the evidence. 

But we can only use these technologies if we manage to leverage and unlock the large-scale capabilities of the entire NHS. And that includes supporting frontline digitization through other programmes, and figuring out what the best way is to share data appropriately through the health and social care system.

These are major ethical and governance considerations, therefore, and relate to workforce skills in a healthcare system that has been stretched to breaking point over the past four years. 

That includes skills in data-sharing, management, and security – sensitive areas in the wake of the DeepMind scandal last decade. In 2017, after a year-long investigation by the ICO, the Royal Free London NHS Foundation Trust was found to be in breach of the Data Protection Act for handing sensitive data on 1.6 million patients to Google, unencrypted and without pseudonymization.
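Pseudonymization, in this context, means replacing direct identifiers with tokens that cannot be reversed without a separately held key, so records can still be linked for research without exposing who they belong to. The minimal Python sketch below illustrates the idea with a keyed hash; the field names, key handling, and NHS number are purely illustrative, not a description of the Royal Free or DeepMind pipeline.

import hmac
import hashlib

# Illustrative only: a real deployment would keep this key in a key vault
# or HSM, under the data controller's sole control.
PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymise(nhs_number: str) -> str:
    # The same NHS number always maps to the same token, so records can be
    # linked across datasets, but the mapping cannot be reversed without the key.
    return hmac.new(PSEUDONYMISATION_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

record = {"nhs_number": "9434765919", "diagnosis": "acute kidney injury"}
safe_record = {
    "patient_token": pseudonymise(record["nhs_number"]),
    "diagnosis": record["diagnosis"],
}
print(safe_record)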

Lest we forget, this is also an area rife with public distrust. In August 2021 – in the midst of the COVID-19 crisis – more than one million Britons opted out of NHS data-sharing in a single month, over concerns about government plans to hand patient data to private companies.

The subtext, of course, is the public’s fear of precious health services being privatized by a government that seems hell-bent on running the system down – a hot-button topic that seems likely to be pushed by growing partnerships between Whitehall and the likes of Alphabet, Amazon, Microsoft, et al. Put simply, at what point does running supportive technology services become running the health services themselves?

Arguably, an NHS-wide AI strategy can only accelerate the involvement of private companies in public healthcare. So, how concerned is the NHS itself about this? And what measures are in place to protect patients from data-gathering by any unscrupulous vendors, using private data to train their commercial products?

Cushnan said:

The way the NHS is set up, the federated model – not just for AI, but for all products – means every company has to go to a hospital, or to a general practice network, to sell their wares. So, from a legal perspective, we have mechanisms in place already to manage this, both from the commercial framework that the NHS provides, and our activities to make sure that products are regulated.

So, I would be clear that, yes, people are selling to the NHS, but they have clear regulatory requirements in terms of what they need to produce to get their product to market, and when we purchase them as well. Both from a foundational hospital perspective, right through to NHS England and the work that we're doing.

Scaling ethics and governance

One such company is Reading, UK-based responsible AI provider, Chatterbox Labs, whose products aim to validate others’ AI models. CTO Dr Stuart Battersby told delegates:

We deal with the unintended consequences of AI. We have [as a community] talked about moving to predictive technology, a switch enabled by AI, but as we do that, we need the right guard rails in place.

So, what might those unintended consequences be? In a frank and useful presentation, Battersby said:

In healthcare, a lot of the focus is on medical devices, but don't forget back-office functions. When we think about cohort analysis, the supply chain, who gets treatment first, and so on, these are all things that should be evaluated as well.

This alone can be an organizational challenge, he explained:

When you start, the issue often gets chucked to the data science team, because products have ‘AI’ in the name. But they alone shouldn't be tasked with this responsibility. They're great at what they do: they build fantastic models, but they're not trained in ethics, law, or compliance. 

You need an organization-wide structure to do that. And that goes all the way up to the leadership of your organization. You need the people who do the operational side of things, your data scientists and analysts, but you also need input from subject experts, governance, compliance, legal, and regulatory, all the way up to the board. 

Getting a team with a diverse background in terms of experience really helps drive that conversation. But then you have to figure out what you want to actually measure your AI against. 

Wherever you are sourcing your AI – maybe you're developing it in house, maybe you're buying it in – what matters to you? What are the ethical principles of your organization? But those high-level ethical principles really need to be backed up by something. What's the detail behind your ethical principles, what gave you those nice statements? 

And then you actually need to measure that the AI you are building or buying in actually does conform to those things. You need the facts about what the AI is doing! 

The problem here is that people often jump immediately to the notion of ‘explainability’ or transparency. That's an important factor, but so are a bunch of other things. Are the system or data biased, and are they robust? Are there privacy risks within that? Can the model be imitated? 

Repeatability and scale are really important here too. You need a system and an ethical process that enables you to do this repeatedly, and which isn't tied to a particular technology or cloud environment.

All excellent points, and rarely made.
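Battersby did not prescribe specific metrics, but one concrete way to get "the facts about what the AI is doing" is to measure outcome rates across patient cohorts and track the gap between them. The short Python sketch below computes a demographic-parity-style check on toy data; the cohorts, predictions, and what counts as an acceptable gap are assumptions for illustration, not Chatterbox Labs' actual method.

from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    # Share of positive model outputs (e.g. "refer for treatment") per cohort.
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += pred
    return {g: positives[g] / counts[g] for g in counts}

# Toy data: 1 = model recommends referral, tagged with a hypothetical cohort label.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
cohort = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rate_by_group(preds, cohort)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap: {gap:.2f}")

The same pattern of small, repeatable checks extends to the other properties he lists – robustness under perturbed inputs, privacy leakage, model imitation – which is what makes a process that isn't tied to one technology or cloud environment feasible.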

An impossible task

Another challenge was expressed by a different vendor: Edinburgh, Scotland-based medical imaging specialist, Canon Medical Research Europe. The company’s President Dr Ken Sutherland offered a refreshing note of caution in the face of the extravagant claims being made for AI in this context:

It's essentially impossible to build a big enough database, or a diverse enough database [for some medical applications]. 

The challenge is, if you're going to invent an AI to, say, recognize a lung tumour, then you are going to need examples of tumours in all sorts of people from all over the world. All genders, all ethnicities, all of these different variables – weights, height, you name it. We have to have examples of different people with and without that condition in a massive data lake. 

But that's probably not realistic, even for a national organization. Imagine: you train your AI on the whole population of the UK, and you do that statistically and representatively. But when you export your AI model to India or China, you will find it is nothing like as accurate in that population.

A challenge for a UK that, post-Brexit, finds itself adrift from a continent of valuable data. So, what’s the solution? Sutherland said:

To create reliable, robust AI models that are truly generalized means using data from all over the world. And the only realistic method of doing that is federated [or collaborative] learning.

Essentially, this is the process of training global machine learning algorithms on data that does not leave its local nodes, wherever they are located in the world. Sutherland continued:

To create safe locations where investigators can go to access data that doesn't leave the NHS, but you can go there and do your training. And it's okay to do that locally, because as long as you're aggregating what you learn at each different site, then you can create a generic model that is usable around the world.

That's a key piece of technology innovation that's going to help us. […] And if we've got that transparency about where the data is, who's continuing to own that data, and we're proving that we're following robust processes and creating effective AI, then I think we should be able to push it into broad-based adoption.
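Sutherland did not go into implementation detail, but the core mechanics of federated learning can be sketched briefly: each site trains on data that stays local, and only the resulting model parameters are sent back and aggregated, weighted by how much data each site contributed. The Python sketch below is a FedAvg-style toy using a linear model on synthetic data; it is an illustration of the technique under those assumptions, not of Canon Medical's or the NHS's actual systems.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One site's training pass; the raw data (X, y) never leaves the site.
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w, len(y)

def federated_round(global_weights, sites):
    # Aggregate locally trained weights, weighted by each site's sample count.
    # Only model parameters are shared with the central aggregator.
    updates = [local_update(global_weights, X, y) for X, y in sites]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Toy data standing in for imaging features held at three hospitals.
rng = np.random.default_rng(0)
sites = []
for n in (200, 150, 400):
    X = rng.normal(size=(n, 3))
    y = X @ np.array([0.5, -1.0, 2.0]) + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

w = np.zeros(3)
for _ in range(20):
    w = federated_round(w, sites)
print(w)  # converges towards [0.5, -1.0, 2.0] without pooling any raw data

In a live deployment, the aggregation step would also carry the governance Sutherland describes: auditable records of which sites contributed, who continues to own the data, and how each local model was validated.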

My take

An interesting event that aired some of the practical challenges of implementing AI in health and social care – an area where the UK has an advantage in having a single public healthcare organization covering the whole population. Handled ethically and appropriately, that data could be a boon for citizens, as long as the government focuses on positive outcomes rather than on monetising its greatest asset.
