How can we trust the NHS with 'big data' if it can't even get an app store right?
Summary: The NHS wants to use our data for 'big data' projects to improve services. But a new report this week suggests that maybe it isn't quite ready to be trusted.
The NHS has been dabbling in data projects over the past couple of years, most notably in the form of care.data, a project that aims to pull together all of the UK public's GP health records, store them in a central database and share them with the Health and Social Care Information Centre (HSCIC). The NHS believes that sharing and analysing information across the NHS will help to ensure that the quality and safety of services are consistent across the country.
However, following a privacy campaign and a backlash from the public, NHS England has had to reassess its approach. Despite assurances that the data would be pseudonymised, and despite those in charge at the time describing the project as a “moral obligation”, people weren't convinced that the NHS could be trusted. As a result, care.data has been 'paused' whilst a confidentiality review takes place.
Personally, I am in favour of projects such as care.data, as I think they could make a real difference to the way that health services are delivered in the UK. But when I read reports like the one released this week, I'm not surprised by the public resistance. How can we trust the NHS with 'big data' when it can't even get the basics right?
NHS health apps
In a report entitled “Trust but Verify”, published in the online journal BMC Medicine, researchers highlighted that apps used to monitor a citizen's personal health, and made available in the NHS' Health Apps Library, have been leaking personal data. The report states:
[A] third study describes a systematic security and privacy review of apps in the accredited NHS Health Apps Library, a space in which consumers might reasonably assume that such issues would be robustly addressed. However here, too, the authors found inconsistency and poor discipline, with apps storing medical data in ways that left them susceptible to interception or data leaking, as well as highly variable uses of privacy policies. In one case, an app was found to transmit a form of data explicitly claimed not to be transmitted in its privacy policy.
The key point for me is that whilst I may not trust an unverified app on the Android app store, I would assume that one found on an NHS website had been sufficiently vetted. In fact, I would take an application's presence on an NHS portal as an endorsement of, or a recommendation for, that app.
Kit Huckvale, a PhD student at Imperial College London and one of the authors of the study, told the BBC that he and his colleagues looked at 79 apps listed in the NHS library, supplying them with fake data over a six-month period to see how the information was handled.
Of those 79 apps, a whopping 70 sent personal data to associated online services and 23 did so without even encrypting the information. Huckvale added that more than half the apps had a privacy policy in place, but many were poorly worded and did not let users know what types of data were being shared.
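To make concrete what sending personal data "without even encrypting the information" means in practice, here is a minimal Python sketch. It is illustrative only: the endpoint URL and payload are hypothetical, not taken from the study. The point is simply that a plain HTTP request exposes the payload in cleartext to anyone on the network path, whereas HTTPS wraps the same request in TLS.

```python
# Minimal illustration (hypothetical endpoint and payload, not from the study)
# of the gap the researchers flagged: health data sent in cleartext vs. over TLS.
import json
import urllib.request

payload = json.dumps({"user_id": "demo-123", "blood_glucose_mmol_l": 5.4}).encode()
headers = {"Content-Type": "application/json"}

# Insecure: plain HTTP transmits the payload in cleartext, so it can be read
# in transit -- the pattern found in 23 of the 79 apps.
insecure = urllib.request.Request(
    "http://api.example-health-app.test/readings",  # hypothetical endpoint
    data=payload, headers=headers,
)

# Better: the same request over HTTPS is encrypted in transit by TLS.
secure = urllib.request.Request(
    "https://api.example-health-app.test/readings",  # hypothetical endpoint
    data=payload, headers=headers,
)

# urllib.request.urlopen(secure)  # would send the encrypted request
```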
The study also looked at the general reliability of widely available health apps, using insulin dose calculators as an example of apps widely adopted by patients. The findings are sobering and a stern reminder that not everything we find on an app store should be trusted. The study notes:
Insulin dose calculation is a basic task, for which we might reasonably assume we can trust a computer better than our human faculties. However, assessment of apps performing this calculation found a litany of errors that force us to consider critically the current ecosystem. It is alarming to read that 91% of dose calculators lack validation to check the data quality of user input and 67% risked making an inappropriate dose recommendation.
There was a disappointing lack of transparency too, with 70% lacking documentation for the formula used, 46% of developers failing to respond to requests for information, and two developers flat-out refusing to share their algorithm with researchers, citing commercial reasons.
Quality was no higher for paid apps than free ones, and no higher in the Apple store than the Android store, despite Apple having more stringent entry criteria for apps in general. Most errors pointed patients toward taking a higher dose of insulin than was needed, with the potential for avoidable hypoglycemia.
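To show what the missing "validation to check the data quality of user input" might look like, here is a short Python sketch. It is emphatically not a real dosing tool: the correction-dose formula and the plausibility limits are placeholder assumptions chosen only to illustrate the kind of sanity check the study found absent in 91% of calculators.

```python
# Illustrative sketch only -- NOT medical advice or a real dosing tool.
# The formula and the plausibility limits below are placeholder assumptions,
# included to show the input validation most calculators in the study lacked.

def correction_dose(current_mmol_l: float, target_mmol_l: float,
                    sensitivity_mmol_l_per_unit: float) -> float:
    """Return a correction dose in units, validating inputs first."""
    # Validation step: reject implausible or malformed input rather than
    # silently computing a dose from it.
    if not (1.0 <= current_mmol_l <= 35.0):
        raise ValueError(f"implausible glucose reading: {current_mmol_l}")
    if not (4.0 <= target_mmol_l <= 10.0):
        raise ValueError(f"implausible target glucose: {target_mmol_l}")
    if sensitivity_mmol_l_per_unit <= 0:
        raise ValueError("insulin sensitivity must be positive")

    dose = (current_mmol_l - target_mmol_l) / sensitivity_mmol_l_per_unit
    # Never recommend a negative dose; clamp at zero instead.
    return max(dose, 0.0)

print(correction_dose(12.0, 6.0, 3.0))  # -> 2.0 units
```

A calculator that skips checks like these will happily turn a typo (say, a glucose reading of 120 entered in the wrong units) into a dangerously large recommendation, which is exactly the failure mode the study describes.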
What can be done?
The BMC report looks at a number of options for improving the quality of medical apps made available via app stores. These include boosting app literacy amongst consumers; creating an app safety consortium; having app store owners enforce transparent evaluation of medical calculator apps; making app store owners take full responsibility for every aspect of the security and quality of medical apps; or even introducing government regulation. All have their pros and cons, and you can read about them in more detail in the report.
However, the authors of the study get to the crux of the argument by noting that quantity is no substitute for quality, and that we are in fact a long way from creating apps that deliver valuable medical care. They conclude:
Any one of these approaches will add complications and cost to the simple act of downloading an app, but this may still be preferable to avoidable serious adverse events inflicted by software bugs or sloppy practice. Do we want 100,000 medical apps, most of which are shoddy? Or do we want 1,000 that we can rely upon? It is the patients, their caregivers, and their healthcare professionals who should drive what an appropriate level of rigour might be, and ultimately they are the only ones who can exert pressure to change the system.
We believe most people would be surprised at the low standards of apps described by these three important studies and disappointed that the safeguards they rely upon in other spheres of life, such as truth in advertising, professional practice standards, or clinical testing of medical products, appear to be absent in this exciting and much-hyped area of techno-utopianism. In considering whether a bottom-up or top-down approach is best, we must also balance innovation and diversity of approaches against patient wellbeing; there is no point “disrupting” the established healthcare system if the new era is not safer for patients than the old one.

As medical innovators, this has been a difficult set of data to fathom. We eagerly look forward to a time when medical apps might be relied upon to do much more complex tasks than simply calculate formulae or illustrate inhaler technique; for example, recommending personalized dosage schedules, analyzing patterns in user behavior, interacting with the Internet of Things, perhaps even controlling implanted medical devices. The potential for benefit remains vast and the degree of innovation is inspiring, but it turns out we are much earlier in the maturation phase of medical apps than many of us would have liked to believe. To build the future we want, in which patients can trust their medical apps, we need to verify that they function as intended.
My take
Is the NHS really surprised that we don't trust it with our data? I reached out to NHS England for comment, but hadn't heard back at the time of publication.
Equally, the study makes clear that we are a long way from using digital technologies to meaningfully replace older systems in areas like healthcare. Until we can trust these 'apps' and use them for meaningful, personalised services, people should continue to rely on tried and tested approaches.