Ethical concerns around healthcare data exploitation demand legislative action

Stuart Lauchlan, January 3, 2019
Could your activity tracker and health app be making your privacy rights sick?

Increasingly sophisticated Artificial Intelligence makes it easier for companies to gain access to people's health data, and legislative action is needed to counter unethical exploitation of it.

A study led by the University of California, published in the journal JAMA Network Open and analysing data covering more than 15,000 Americans, warns that privacy standards around the 1996 HIPAA (Health Insurance Portability and Accountability Act) legislation need to be revised and updated in the face of advances in machine learning and AI.

And if you got a Fitbit or an Apple Watch for Christmas, here’s the bad news: it’s the data collated by apps on such smart devices that is providing a tempting source of information for more unscrupulous entities.

Professor Anil Aswani of the Industrial Engineering & Operations Research Department (IEOR) in the College of Engineering, who led the research, said on the Berkeley website:

The results point out a major problem. If you strip all the identifying information, it doesn't protect you as much as you'd think. Someone else can come back and put it all back together if they have the right kind of information. In principle, you could imagine Facebook gathering step data from the app on your smartphone, then buying health care data from another company and matching the two. Now they would have health care data that's matched to names, and they could either start selling advertising based on that or they could sell the data to others.
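The matching Aswani describes is a classic linkage attack: an "anonymised" record can be re-identified when a quasi-identifier it contains (such as a sequence of daily step counts) also appears in a second, named dataset. The sketch below is a hypothetical illustration only; all names, step counts and diagnoses are invented, and real attacks use fuzzy rather than exact matching.

```python
# Hypothetical sketch of a linkage attack: "anonymised" health records
# are re-identified by matching a week of daily step counts against a
# named dataset from a fitness app. All data below is invented.

# "Anonymised" health dataset: no names, but step counts included.
health_records = [
    {"steps": (8123, 9544, 7210, 10234, 6888, 9102, 8450), "diagnosis": "hypertension"},
    {"steps": (3021, 2877, 3500, 2990, 3120, 2801, 3300), "diagnosis": "diabetes"},
]

# Named dataset from a fitness app: names plus the same step counts.
app_records = [
    {"name": "Alice", "steps": (8123, 9544, 7210, 10234, 6888, 9102, 8450)},
    {"name": "Bob",   "steps": (3021, 2877, 3500, 2990, 3120, 2801, 3300)},
]

def link(health, app):
    """Re-identify health records by exact match on the step sequence."""
    by_steps = {r["steps"]: r["name"] for r in app}
    return [
        {"name": by_steps[h["steps"]], "diagnosis": h["diagnosis"]}
        for h in health
        if h["steps"] in by_steps
    ]

reidentified = link(health_records, app_records)
print(reidentified)
```

The point is that a week of step counts is effectively unique to one person, so stripping names alone offers little protection once a second dataset exists to match against.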

The study makes a formal recommendation to introduce new policies:

First, policymakers should consider developing regulations to restrict the sharing of activity data by device manufacturers. Although these organizations are collecting and sharing sensitive health data, they are likely not bound by existing regulations in most circumstances. Second, privacy risks from sharing activity data can be somewhat mitigated by aggregating data not only in time but also across individuals of largely different demographics. This consideration is particularly important for governmental organizations making public releases of large national health data sets, such as NHANES (National Health and Nutrition Examination Survey).
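The mitigation the study recommends can be sketched in a few lines: instead of releasing per-person step sequences, release only coarse statistics aggregated both over time and across individuals within broad demographic groups. The data and group labels below are invented for illustration.

```python
# Sketch (invented data) of the aggregation mitigation: publish one
# figure per demographic group, so no individual's step sequence
# survives in the released data.
from statistics import mean
from collections import defaultdict

# Raw, per-person daily step counts with a coarse demographic label.
raw = [
    {"age_band": "18-39", "steps": [8123, 9544, 7210]},
    {"age_band": "18-39", "steps": [6011, 7402, 6800]},
    {"age_band": "40-64", "steps": [3021, 2877, 3500]},
    {"age_band": "40-64", "steps": [4500, 4212, 4100]},
]

def aggregate(records):
    """Release only the group-level mean of each person's mean daily
    steps: aggregation in time (per-person mean) and across people
    (per-group mean)."""
    groups = defaultdict(list)
    for r in records:
        groups[r["age_band"]].append(mean(r["steps"]))
    return {band: round(mean(values), 1) for band, values in groups.items()}

print(aggregate(raw))
```

Aggregating this way trades granularity for privacy: the released figures remain useful for population-level research while no longer pointing back to any one device owner.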

The need for new regulation is urgent, argued Aswani, noting that advances in AI will make it even easier for firms to capture data:

HIPAA regulations make your health care private, but they don’t cover as much as you think. Many groups, like tech companies, are not covered by HIPAA, and only very specific pieces of information are not allowed to be shared by current HIPAA rules. There are companies buying health data. It’s supposed to be anonymous data, but their whole business model is to find a way to attach names to this data and sell it.

Ideally, what I’d like to see from this are new regulations or rules that protect health data. But there is actually a big push to even weaken the regulations right now. For instance, the rule-making group for HIPAA has requested comments on increasing data sharing. The risk is that if people are not aware of what’s happening, the rules we have will be weakened. And the fact is the risks of us losing control of our privacy when it comes to health care are actually increasing and not decreasing.

Global concerns

While this study was based on US stats, the ethical questions surrounding the impact of AI and other tech on health data privacy are a global concern. In the UK, legislators are demanding an explanation from Google’s DeepMind AI operation as to why it is transferring control of Streams, a digital health app it developed using NHS identifiable patient data, over to Google itself.

DeepMind’s pitch on this is that the move over to Google will open up global reach for the apps and enable it to become an “AI-powered assistant for nurses and doctors everywhere”. But legislators in the UK are concerned that the transfer of ‘ownership’ means that the app now falls outside the scope of an independent review panel set up to monitor DeepMind’s handling of patient data.

In December, UK Health Minister Lord O’Shaughnessy said that questions would be asked of Google:

We have sought reassurance that none of the current contracts with National Health Service trusts will be transferred to Google, and any changes will require the agreement of the trusts. The patient data processed for Streams will remain controlled by the trusts, and will not be used for any purpose other than the provision of direct patient care, as specified in existing agreements. We are working with DeepMind and Google as they consider how to provide assurance on the use of patient data as Streams grows into a global product.

None of which provides much reassurance that the Department of Health has a clue what’s going on; it appears to be playing catch-up with Google. Previously O’Shaughnessy said that the department was working with the Information Commissioner’s Office and the Centre for Data Ethics to ensure that “anything that happens as a result of the transfer of Streams” is legally compliant.

My take

A useful adage here for the UK health minister might be that prevention is better than cure. But it seems a little late for that medicinal advice to be taken on board. The Secretary of State for Health, Matt Hancock, is a big fan of AI and wants to see a revolution in its use within the NHS. He does cite the need for care:

Standards of interoperability, privacy and security complement each other. It is my profound belief based on half a lifetime’s experience that good-quality data management will both improve privacy and security, as well as improve innovation and user experience. Health data is often deeply private, and its privacy and cyber security must be protected. But we don’t do that by preventing its use - we do it with clear rules about its use based on consent and strong architecture. The new Data Protection Act, implementing the gold-standard GDPR into our laws provides the right legal basis for privacy and must be respected in letter and spirit across health and social care.

We have seen a number of false starts on the use of patient data in the NHS. There are still many scarred by the debacle. But again, we need to learn the right lesson. The wrong lesson would be to leave patient data alone as too much trouble. The right lesson is that we can get this right if we are up front with patients about what we are doing and why. We know that patients will give their consent if they hear from people they trust about the difference that their data could make.

It’s a bold vision, but not one without risk. After last year’s scandals surrounding Facebook, the issue of data privacy isn’t going to become any less important in 2019. The year kicked off with a warning from Privacy International that 61% of the Android apps tested in its investigation automatically transfer data to Facebook the moment a user opens the app, whether or not the user is logged into Facebook. The privacy group warns:

If combined, data from different apps can paint a fine-grained and intimate picture of people's activities, interests, behaviors and routines, some of which can reveal special category data, including information about people’s health or religion.

We’re going to see a lot more focus on the ethical questions surrounding AI-enabled exploitation of health data. As the fitness tracker industry booms, there’s potential for a lot of good to be done, but also for a lot of not-so-good. Legislators and the healthcare industry need to get ahead of this before harm is done.
