Data protection - the COVID-19 dimension

Chris Middleton, November 30, 2020
The Westminster eForum on data protection debates a key issue in 2020 - how a successful fight against the coronavirus is rooted in trust in data.

Image of a person at work wearing a face mask to protect against COVID-19
(Image by Engin Akyurt from Pixabay)

When Bloomberg published its international COVID-19 resilience rankings earlier this month, showing how different countries compare in their battles against the virus, a key statement was buried in the analysis:

Success in containing COVID-19 with the least disruption appears to rely [...] on governments engendering a high degree of trust.

Where administrations have let the virus get out of control through a lack of early, decisive action, trust in the systems that have since been put in place - along with citizens' trust in the science, the technology, the data, and in each other - is key to enforcing concerted remedies. Governments preside over this, setting both the detail and the tone.

Yet in some countries, trust in government competence is low, which has led to growing protests against whatever measures are put in place to protect public health. As a result, the virus spreads further and more people die from a preventable, isolatable disease. Meanwhile, cults form around conspiracy theories, encouraged by bots and troll farms: another sign of falling trust in central authority. In any vacuum, panic sets in, and people seek out voices that speak to those fears.

The same broad principle applies to data privacy and protection: trust and good governance help to make services work for everyone. This is especially true in the COVID era, when citizens are being asked to accept greater intrusions into their lives in order to track and isolate the virus. The risk that some of those intrusions might become permanent while we are distracted by catastrophe is a critical issue for discussion.

As many as 85 percent of the population have been working from home in some countries, which itself has data protection implications: the perimeter of organisations has widened to embrace employees' homes, including their insecure wifi routers, laptops, tablets, and smart home devices. Meanwhile, confidentiality is rooted in a common-law duty of care, not just in data regulations.

The problem, of course, is that survey after survey reveals a paradox: most of us routinely accept the Ts & Cs of apps, platforms, and services without reading them, yet trust in data-based systems is falling, especially in the wake of the Facebook/Cambridge Analytica debacle and other scandals that reveal the average citizen to be a minnow in a pool of data-guzzling sharks.

In the UK, news of data/tech contracts being handed to friends of government advisors (who are themselves setting up research agencies) is just chum in the shark-infested waters. The UK's contact tracing system has also been a multibillion-pound exercise in 'chumocracy'; at times it has appeared to be as much about tracing government contacts with private-sector cronies as about isolating a killer virus.

So, what to do?

These were among the issues debated at the Westminster eForum on data protection. The core context was Brexit and how this is affecting data governance for a UK that is adrift from both the EU and the US (see my separate report). But a useful stream zeroed in on the implications of COVID-19 for data protection and public health, and vice versa.

Dawn Monaghan is Head of Information Governance Policy at NHSX, the division of the UK's health service that drives digital transformation and tech governance. She explains that healthcare itself has been forced to become remote, with video consultations and even remote blood pressure checks, all of which impacts on the way healthcare providers use data.

Those things are all covered by the same information governance frameworks, regimes, legislation, and codes, and in particular the same set of principles. But during the last nine months, the shift and impact in information governance has not necessarily been with the Data Protection Act and its principles. The impact has been on the ability to satisfy the common-law duty of confidentiality. This causes us to have concerns about the sharing of confidential patient information for purposes other than direct care.

In normal times, when we would implement a policy or programme, there would be lots of debate and collaboration, arguments and disagreements, and often things would get bogged down. But in a pandemic things have to move very quickly. In a public health emergency it's a different set of circumstances.

So what does this mean for healthcare - not just in the UK, but in any country that has been amassing data to help beat the virus? She says: 

Purpose is always king. People's lives are at risk, so that purpose has been very clear, which means it has been easier to gain the confidence of frontline staff and other people, lobby groups, to understand that purpose. But the Secretary of State's order, which basically says you are required to share this data for public health purposes, has shifted the mindset and the culture within the system from a duty of care to a duty to share, and shifted liability to the Secretary of State.

In May or June, we were getting a positive vibe from citizens, patients, and those in care. There was a realisation that data and information are crucial in a pandemic, not only for research, statistics, and policymaking, but also for individual care - and the impact on care for family, friends, and wider society, and indeed globally. 

But the downside is that transparency has not been keeping up - there was a worry that data was being collected for things that we weren't being told about, and for things that we wouldn't want it to be used for.

This perception - justified or not - has had troubling and widespread implications, she suggests, in that it has encouraged some people to believe that we now live in a data free-for-all.

I've heard quite a few people say, ‘Oh it's brilliant now because we can share data for anything with anyone, because we've ripped up the rule book'. No, we absolutely have not. We're following GDPR, just like we have always followed GDPR, but the common-law duty of confidentiality and the way in which we can set it aside has been made a lot easier, because we're in a pandemic.

So we've got to be absolutely clear going forward, when we are no longer in a public health emergency, that we've got an exit strategy and we know where all that data has gone, what it's been used for, and where it's being stored - and that when it's no longer needed for that purpose, it is deleted and treated in the right way. But also, when it needs to be kept for proper purposes and long-term research, we've got a legal gateway for being able to do that. So an exit strategy is underway.

The data strategy commissioned by the Secretary of State has eight outcomes, two of which are key to this, she explains:

The first is the simplification of information governance at the frontline. And the second is making sure that we focus on transparency and get that right.

Networking in a pandemic

So how has the pandemic affected other organisations' data policy? Matthew Houlihan is Senior Director of Government and Corporate Affairs at networking and communications behemoth, Cisco, whose products have been at the core of 2020's culture of mass, enforced remoteness. He says that lockdown made some problems all too apparent for organisations that had working from home thrust upon them, both operationally and culturally.

One of the features that we saw was not only a lack of devices, but also a huge increase in the use of out-of-date devices, which didn't have the right software or security. [...] We saw a 90% increase in the use of out-of-date devices in the first three weeks of March 2020. Obviously that creates a challenge in terms of ensuring that those devices, if you're going to be using sensitive data on them, are protected.

Related to that, if you're working remotely you want as a company or public sector organisation to ensure that your employee is allowed to access the system or data when they're working remotely. So, one of the challenges is verifying users remotely.

As people work more remotely, they're more likely to use more cloud-based applications, and we saw a 40% increase in authentications to cloud applications from March to June 2020. This highlights the growing complexity of the data protection ecosystem: you're increasing the number of links in the chain, increasing the number of people in the ecosystem. And with cloud applications also comes an obligation to ensure that you know how that data has been transferred internationally and all the legal ins and outs of that. 

We think that building trust in the digital tools that people use for mobile is clearly critical if we're going to make a success of remote working on an ongoing basis. But the evidence right now isn't great for that.

According to Cisco's own 2020 consumer privacy research, 60% of people are moderately or very concerned about the remote access tools they are using. Consumers are also concerned that their data will be used for unrelated purposes in the pandemic, that it won't be anonymised or delivered properly, and will be used more broadly than advertised. 

Data on the way that people are working from home/remotely is a further consideration, if employers seek to monitor the productivity of their employees - something that is likely to amplify mistrust, not restore workers' confidence. 

My take

An important debate that reminds us, in our determination to hit the reset button, not to lose sight of the way things were before the coronavirus struck. Great leaps forward must not come at a cost to citizens' future privacy and security. The sharks need to be kept at bay, not bred in state captivity. Trust is everything, and some companies and governments will need to work much harder to restore it.