Now that the UK has left the European Union (EU), the opportunity to forge new policies is clear, according to experts at a seminar organised by tech trade association techUK on the topic of re-inventing UK data protection. However, the spectre at the feast – or spectator at the regulatory bonfire, perhaps – is data adequacy with Europe.
The inconvenient truth in any narrative about an independent Britain in a digital world is that the cloud isn’t a fog of code in the sky; it’s data centres built on land under local regulations. For the UK, most of those are in the EU – or the US. So, stray too far from the terms of GDPR/the Data Protection Act 2018 and Britain’s tentative data adequacy agreement with Brussels might be torn up.
Factor in that eight of the UK’s top 10 national trading partners are in the EU, and you have a problem that can’t be swept aside by rhetoric and ambition. Proximity has always been important in the world of unit shifting, of course, but it’s also important in the realm of bits and bytes.
However, techUK’s upbeat panellists presented a different perspective: that the UK in fact has an opportunity to lead the reformation of data protection laws worldwide.
Most people think GDPR has served a useful purpose by helping to reset the balance between citizens and Big Tech – an example that other nations, and some US states, are following. That said, few think it perfect: GDPR is onerous and creates online friction, which consumers hate. The question is, how can data protection be made better, and more finely tuned to everyone’s needs?
Until last year, John Whittingdale, MP, was Minister of State for Media and Data. At the techUK event this week, he said:
There is a feeling that, having ‘delivered Brexit’, in the famous words, we have not actually done much with it since then. Data is one of the real areas where we have the flexibility to make changes to the UK economy.
[Regulations] deliver high standards of data protection and that’s not something the government wishes to dismantle. […] This is not about applying a torch to GDPR. However, there is a strong view that, in some areas, it is uncertain and vague, which has led to companies adopting an unduly cautious approach.
The need for simplification and clarity was the first ambition [of the government’s data consultation], to allow data to be used more easily for purposes that are in the interests of citizens and the economy – for example, scientific research and innovation.
In Whittingdale’s view, the UK can reach data adequacy with other countries more quickly outside the EU, as decisions no longer need ratification by 27 other nations. He added:
The new Information Commissioner, John Edwards [who took up the role in January], has come from a country [New Zealand] where they have data adequacy but different rules, and that is valuable experience we can learn from.
Dr Mahlet Zimeta is Head of Public Policy at the Open Data Institute (ODI), the organisation co-founded by Sir Tim Berners-Lee in 2012. She agreed that reducing the administrative burden of GDPR is important, especially for SMEs, but warned this is only one consideration:
If that short-term intervention has medium- or longer-term consequences in terms of trust in data use cases, then the result in the longer term will be reduced data available in the economy. So, we encourage policymakers to have a ‘theory of change’ approach. […] No one wants to see a short-term intervention, a surge of data availability, then it crashing down again and staying there for the next 20 years.
Trust and stewardship are essential in any consideration of changes, according to Imogen Parker, Associate Director for Policy at independent research organisation the Ada Lovelace Institute:
I suggest the government think about building the right data ecosystem for the UK and that absolutely means looking at the hard end of law and regulation – where a lot of the consultation does look. But it also means thinking about the practical tools that will make well-governed data effective. A really good example of that is around further mapping, development, and piloting of different models for data stewardship. For example, data trusts. Public trust is going to be essential to any aspiration to increase data-sharing. We're seeing an increasingly informed and engaged public […] asking questions about where their data is going, and what the implications might be.
The ODI’s Zimeta added that legislative change may not be the right way to effect transformation in the UK data economy. Citing an expert roundtable on health data, co-hosted by the ODI and the Wellcome Trust, she said:
The feedback we got from attendees, who were experts in managing research with health data, is that they were sceptical about legislative change. Instead, they had a preference for improved guidance.
They do not rely on broad consent as a lawful basis for research. They emphasise the importance of research ethics for data use and reuse. Basically, their research practices are not guided by law [primarily], but by what is considered good practice in the community of professionals and researchers. And that's an international community with international standards around research ethics. They said that behaviour is not going to change if UK laws change, because that is not how trust works in their sector.
On GDPR, she noted:
GDPR has done a lot and that’s impressive, but it also benefited from first-mover advantage. It began as being about consumer rights and individual rights, but those origins don’t help navigate social benefits or collective goods, such as public health or the environment. Its origin as an individual right means it's not ideally suited to the issue of collective rights and collective harms.
This is an interesting point, and one that may have been overlooked in policy discussions. She explained how the personal and the social are sometimes an uneasy fit:
My DNA, my genetic data, is data about me. But it's also data about my biological family. So, if I give my consent to my data being recorded, analysed, or published, then that data is going to disclose things about people who have not given their consent. So, the model of individual consent is not necessarily the right one for collective rights or addressing collective harms.
She then raised a critical point: equality is not a blanket issue in data protection; it needs to be approached in a more informed and nuanced way, one that addresses existing inequalities and biases:
Privacy harms are not equitable. Some communities are over-surveilled. Ethnic minorities might show up disproportionately in some kinds of data because of the nature of that surveillance, or the questions asked in data gathering, while being under-represented in other datasets.
Data breaches don't affect all communities equally: some might be particularly vulnerable to a breach. The data might reveal something about them that is stigmatised. So, it's a misconception to think that privacy harms are equally distributed across society. We need to think again about collective harms to communities and groups.
As to what the solution to this might be in policy terms, she offered:
The opportunity for the UK might be coming up with a data protection regime that can acknowledge collective rights and collective benefits – such as the environment, public health – to allow us access to data without having to think about individual consent each time, for the collective good. But the economic benefits from data availability must be inclusive, and they must be sustainable. Likewise, the economic benefits from innovation with data must also be inclusive, equitable, and sustainable. Otherwise, public trust will go. It will make it harder to have more innovation, and to work with data in the longer term, if people see data contributing to social and economic inequalities.
If the community isn't ready for [reforms] or doesn't have the capability to make the most of them, then they may not have the intended effect. We know that data capability is itself unequally distributed across society, and across businesses. The risk is that the new data availability might benefit, in the words of the great Billie Holiday song, ‘them that’s got shall have, them that’s not shall lose’. There’s a risk of market concentration.
On the impact of data availability, will everyone be able to make the most of the opportunity? I'm thinking about SMEs and the ‘levelling up’ agenda. I'm thinking about the sectors that might already face the biggest challenges in AI adoption. In some cases, what inhibits leaders might be a lack of confidence in working with the data that’s already available. So, let’s make sure they feel sufficiently supported and up-skilled.
And again it comes back to that core issue of trust, she added:
Public trust in data use cases can be supported with greater data literacy, and so understanding the duties, the rights, the obligations [is important], and understanding the redress mechanisms. Our research has shown that trust in data practices is not static. Conditions of trust are dynamic; they change over time. They are contextual; they depend on the use case. And they are relational, which means they depend on the history of the relationship and the power dynamic within it.
We don't think a one-size-fits-all approach will work or be sustainable. We think that the key to unlocking sustainable and trusted data availability is dynamic trust mechanisms. So, the real question is, how can you build something that's adaptive, which means it needs to be trusted – and trusted by many stakeholders?
My understanding of these reforms is that they are intended to be adaptive by being outcomes-based rather than prescriptive. But underneath that must be trust. And the ingredients of trust: public literacy, plus equitable, inclusive, and sustainable outcomes. It's got to be thinking long term when making short-term interventions.
Excellent points. Let’s hope the UK Government can show evidence of long-term strategic thinking on this topic – and, critically, that it can be trusted to put that thinking into action. The consequences of failing to do so could carry a high price tag.