Digital identity - should bad behavior be flagged in reusable IDs?

Chris Middleton, March 19, 2024
On the face of it, digital identities would solve a broad range of problems by authenticating people online, minimizing fraud, criminality, and fake accounts. The reality might be rather different, a policy conference found.

(Image by ar130405 from Pixabay)

In my previous report on digital identity from last week’s Westminster policy eForum, speakers explained how any concerted move towards adopting a ‘one size fits all’, reusable digital ID would risk excluding a broad range of vulnerable groups, while perhaps locking in and automating economic and social privilege. 

Such a move would also risk creating an ‘us and them’ environment in society at a time when the dominant political discourse in some nations is increasingly one of distrusting immigrants, minorities, and anyone who is ‘other’. My first report shared examples of ID schemes tilting towards such outcomes around the world, even as the UN’s Sustainable Development Goals include the well-intentioned aim of giving everyone some form of legitimate identity by 2030.

Once again, the specter of “the innocent have nothing to fear”, the phrase famously used by Conservative politician Michael Howard in the 1990s, raises its head. Why? Because those words often fail to ring true, even in mature democracies – the Windrush scandal in the UK being just one recent example. 

But there are other, more nuanced and practical risks at play in any move to adopt a reusable, portable digital identity system. These include: what if the data about any individual is flawed, incorrect, or refers to someone else of the same name? And, as a result, what if a marker is wrongly attached to their name and becomes associated with their identity online? What remedy and redress would be available to them?

Another problem would be just as serious: the risk of exclusion from essential services. What if an individual is unable to obtain an authenticated digital ID in the first place, because they lack official documents in the real world? The most recent data from the UK’s Office for National Statistics says that 17% of the population of England and Wales do not hold a passport, while 28% of citizens over the age of 17 do not have a driving licence.

So, needing an ‘official you’ when visiting every website, app, and platform (including government services, such as tax and healthcare) would be a very different thing to simply signing in to an account, or into Facebook, Apple, Amazon, or Google, every time you shop online. At least, that was the view expressed in some presentations to the eForum.

But there is another dimension to this, observed some speakers: might other members of society find their pasts – rather than their lack of legitimate documents, for example – following them everywhere they go online, even in situations where it might not be relevant or appropriate?

Take one extreme example. Should someone’s criminal past – or present – be stored and publicly flagged in every online visit? This was the flashpoint question raised by a couple of presenters at the eForum. Chair Ros Smith, Senior Technologist for Digital Identity at telecoms regulator Ofcom, both asked and answered the question, before handing the poisoned chalice to other speakers.

She said:

How do you balance inclusion, human rights, and the protection of individuals through the use of digital IDs? 

For example, someone who is on the Sex Offenders Register could be highlighted when they join a social platform or forum, so that other users could see that this person is not someone to trust or talk to. That example would rightly be seen as protecting a teen, a child, online. But [with a single digital ID] it might also put an identifier on that individual [that appears elsewhere].

Chad Chandramohan, Chief Technology Officer of digital inclusion charity the Good Things Foundation, picked up the theme, saying:

Yes, it’s a very difficult example from the perspective of an identity service. One of the worries [about reusable, comprehensive digital IDs] that has been expressed in policy discussions has been this data-sharing element.

The example given here is clearly a valid one, where you would need to have those markers in place [to protect children online from predatory adults]. But if you have an underlying identity service, which starts to carry some of these markers too [to other locations]… 

For example, should HMRC see that specific marker in relation to someone’s tax information? One might argue not, because it's simply not relevant. But should a platform that's much more of the kind that we're talking about [a social platform] see that marker? Absolutely. 

So, there are certain use cases where it is absolutely valid to see those markers, but others where it is not. And that's one of the difficulties of creating a shared digital identity service, where all this information is potentially available to everyone.
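The distinction Chandramohan draws, markers that are valid to show in some use cases but not in others, amounts to a per-relying-party disclosure policy rather than a single shared record. A minimal sketch of that idea follows; the marker names and service categories are entirely hypothetical, not taken from any real scheme:

```python
# Toy selective-disclosure policy: which categories of relying party
# may see which identity "markers". Hypothetical names throughout.
DISCLOSURE_POLICY = {
    "safeguarding_flag": {"social_platform"},       # relevant to social platforms
    "fraud_conviction": {"tax_authority", "bank"},  # relevant to financial services
}

def visible_markers(identity_markers, relying_party_category):
    """Return only the markers this category of service is allowed to see."""
    return {
        marker for marker in identity_markers
        if relying_party_category in DISCLOSURE_POLICY.get(marker, set())
    }

markers = {"safeguarding_flag", "fraud_conviction"}
print(visible_markers(markers, "social_platform"))  # {'safeguarding_flag'}
print(visible_markers(markers, "tax_authority"))    # {'fraud_conviction'}
```

The design point is that visibility is decided per use case at disclosure time, rather than every attribute being "potentially available to everyone" as the speakers warn.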

Susannah Copson, Legal and Policy Officer at privacy and digital rights organization Big Brother Watch added:

This is a really difficult tension. Do HMRC, for example, need to see that kind of data? Probably not. So, it is more about the way that information might be shared more broadly, especially sensitive personal ID information. Who it is shared with, how it is shared, and whether that is necessary in some contexts. It’s a really difficult one.

Indeed. The question of whether one type of criminal should be flagged everywhere they go via their digital ID is a clever one, because on the one hand it plays to the public’s natural desire to see abusers caught and children protected. Why should such people have their privacy protected? 

But on the other, it is easy to see that it would be a short step from there to every type of criminal or antisocial behaviour being flagged in someone’s digital ID, and from there to everything else about them being stored as well: their finances, membership of organizations, beliefs, ethnicity, sexuality, and so on. 

In other words, how much data about an individual would be too much for a reusable digital ID to hold? And who should have access to it, and why? Stray too far down the path towards storing everyone’s movements, behaviour, and history, and you arrive at absolute surveillance, or a points system based on badges of dishonor and demerit. (And once again, what if the data is wrong, or refers to someone else? What if the system fails?)

Chandramohan added:

If we start adding markers to digital identity, then what if someone has been convicted of fraud or something, and there is a marker on there about that? In that example, HMRC would want to know about it, because they would want to pay more attention to it. But then you get to the Big Brother thing. 

And, of course, if the marker is wrong, then how do I rectify my data? Say if my identity has been reused in a lot of places, but some mistake has been made and a marker has been wrongly attached to my identity. And that instantly starts being shared – not only with the government, but also with commercial organizations, and I don't have an easy way of correcting it.

He added:

Quite often, as services become digitized, the ability to correct them becomes harder and harder. So, if the pervasiveness of this doesn't have checks and balances in place, then it could really start damaging people, and damaging families.

The technology is actually fairly straightforward. So, it's the controls and the legislation, the practices and the checks and balances, that become vital.

A digital divide

In a hypothetical future in which having an authenticated digital identity becomes mandatory when accessing essential services, another key check and balance would mean ensuring that every citizen – not just the majority – is both comfortable and confident in the digital world. 

This, too, may be a problem. Various sources suggest that over 1.5 million people in the UK may be completely offline – more than 2% of the population. As Great Britain has the sixth-highest internet penetration on the planet, those statistics are likely to be worse in most other countries. Meanwhile, Ofcom’s 2023 Technology Tracker reveals that 7% of the UK population does not have an internet connection at home.

Big Brother Watch’s Copson noted:

It is really important to promote digital literacy – and there are fantastic efforts out there to do that. But adjacently, that's why we need to protect non-digital ID methods as well. For people who aren't necessarily up to speed and don't have the capacity to engage with that kind of education – or may not want to – we should support them in using alternatives. 

It's something for which we're seeking to get the legal rights established. Because it helps people use the services in the best way for them. And that's what is most important, I think.

Quoting comments posted in the online event’s chat, Ofcom’s Smith added:

Many people want to maintain communal ties [rather than solely access essential services online]. They want to have human interaction with people in their neighbourhoods and communities – especially for many elderly people, ethnic minorities, women in abusive relationships, and all sorts of other demographics. 

[Using a High Street service] may be the one time in the week they get to interact with other people. So, we shouldn't underestimate the value of human connection and the importance this holds for people.

Chandramohan responded:

Choice is really important. If we don't have the choice of other means [of proving identity and accessing services], then we're going to start excluding people. 

And we're probably going to end up excluding the very people who least need to be excluded. Because in the areas that we deal with, for example, we're starting people on their journeys to jobs and, essentially, being able to live life and take part in their communities.

Another challenge with any move to introduce comprehensive digital ID systems would be their safety and security. Angus McFadyen is Partner with law firm Pinsent Masons. He said:

Increasingly, what we're talking about is reusable identities. And the complexity and privacy concerns that come up there are broader than single use cases, but they also can vary quite dramatically by different products. 

So, there are digital identity solutions out there that have more limited purposes. For example, some are focused on proving age. Those will have more limited privacy concerns than those systems which are designed for more general use, where you might have a wider range of attributes that are available and held within the digital identity products.

Any abuse of personal data can lead to financial, moral, and in some cases, physical harms. So, data protection, and privacy by design and by default, will be key in the UK.
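McFadyen's contrast between narrow-purpose products (such as age checks) and general-purpose identities can be made concrete: an age-focused credential releases a single derived yes/no attribute, rather than the underlying birth date or any other raw record. The sketch below illustrates that idea under assumed, illustrative data; it is not the design of any real product:

```python
# Illustration of purpose limitation: derive one boolean (over a given age)
# from a hypothetical identity record, so the relying party never receives
# the date of birth itself.
from datetime import date

FULL_IDENTITY = {               # hypothetical general-purpose record
    "name": "A. Person",
    "date_of_birth": date(1990, 5, 1),
}

def age_over(identity, years, today=None):
    """Return True if the holder is at least `years` old, releasing only that fact."""
    today = today or date.today()
    dob = identity["date_of_birth"]
    had_birthday = (today.month, today.day) >= (dob.month, dob.day)
    age = today.year - dob.year - (0 if had_birthday else 1)
    return age >= years

# An age-focused product hands the relying party only this boolean:
print(age_over(FULL_IDENTITY, 18))  # True for this sample record
```

A general-purpose identity product, by contrast, would carry the wider range of attributes McFadyen mentions, which is exactly why its privacy concerns are broader.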

My take

An important debate, and one that reveals that any headlong rush towards mandating portable, reusable digital IDs to access services carries risks – risks that demand consideration at the design, policy, and implementation stages, and not a mess of partial remedies after the fact.
