While online safety laws are welcome, they are far from a panacea

Chris Middleton, February 26, 2024
Summary:
Internet safety campaigners are right to want to protect children and the vulnerable, but there is a risk that new laws might do as much harm as good.

(Image by Darwin Laganzon from Pixabay)

Online safety has always been a complex issue. While no one would claim that illegal content, legal but harmful content, extremism, conspiracy theories, disinformation, fraud, and other toxic material does not exist online – it obviously does – what can be done about it is an entirely different matter. So, here is a quick primer on some of the key challenges facing lawmakers and regulators.

On the face of it, the obvious answer would be to make both illegal and harmful content the problem of social platforms and ISPs. But with more than three-fifths of the world’s population (over 61%) using some form of social media, monitoring user-generated content demands a vast investment in human resources, automated systems, and Artificial Intelligence (AI). 

Take Meta: by December 2023, it had just 67,000 employees worldwide, down nearly 20,000 year on year. Meanwhile, its earnings and market cap have soared to record levels in the quarter since then. Most staff work in R&D, sales, marketing, advertising, tech, management, or basic admin, with many recently focused on the Metaverse. 

So, how can the remainder hope to monitor the posts of three billion people – even with AI? Even some of Facebook’s paying advertisers are off-the-shelf companies peddling scams and fake products, while anyone who runs a business page on the platform will be familiar with messages from accounts shilling malicious links and ransomware. Despite reporting these threats, my own experience is that the same problems persist, week after week: report one account, and another springs up to replace it.

Indeed, tackling illegal and harmful content increasingly means fighting fire with fire, because AI is contributing at least as much to the risk as it is to the solution. Deep-fake videos, photos, and audio are all on the rise, along with ever-more-plausible phishing and fraud campaigns. Increasingly, hostile actors will use the faces or voices of innocent people to commit crimes or to denigrate them online – as has already happened to Taylor Swift, for example.

Meanwhile, social platforms have long argued that they are not publishers in law, and are therefore not liable for the estimated 147 zettabytes of data that will be shared by users in 2024. They are merely providing a service, they claim.

The challenges of how to respond by other means – both technical and political – are legion, however. For example, the risk of online safety laws weakening or undermining some of the internet’s critical security features – such as end-to-end encryption, which enables secure transactions and communications – is real, however much we share lawmakers’ desire to stamp out illegal and abusive material. 

That said, the extent of the most troubling form of content online – child abuse imagery – is staggering. According to the Internet Watch Foundation, over a quarter of a million (255,588) webpages containing such material were found on the open internet in 2022, with each potentially containing anything from one to hundreds of photos or videos.

On a more reassuring note, this represents just 0.0005% of the estimated 50 billion webpages online. Even so, the scale of the problem is undoubtedly larger, with the dark web or darknet invisible to most users, and encrypted email and chat applications also widely available. 

For campaigners, police, and child protection professionals, therefore, the key issue is simple: each illegal image represents a child being harmed or in danger, and in need of protection from predatory adults (I say this as a survivor of childhood abuse).

Increasingly, these issues pose existential risks to providers who have no intention of enabling criminal behaviour. For example, encrypted chat platform Wickr recently shut down its app after it had been used to disseminate illegal content. Similar services still exist – Telegram, Session, and others – but it seems inevitable that they will succumb to the same problem, or be forced to breach their founding principles to tackle it.

Even so, the wish to communicate privately without being snooped on by providers, advertisers, and/or security services should not be seen as tantamount to a desire to commit a crime – which is one of the risks of ramping up public fear of encryption by casting it as the villain in a political drama.

Countless businesses, private individuals, and state employees may either need or want a secure communication channel – particularly if they are operating in politically repressive regimes, or dealing with privileged information. At the same time, many believe that the desire to catch criminals should not enable a universal right to snoop – to regard everyone with suspicion in order to catch the tiny minority of bad actors.

The question of ‘legal but harmful’ content online is equally tricky. For example, one internet user’s shocking or offensive discovery online might be another’s provocative artwork, or essential frontline journalism from a warzone or terrorist incident. Therefore, any desire to clamp down on legal content might mean some platforms restricting legitimate reporting or artistic expression in order to protect their most sensitive users. 

In general terms, restricting content that, however distasteful to some, does not break any law would also impinge on freedom of speech, expression, and association. And there is a fine line between a concerned, responsible government and a nanny state or repressive regime. That said, why should a child or vulnerable adult be exposed to posts that encourage self-harm, suicide, or eating disorders? ask campaigners. An excellent question.

So, how about restricting or labelling content based on verifiable age, as platforms such as YouTube already do – largely to prevent minors from viewing shocking or violent footage? It’s a workable idea, case by case and platform by platform, but not without its own problems. 

At scale, applied right across the internet, such solutions would introduce more and more friction to online services – akin to those endless cookie dialogues – and may force users to stay logged in via a major authenticating platform, such as Google, Facebook, Amazon, Microsoft, or Apple, in order to avoid them. But not everyone wants their every interaction, choice, and visit logged and shared with countless invisible advertisers, sponsors, and business partners. 
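
It is worth being clear about where the difficulty lies: once a platform holds a verified date of birth, the gating logic itself is trivial, as the minimal Python sketch below shows (the user record, field names, and age threshold are all hypothetical). Everything hard – and everything contentious – sits in obtaining and trusting that verified claim in the first place.

```python
from datetime import date

# Hypothetical record returned by an identity or age-verification provider.
# Obtaining and trusting this claim is where the real difficulty lies.
verified_user = {
    "user_id": "example-123",
    "date_of_birth": date(2010, 5, 17),  # assumed to have been verified elsewhere
}

def age_on(dob: date, today: date) -> int:
    """Return a person's age in whole years on a given date."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def can_view(user: dict, minimum_age: int, today: date | None = None) -> bool:
    """Show age-restricted content only if the verified age meets the minimum."""
    today = today or date.today()
    return age_on(user["date_of_birth"], today) >= minimum_age

# Example: an 18-rated video is hidden from this (hypothetical) under-18 account.
print(can_view(verified_user, minimum_age=18))  # False
```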

At the same time, securely and accurately authenticating users is a major challenge – one that carries its own risks, such as forcing users to divulge personally identifiable information. For some people – whistleblowers, anyone reporting from undercover, and others who may need to hide their identity for personal safety reasons – a legal obligation to reveal who they are might actively expose them to harm.

Meanwhile, authenticated IDs are not generally portable from one platform to another. What’s more, a user might happily authenticate themselves on Facebook or TikTok, for example, but not wish to do so in a gaming metaverse, chat community, or virtual world, in which they might, quite legitimately, be exploring other identities – for fun rather than criminal intent.

Again, the assumption that anyone who adopts a new or fictional persona – or a pseudonym – is doing so for illegal or suspicious purposes is a dangerous one.

This, then, is the dial of difficult choices that faces online safety campaigners and legislators worldwide. So, where should the needle point: more towards freedom of expression, or more towards protecting the vulnerable from the worst facets of human nature? As we have seen, they are not always simple choices.

No silver bullet 

These were among the issues debated this month at a Westminster eForum policy event on the UK’s Online Safety Act, which places new obligations on any technology platform that is accessible from the UK. In part one of this report, we looked at how children’s champions are doing excellent work to protect the vulnerable online – often in the wake of personal tragedy. But what about other online safety experts?

Julia Hornle is Professor of Internet Law at Queen Mary University of London. She explained that authenticating internet users by age is not as simple a solution as it might appear:

For harmful content generally [as opposed to illegal porn, for example] the challenge here is that, for many types of content, a simple distinction between minors and adults is not sufficient. For example, how do we tailor age groups in relation to memes, extreme ideologies, or sexual interactions between minors themselves? [She was referring to anyone under the age of 18, including those who are over the age of consent.]

So, what might a possible solution be? She said:

I wonder whether we could think of almost a driving test for the online world, in order to better understand – in relation to each child – whether they're ready to deal with certain challenges as they go through adolescence. 

The second challenge – and this is the elephant in the room, of course – is content moderation, and the slowness of taking down material, based on takedown notices and flagging systems. 

In the long run, there will certainly be a need for automation. Of course, we already have a degree of automation, including hash databases for images that have already been classified as child sexual exploitation and abuse. But the challenge will be to increase automation without affecting freedom of expression and freedom of speech through false positives – not to mention false negatives.
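
For readers unfamiliar with hash databases, the principle is simple: every image already classified as illegal is reduced to a digital fingerprint, and new uploads are checked against that list. Here is a minimal sketch in Python, using an ordinary cryptographic hash for brevity; the file paths and hash value are invented, and production systems such as PhotoDNA rely on perceptual hashes that survive resizing and re-encoding, which exact-match hashing does not.

```python
import hashlib
from pathlib import Path

# Hypothetical database of fingerprints of previously classified material.
# In practice such lists are curated by bodies like the IWF or NCMEC, and
# the hashes are perceptual rather than cryptographic.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # example value
}

def fingerprint(path: Path) -> str:
    """Return a SHA-256 fingerprint of a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_known_database(path: Path) -> bool:
    """Flag an upload whose fingerprint appears in the known-hash database."""
    return fingerprint(path) in KNOWN_HASHES

# Check every file in a (hypothetical) uploads folder.
for upload in Path("uploads").glob("*"):
    if matches_known_database(upload):
        print(f"Match found – queue for takedown and reporting: {upload.name}")
```

The sketch also shows why this form of automation is relatively uncontroversial: it can only re-identify material that a human has already classified. The false-positive and false-negative risks Hornle describes arise when systems try to classify new material automatically.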

Then she added:

The third challenge is the importance of effective and proportionate complaint procedures and the scaling of complaints, because, again, this is a problem that tech companies don't seem to cope with very well.

And there are two types of complaints, broadly speaking: those about content and accounts, but also complaints about having content and accounts reinstated, which are two sides of the same coin.

But the fourth challenge, I think, is fully understanding the providers’ point of view: what a duty to protect freedom of expression actually means in practice, from a legal point of view, because that means balancing potential harms with free expression. And that is difficult to do, because it doesn't just depend on the type of content itself, but also on the risk of amplification. 

Ultimately, therefore, it is not just about the content, but also about amplification, and the need to slow down, perhaps, certain types of content. And in terms of search, to de-prioritize certain search results. And that is about the algorithms, which are commercially guarded secrets. 

At some point, there will have to be a regulatory debate on how much the profit of these companies is more important than risks which stem from the morality of the content.
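
Her point about amplification and de-prioritization can be illustrated with a toy example: rather than removing borderline content outright, a recommender can down-weight it so that it no longer dominates feeds or search results. The scores, labels, and penalty factors in this Python sketch are purely illustrative – as she notes, the real algorithms are commercially guarded secrets.

```python
# Hypothetical posts: an engagement score plus a risk label from a classifier.
posts = [
    {"id": "a", "engagement": 0.91, "risk": "high"},    # highly engaging, but borderline
    {"id": "b", "engagement": 0.64, "risk": "low"},
    {"id": "c", "engagement": 0.58, "risk": "medium"},
]

# Illustrative down-weighting factors; real thresholds are not public.
RISK_PENALTY = {"low": 1.0, "medium": 0.6, "high": 0.2}

def ranking_score(post: dict) -> float:
    """De-prioritize risky content instead of, or as well as, removing it."""
    return post["engagement"] * RISK_PENALTY[post["risk"]]

for post in sorted(posts, key=ranking_score, reverse=True):
    print(post["id"], round(ranking_score(post), 2))
# Output: b 0.64, c 0.35, a 0.18 – the most engaging post no longer tops the feed.
```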

These issues are more pressing with some platforms than with others, she explained:

TikTok is something like 20,000 times more viral than YouTube, so this is very highly targeted content, and the risk is that this high degree of amplification defeats some of the online safety measures.

Abigail Burke is Platform Power Programme Manager at the digital rights and freedoms organization, the Open Rights Group. She made the point that online safety needs to work both ways: to protect children and other vulnerable groups, but also to protect everyone’s rights and freedoms online. 

That is a difficult balancing act, she acknowledged:

In June of last year, we coordinated a letter that was signed by over 80 civil society organizations, academics, and cyber experts from 23 countries urging the UK Government to protect encrypted messaging. Now, as the Bill has become the Online Safety Act, we've turned our attention to responding to the outcomes consultations for how proposals in the Act should be implemented.

So, what are the Open Rights Group’s outstanding concerns?

We see challenges for the Act’s implementation and its impact on human rights in a few key areas. The first is around free expression and due process. The Online Safety Act casts an incredibly wide net around content that must be removed. And it's important to us that guidance ensures that companies will consider and prioritize their human rights responsibilities around freedom of expression. 

Already, vulnerable and marginalized groups like activist, racial, or queer communities – plus people posting in non-Western languages – have experienced the highest rates of wrongful content takedowns, and are likely to be impacted by the increased amounts of content removed under this Act. 

Appeals are necessary safeguards, but they put the burden on users themselves to act, and they're not utilized in most cases. We'd like to see some more provisions or ideas around incentivizing companies to prioritize accuracy in their decisions. For example, Ofcom could implement a mechanism to penalize companies who make repeated, significant mistakes that impede freedom of expression. 

Or at least create a way for users to seek redress if their content is wrongly removed or their account mistakenly closed. Otherwise, companies are going to err on the side of ‘more content removed’ just to be safe, which means significant amounts of lawful speech and expression will be removed from the internet.

She added:

The second area is privacy. Protecting end-to-end encryption and user security is critical to us, and that will also improve children's privacy online. [Children have a right to privacy that is protected by the UN Convention on the Rights of the Child, the most widely adopted human rights convention in the world.]

We welcome a lot of the new measures to protect children – for example, preventing unknown users from befriending or contacting children on messaging apps. However, if encryption is weakened or client-side scanning is mandated on all platforms and devices, it will make communication insecure for everyone, as cybersecurity experts worldwide have attested. 

Many people in the UK and around the world rely on safe, secure messaging every day, including young people, activists, doctors, and journalists. Several companies, including WhatsApp, have said they will remove their services from the UK if encryption is impacted.

My take

So, there we have it. Online safety is not a simple problem, and there are no simple solutions. And another thing is clear: these issues are becoming increasingly politicized, and that may not make the internet safer for everyone. Indeed, it may actively increase harms for marginalized groups and minorities, and make the online world less tolerant of free expression. So, we must be careful not to break the internet for some in a quest to keep other vulnerable communities safe. 
