Online Safety Bill - what harms does it seek to prevent?

Chris Middleton, September 23, 2022
Summary:
There are some bold views from online safety experts, lawyers, press representatives, and Facebook parent Meta.

(Image of a man sitting in the dark at a laptop, by Robinraj Premchand from Pixabay)

It is often said that children are at growing risk online, and that risk is one of the key harms the UK’s Online Safety Bill seeks to tackle. Indeed, the signs are that the Bill’s focus is shifting towards protecting children, and away from addressing wider harms to adults. But what do we mean by risk to children?

As LSE’s Professor Sonia Livingstone explained in my first report from the Westminster eForum, children are not some amorphous group, but complex individuals whose online interactions have real-world impacts. As a result, merely targeting platforms that are aimed at children risks missing the dangers to them that exist on larger public platforms.

But it took Susie Hargreaves, CEO of the Internet Watch Foundation (IWF) – the UK organization that reports and roots out explicit images of children – to explain the full scale of the problem when it comes to illegal content. She said: 

Each year we have record years, and last year alone we took down 252,000 Web pages of child sexual abuse [CSA]. That’s millions of images.

The crimes we see online have a real impact on children offline. Just to give you an example, in the first six months of this year, we removed 20,000 instances of self-generated indecent images of seven to 10-year-olds, and that's growing each year. We are exceptionally worried. With that in mind, we really do welcome this Bill.

So, not only are adults exploiting children online, but the impact of such material is feeding back into children’s interactions with their peers at an ever-younger age. Worse, even young children may be mimicking adult material they have been exposed to online, whether by accident, criminal act, or peer pressure.

However, one of the challenges facing legislators is how to deal with this, especially when it comes to systems that are trusted by law-abiding adults. 

Encrypted private messaging is spreading, and most adults – and children – believe they have a right to private conversations; indeed, children’s privacy is protected by UN convention. Meanwhile, end-to-end encryption is vital for the functioning of a trusted, secure digital economy.

But there’s a problem with all this, explained Hargreaves. Among the IWF’s services is a hash list of unique digital fingerprints of illegal images. As soon as a messaging service becomes end-to-end encrypted, it is no longer possible to scan messages for known illegal material. Unstopped, any one of those images might lead to crimes against – and even by – children.
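To make the mechanism concrete, here is a minimal sketch of how hash-list matching works, under stated assumptions: the `KNOWN_HASHES` set and the two functions are hypothetical stand-ins, and a plain SHA-256 digest is used for simplicity, whereas real systems such as the IWF’s rely on perceptual hashes (Microsoft’s PhotoDNA, for example) that also match resized or re-encoded copies rather than only byte-identical files.

```python
import hashlib

# Hypothetical stand-in for a provider's hash list, such as the IWF's.
# In practice, entries are distributed to member platforms by the provider.
KNOWN_HASHES: set[str] = {
    # fingerprints of known illegal images would go here
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a digest of an image's raw bytes.

    Simplification: SHA-256 matches only byte-identical files; perceptual
    hashing (e.g. PhotoDNA) is designed to also catch near-duplicates.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_illegal(image_bytes: bytes) -> bool:
    """Return True if the image's fingerprint appears on the hash list."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

The sketch also shows why end-to-end encryption defeats the technique: the check needs the image bytes, and an end-to-end encrypted service sees only ciphertext, so a server-side scan has nothing to inspect.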

But she added:

The IWF is totally supportive of encryption as a technical solution, but we need to ensure that it's balanced with the necessary protections, and that those are in place for children.

No one would disagree with that. Another problem is a lack of detail from some quarters, she said: 

While we're supportive of Ofcom regulating the space, there's a real lack of clarity in its roadmap about how they're going to work with others to deliver effective regulation.

The roadmap is very helpful in setting out what steps it will take in the early days, but it says very little about how it will partner or work with other bodies that have significant experience in areas like CSA. We need to make sure that it does not make the situation worse, and that we work together, using the expertise of people in the field. 

The IWF has been working for 25 years in this area, and is held up as a model of good practice across the world. Indeed, the IWF sits at the heart of the UK’s response to tackling indecent images of children, and we are a large part of why very little such content is hosted in the UK.

Quite right. Of course, this reveals a much bigger problem: a UK-centric Act can only apply to UK-hosted content; it will do little to tackle dangers on a global scale, including on platforms hosted overseas but accessible from the UK, beyond serving as a model of good practice.

Harm to adults is another complex area, especially these days when high-profile commentators can drive clicks and engagement by attacking individuals online. Just look at the industry that has sprung up around baiting Meghan Markle, for example. Even some of the UK’s biggest newspapers do it, yet they are exempt from the Bill.

This is a thornier problem than it first appears. Giles Crown, joint Managing Partner at law firm Lewis Silkin, shared the example of an anti-abortion campaigner attacking a high-profile celebrity who holds pro-choice views: linking to a news article and a video of an abortion, publishing the celebrity’s address, then encouraging ‘pro-life’ supporters to protest outside their home. He said: 

These are the kinds of posts that platforms have to deal with every day. And this raises a number of questions around freedom of expression and harm. The first is, would this be covered by the Online Safety Bill at all? The problem is, there is exempt content. For example, content that is posted by a recognized news publisher, or which reproduces in full an article by a recognized news publisher, or which contains a link to a full article.

[Then] how would a platform work out whether a piece of content emanates from a recognized news publisher? Perhaps some parts of it would fall within that definition, and others wouldn't. But how does a platform go about making that assessment? 

A platform then has to rapidly work out whether content is illegal, possibly a breach of the peace, harassment, or incitement to commit harm. If so, are there parts of that content which are illegal, and parts that aren't? Could a social platform legitimately edit a user’s content? And would it expose the platform to civil or criminal liability if it didn't? 

This is an extremely challenging set of criteria that platforms need to go through when considering specific content.

Excellent points, and it is notable that a senior lawyer doesn’t have the answers. 

A free press?

So, what do newspapers make of this debate, given that – on the face of it – all it takes to sidestep the Online Safety Bill is to claim a journalist’s exemption? 

Sayra Tekin is Director of Legal, Policy and Regulatory Affairs at the News Media Association (NMA), which represents roughly 900 titles in the UK. She pulled no punches in defending journalists’ right to say whatever they please. Tekin said: 

The key focus in protecting news publisher content is, number one, to ensure that current protections within the Bill are not diluted. Number two, to ensure that neither platforms nor government are the ones that moderate, or in any way regulate through the back door, news publisher content. And finally, that there aren't any barriers to news publication.

There are ways in which news publisher content, and journalistic content more broadly, is protected. The first is that news publisher content is exempt from the new online safety duties. This means that platforms and tech companies are under no legal obligation to apply their new safety duties to recognized news publishers. It also means that platforms are disincentivized to remove news publisher content. 

The criteria are objective to ensure that Ofcom, as backstop regulator, does not become a regulator for the press. So, it does not make qualitative assessments as to who is, or is not, a recognized news publisher. 

The second point is that there is already accountability. There is a whole suite of oversight and accountability for the independent press, and editorial codes of conduct that publishers are already subject to. Therefore, we do not need an additional layer of regulation via platforms or the government. 

Platforms will need to have systems in place to ensure that they take account of the importance of freedom of expression. And finally, news publisher content is protected in below-the-line comments. So, not only is news publisher content not regulated when it's published on Facebook or other platforms, but the Comments sections on news publishers’ sites are also outside the scope of regulation.

A robust defence and, in a democracy, few would disagree with the critical need to protect a free and independent press. 

That said, it’s fair to say that – in a world governed by clicks and social shares – more and more commentators are stretching the definition of journalism to breaking point, hounding individuals in a race to the gutter for engagement. The irony is that many of the eyeballs they attract may be bots, fake accounts, and troll farms that amplify and encourage extreme views. And, as has been observed before, there is a world of difference between the public interest and what is merely interesting to the public.

Regulating the platform giants

So, what does one of the big beasts of the online world – Meta, parent of Facebook, Instagram, and WhatsApp, platforms that sell people to advertisers – make of all this?

Richard Earley is Public Policy Manager at Meta. He said:

As a company, we have long been calling for regulations like the Online Safety Bill to set high standards on the internet. We've got strict rules ourselves, which we enforce in a variety of ways, about what people can and can't use our platforms for. But we think regulations are needed because it shouldn't be for companies like us to be making these decisions alone.

A cynic might recall Mark Zuckerberg’s combative appearance before the US Senate a while back and reach for the Shocked Face emoji. But Earley continued:

There is an awful lot to welcome in the Bill. But I think there are a number of areas where it’s clear it has deviated from its original intention. 

The first concern is the Bill's impact on private messaging. If we look at the original online harms white paper, it recognized that public social media is very different from private messaging. But in the final Bill, we see those two lumped together as a single user-to-user service, or category of service. This is not how they work. 

We use a lot of proactive tools to find potentially violating public posts that might be harmful and remove them. In fact, our transparency report shows that in the most severe cases, we find and remove more than 99% of the harmful posts we see. 

On private messaging services, like WhatsApp, we focus more on preventing harm from happening in the first place. […] And we also look at parts of the service like group names or group images to find things like child exploitation groups, and we remove more than 300,000 every month just using those services.

Staggering statistics. Then Earley added:

But the latest amendments to the Bill go even further in these requirements, without any consultation. They seem to suggest that companies will have to find ways to prevent any user of a private messaging service from encountering some forms of harmful content. And the only feasible way to do that effectively would be to read everyone's messages!

The next issue concerns journalism and publishers, he said:

We want to support publishers in using our services in a safe and effective way. But our concern is that these exemptions are really too broad. They prevent us from making some determinations about illegal or harmful content at the speed that is needed. 

We're worried that the journalistic content protection will just slow down our ability to act on potentially harmful posts. We already see bad actors around the world trying to disguise themselves as citizen journalists to get around our rules. This happened with [English nationalist agitator] Tommy Robinson a few years ago. 

And we can be sure that as soon as these new protections come into place, many more of these actors who use our services to commit harm will find ways to align themselves, just enough, with the definition of a news publisher to benefit from the additional protections in this Bill. 

We recommend removing these exemptions entirely. If not, the government or Parliament should create a list of who the recognized news publishers are, so that we know exactly who we need to protect.

My take

As noted in my previous report, any official government list of journalism outlets would be a concept so open to error, abuse, and favouritism that it would make a mockery of itself. But what’s clear from this debate is that the Online Safety Bill is a bold attempt to find a path through a minefield. All we can say to the lawmakers is: good luck. 
