The UK should be proud of bringing its Online Safety Bill forward but should be “sufficiently humble” to recognize that it’s still a work in progress. So said Jeremy Wright, QC MP, former Secretary of State for Digital, Culture, Media, and Sport, chairing a Westminster eForum on the topic.
The Draft Bill’s novelty means there’s no international pattern to follow, and the government may not get it right first time. This may have unintended consequences, he warned.
So, what are the key themes of this complex, lengthy document? According to Ben Dunham, Associate Director of law firm Osborne Clarke, the starting point is to recognize what Whitehall is trying to achieve.
This is all about creating the right environment for protecting users of online services against harm. It's not about establishing the liability of online platforms for specific pieces of objectionable content.
In short, the Bill imposes a duty of care on internet services.
As such, it covers two main areas: social platforms, online marketplaces, dating apps, and other forums that disseminate content; and search engines that find and display results. Because search is in scope of the Bill, providers like Google must ensure that algorithms prevent exposure to illegal or harmful content before users can click on it.
The latest draft also covers non-user-generated porn, plus fraudulent advertising – a significant expansion of earlier texts. It calls out obviously illegal content, such as terrorist publications and images of child sexual exploitation and abuse, but also more nuanced material, such as information on assisted suicide.
The Bill covers harmful but legal content too. This may pose challenges to news services when reporting from war zones or on human rights abuses, particularly when images are accessible to children. Tensions may arise in establishing whether content is harmful in itself, or legitimately documents harm to others.
There are challenges in any Bill that has an all-encompassing remit, explained Dunham:
While it is generally thought that illegal content is the easiest to deal with, because there's more certainty about what is or isn't legal, in a user-generated context and at scale, there will inevitably be difficulties in establishing what's illegal in some cases.
For example, illegal behaviours such as fraud might be contextual and won't immediately be obvious. So, any decisions about whether to take down a potentially fraudulent post may involve looking at external factors beyond the content itself.
As if that wasn’t onerous enough, platforms will also have a duty of care towards freedom of expression and user privacy. As a result, compliance will need to evolve and recognize that providers may not get it right first time.
So, what do tech platforms make of the Bill? Ben Bradley is UK Public Policy Manager at TikTok, which has stolen YouTube’s and Instagram’s thunder since its launch in 2018. He said:
We believe that all companies should operate within a clear legal framework established by Parliament, and I think a systemic and proportionate approach to regulation, which looks at systems and processes rather than at individual pieces of content, is the right one.
This moves debate away from whether platforms should be regarded as publishers (and therefore be subject to the same rules). However, it’s fair to say that the core objection to publisher-style regulation remains one of scale, expense, and implementation for platforms, not that the idea is inherently bad.
TikTok launched in the wake of the government’s internet safety green paper. As a result, it has tried to ‘bake in’ safety by design, claimed Bradley. But he added:
We're always conscious that this is an evolving process. There isn't a finish line. And I think it's important that there's time for parliamentarians to properly scrutinize the changes and understand the Bill and its full implications.
A lot of concepts have been introduced that weren't properly debated or consulted on. And while I share the desire to move at pace, I don't think we should let urgency lead to bad elements of legislation making their way through. I think that's the worst outcome we could have. We also call for greater legal clarity and certainty. In many ways, there's still some way to go.
Problem areas include journalistic exemptions and democratic protections, he said, in an era when extremists might describe themselves as journalists when it suits their purposes.
The language in these clauses remains very broad outside of news publishers. We need to look at what the potential downsides or unintended consequences might be of these provisions. Do they risk being abused by bad actors who would claim these exemptions as protections to spread harmful content?
An excellent point. Another minefield is ID verification. While bots, fake accounts, and trolls pollute and distort public debate – for example, by faking support for extreme views – anonymity can sometimes be a boon. Some individuals may be unwilling to reveal their true identities because of fear of persecution, arrest, imprisonment, violence, hate, and/or discrimination in their home countries. Whistleblowers are one example.
Extremism itself can be a problematic concept. In some authoritarian regimes, gay rights, atheism, feminism, women’s rights, and support for some ethnic minorities are seen as extremist positions, though they are accepted and protected elsewhere.
The Bill refers to the UK, of course, but national solutions challenge platforms that are global – or seek to be – by fragmenting policy, design, and implementation. For example, Google’s pursuit of censored search for China led to an internal employee rebellion in 2018 and accusations that it was abandoning its values.
Even within the UK there are extreme views on a range of issues, which may persuade vulnerable users to hide their identities. To wade into any public debate on trans rights, for example, can demand bravery, with trans people often subject to abuse and hostility.
Many women feel marginalized and unsafe on social platforms, facing widespread misogyny, mansplaining, and unwanted advances from men. This week’s eForum featured an all-male panel, though the organization normally does a better job of advocating for and representing diversity.
There are other issues with verification too: one is age, with the risk of young people’s voices becoming (even more) marginalized. Another is the three million people in the UK who lack photo ID. Obliging platforms to filter content based on verified accounts may shut out noisy bots but also mute legitimate voices.
Facebook’s balancing act
The platform most often the focus of arguments about harmful content, misinformation, and democratic processes is Facebook. Richard Earley is UK Public Policy Manager at parent company Meta. He said:
Regulations are needed so that private companies like ours aren't making so many important decisions alone.
This comment might amuse anyone who remembers Mark Zuckerberg staring down the US Congress like a belligerent rabbit in the headlights of a timid, slow-moving truck.
We unequivocally share the UK government’s aim in making the internet a safer place and want this Bill to be effective in doing what it does. But it was already extremely complicated and, in places, introduced contradictory requirements. Over the last year, there have been new things added without detailed consultation.
[As a result] there is a risk that parts of this Bill could become unworkable or unimplemented. We must scrutinize this carefully and give our parliamentarians all the help we can in making it as effective as possible.
The impression that Zuckerberg’s giant bunny wants a carrot, not a stick, is hard to avoid.
Systemic and structural approaches are the future, so it's good that the Bill's heart remains in those. But the risk is that it will move too far away from its focus on systems and towards specific instructions. This could result in companies being asked to carry out activities that aren't effective.
While people may use the likes of Facebook or Google to find information or audiences, we must remember that they are giant advertising machines. This makes them nervous of any regulation that impinges on core revenues.
The addition of fraudulent advertising to the scope of the Bill [is an issue]. Clearly, we share the desire of the government to prevent the use of our services to carry out fraud or scams. But we already use a combination of technology, human reports, and partnerships to try to minimize the opportunity for people to abuse our systems in this way.
While that work is important, I do think that there could be a tremendous impact on the advertising industry in putting the entire online advertising system within scope of the Online Safety Bill.
Shock horror! But Earley then made a fair point:
There are three million small businesses in the UK that use our platforms and many of them rely on advertising to grow their businesses and connect with customers, particularly as we come out of the pandemic.
On the same day the government announced that advertising provisions were being brought into the Bill, they also launched an online advertising programme and said it was crucial that changes aren't made without broad consultation.
Mr Left Hand, meet Mr Right Hand. There are other challenges, he added:
How will platforms like ours handle situations where a poster shares content that contains a link to hate speech? How do we balance our obligation under the Bill to make sure we're protecting people from hate speech with our obligations to make sure that we're giving news publishers, for example, a fair say?
Interesting. Anyone would think that newspapers which push hateful viewpoints are a nice little earner, both for their publishers and for Facebook.
Alex Rawle is Head of Online Safety Policy at Google. In a presentation that focused on its proactive stance, he outlined some recent initiatives:
We've turned off autoplay for users under the age of 16. On YouTube, we've changed the default upload mode to private, and on search, we've ensured that SafeSearch is turned on for all users we understand to be under 18.
Such moves may have been triggered by the UK’s lead in this space. He continued:
Another crucial element is working in partnership on shared challenges. The first example is our work with Samaritans to ensure that [intervention] information is prioritized for users searching for content on suicide and self-harm. When that happens, we display a box at the top of the search results so that people can be directed towards help.
The second is our partnership with parents on the Be Internet Legends online safety programme, one of the most widespread media literacy programmes in the UK for seven- to 11-year-olds, which has reached over 70% of primary schools and over four million children. It's driving behaviour change: 83% of children surveyed said that they would behave differently online because of the programme.
Great news. However, Rawle noted that media literacy has been sidelined in the latest draft of the Bill, and Google would like to see it revived. But he concluded:
The Bill is focused on tech companies, and a lot of the narrative is around the rules that we must follow. But ultimately, if we don't get the balance right, it will be everyday users who will be impacted.
We want to ensure that the Bill doesn't leave UK users with an internet where they're less able to do things than counterparts in the EU and other democratic countries. The fact that we're moving faster in the UK will mean that our digital economy will be more heavily regulated than its counterparts.
Those remarks may trouble a government that supposedly hates red tape and favours light-touch, pro-innovation regulation.
However, quote of the day came from an unlikely source: the UK’s wannabe content giant that’s really a legacy telco with a bigger customer base than it deserves. Alex Powers is Director of Policy and Public Affairs at BT. He said:
I think we could listen to everyone’s contributions and play a game of corporate lobbying bingo: ‘unintended consequences’, ‘inadequate consultation’, ‘inflexibility’, and so on. The reality is we’ve spent five years going through this process.
Broadly speaking, this is totally the right approach. This isn’t an attempt to regulate every single bit of content on the internet. It's about the conduct of the largest companies that affect everyone's lives from day to day.
Sour grapes? Perhaps, but tasty.