UK introduces Online Safety Bill - but serious concerns remain
The Online Safety Bill aims to make the internet a ‘safer’ place for users, forcing big tech to introduce new measures. But huge concerns remain over how it will work in practice and the reach of the legislation.
The British Government is set to introduce its flagship Online Safety Bill in Parliament today. The Bill, which aims to create a safer online environment for users, has been through a number of iterations in recent months, with new measures introduced along the way, but there is still fierce criticism of how the legislation will work in practice.
The Department for Digital, Culture, Media and Sport (DCMS) has said that the Bill makes the internet a safer place for users, whilst holding tech giants to account. However, privacy campaigners are seriously concerned about the legislation’s reach, how it will impact freedom of expression online and the powers given to ministers to make decisions about what is ‘harmful’ online.
During the draft and consultation period of the Bill, there has also been significant backlash against the government’s attempts to use ‘online safety’ as a campaigning strategy to place limits on messaging services that use end-to-end encryption (such as WhatsApp).
However, the government’s primary focus with the Bill is to require social media platforms, search engines and other apps and websites that allow users to post their own content to tackle illegal or harmful activity.
Communications regulator Ofcom will soon have the power to fine companies up to 10 percent of their annual global turnover if they fail to comply with the law. Ofcom will also be able to force them to improve their practices and block non-compliant sites, DCMS said. How this will work in practice remains to be seen.
In addition, the government has announced that executives whose companies fail to cooperate with Ofcom’s information requests could face prosecution or jail time within two months of the Bill becoming law, rather than within two years as previously drafted.
One particularly challenging issue for tech companies is that the legislation will require any website that publishes or hosts adult content, including commercial sites, to put checks in place to ensure users are aged 18 or over. It’s unclear how the likes of Google or Twitter will enforce this.
Other new offences have also been added to the Bill that will make senior managers criminally liable for activities including destroying evidence, failing to attend interviews with Ofcom or providing it with false information, and obstructing regulators when they enter company offices.
News content and journalism will be exempt from regulation under the Bill, but social media platforms will be required to tackle ‘legal but harmful’ content, such as material relating to self-harm, harassment and eating disorders, categories which will be “set by the government and approved by Parliament”. There is concern, however, that Ministers will have too much power to decide what is ‘legal but harmful’.
But DCMS says in its release:
Previously they would have had to consider whether additional content on their sites met the definition of legal but harmful material. This change removes any incentives or pressure for platforms to over-remove legal content or controversial comments and will clear up the grey area around what constitutes legal but harmful.
Commenting on the introduction of the Bill, Digital Secretary Nadine Dorries said:
The internet has transformed our lives for the better. It’s connected us and empowered us. But on the other side, tech firms haven’t been held to account when harm, abuse and criminal behaviour have run riot on their platforms. Instead they have been left to mark their own homework.
We don’t give it a second’s thought when we buckle our seat belts to protect ourselves when driving. Given all the risks online, it’s only sensible we ensure similar basic protections for the digital age. If we fail to act, we risk sacrificing the wellbeing and innocence of countless generations of children to the power of unchecked algorithms.
Since taking on the job I have listened to people in politics, wider society and industry and strengthened the Bill, so that we can achieve our central aim: to make the UK the safest place to go online.
The Online Safety Bill is facing fierce criticism from the prominent privacy campaign group Open Rights Group, which has declared that ‘internet policy is broken’. Executive director Jim Killock has written a comprehensive breakdown of where the Bill fails, which is worth reading in full.
Killock writes that the Bill is getting “worse, rather than better” and that the UK now has legislation that is “much more dangerous” than it was six months ago, focusing on solutions that are likely to fail.
Highlighting how the Internet is being used by citizens during the current Russian invasion of Ukraine, Killock notes:
It is clear that the OSB would make it much harder if not impossible for social media companies to offer uncensored, unfiltered and privacy-preserving access to people in Russia and Ukraine. How would the UK’s attempts to undermine encryption, or force identity and age verification in this bill fit with attempts by Instagram to offer encrypted messaging or Twitter and Facebook providing Tor services to people living under intense censorship and surveillance?
Killock also highlights the UK’s poor track record when it comes to implementing internet-focused legislation, including the poorly thought-through Digital Economy Act of 2010 and the repeated failure to introduce age verification for adult websites (due to backlash against the idea).
The Open Rights Group is particularly concerned about the powers given to government within the legislation. Killock writes:
The Online Safety Bill proposed to ‘solve’ safety issues through a ‘duty of care’ that would address ‘legal but harmful’ content. The Bill has not managed to find a means to define what would fall under these concepts; instead, Ministers or the regulator will decide. Worse still, the emphasis on content regulation will again punish the minorities the Bill is meant to protect; whether through language and cultural barriers, or because sexuality is easily mistaken for adult content by algorithms, minority content is punished by automated content adjudication, which will be the primary result of this Bill.
I have serious concerns about the reach of the Online Safety Bill and how the government is using examples of ‘protecting children online’ to make sweeping reforms to how platforms and citizens behave on the Internet. Yes, of course children need to be protected, but does that mean powers should also be handed to government Ministers to decide at their own discretion what is ‘legal but harmful’ online? I don’t think so. There is a great deal of risk involved in that power dynamic.

I also can’t see how this will be practically enforced in any reasonable way - beyond prosecutions being used to make examples of people and companies. The scale of activity on the internet is just too vast.

In addition, the UK acting in isolation - rather than collaborating with allies on some sort of global agreement regarding regulation - leaves us wondering how seriously the tech companies will take this.