Internet regulation - What form should it take?

By Chris Middleton, March 21, 2019
Summary:
In the second of two reports from the Westminster eForum debate on online regulation, Chris Middleton hears some suggestions for the future.

 


Regulation has to be about actions – about what people actually do, not their speech or beliefs, according to Baroness O’Neill of Bengarve. The Chair of the second half of the Westminster eForum debate this week on regulating the internet – which explored the practical forms this could take – described herself as “a philosopher by trade, and a cross-bench peer in the afternoon”.

A question occupying her mind on a chilly morning in March was how to define a political advertisement in the era of social platforms. We can prevent commercial companies or political parties from breaking the law with such advertising, she said, but not private individuals, nation states, or their security services. Why?

Likewise, there’s no offence of deliberately lying in public office, allowing politicians to capitalise on viral reporting, memes, and Likes without fear of legal consequence – a point I put to her, to which she nodded in apparent agreement (but refrained from commenting).

Missing Santa Claus

Dr Victoria Nash, Deputy Director and Policy and Research Fellow at the Oxford Internet Institute, outlined six key principles for regulation – “a Santa Claus wishlist”, she said, in the absence of the government’s Internet Safety Strategy white paper. (It’s still unpublished at the time of writing.)

First, platforms function as public spaces. As a result, globally stated human rights should be front and centre of regulation, while protecting people’s right to participate. Second, she believes there should be a renewed commitment to safe harbour: “good regulation” that allows companies to take risks, and protects those that act in good faith. She explained:

“Well-balanced immunity or safe harbour are vital if we want responsible corporate behaviour.”

Third, we should provide a clear and detailed framework that minimises the judgement companies use in deciding whether to remove content. Nash said she would prefer judgements that concentrate on illegal rather than “legal but harmful” content, but stressed that it’s people’s behaviours that need regulating, more than the content itself.

Fourth, laws should “incentivise the performance of due process” (make it easier, not harder, to act) rather than reward haste or the sheer volume of decisions. Fifth, platform users should be empowered to protect themselves through their own actions and choices. And sixth, systems are needed that can hold both industry and government to account. She added:

“I don’t have the belief that government will always act in the public interest.”

As a chaotic Brexit gathered force in the capital, political leaders stormed out of meetings, and the Prime Minister blamed everyone but herself for a national cataclysm, many in the audience doubtless agreed.

Is the problem big or small?

But when we talk about internet ethics and regulation, we tend to put a handful of massive companies in the frame: Facebook, Google, Amazon, Twitter, Microsoft, Apple, et al. And as the first half of the conference lamented (see my separate report), we fret about the gravity these massive objects exert, as they bend the fabric of the internet around them.

The problem is that this disregards the countless smaller platforms out there, with which regulators also need to build trust and relationships, and which may host reams of harmful or illegal content. That said, an unintended consequence of regulation could be to make all platforms conservative and cautious, said Nash.

Another is that introducing friction would push users towards services that offer instant gratification and are less likely to be managed properly – a point made by Hugh Milward, Microsoft UK’s Director of Corporate, External and Legal Affairs.

Daniel Dyball is UK Executive Director of the Internet Association, an organisation that brings vendor muscle to the debating table – he wasn’t wearing a Homburg, but its presence was implicit.

Dyball proposed his own six-point wishlist for regulation: it should be targeted at specific harms using a risk-based approach, he said; it should provide flexibility to adapt to changing technologies, services, and societal expectations; it should maintain the intermediary liability protections that enable the internet to deliver benefits to consumers, society, and the economy; it should be technically possible to implement in practice; it should provide clarity and certainty for consumers, citizens, and internet companies; and finally, it should recognise the distinction between public and private communications – an issue made more difficult by Facebook, as we will see in a moment.

The Euro angle

Europe certainly regards the big platforms as inherently anti-competitive, but one of the reasons they’ve proved challenging to regulate is the difficulty of defining what they are. Are they publishers, for example? Google and Facebook insist not, while employing heads of news and other suspiciously publisher-like roles.

Were they legally defined as publishers, it would be easier to address the appearance of inflammatory or defamatory material on them. It would also make it harder for them to siphon advertising revenues from other publishers’ IP (while telling them it’s good for exposure, like any big brand demanding people work for nothing). Perhaps the only meaningful difference between Facebook and a publisher is its billions of contributors.

Amazon wants to stamp its smile on all human activities, but frowns at the concept of paying tax in any of them. But of course, such ethical debates are not just happening outside the internet giants, but also within them.

Last year, Google’s statement of ethical AI principles was spurred by employee rebellion over its participation in the Pentagon’s Project Maven. Weeks later, another rebellion flared up over Project Dragonfly, Google’s development of a censored search offering for China.

The outcome of that is unknown, but China’s 1.4 billion people and Beijing’s megabucks hothousing of companies such as Baidu, Tencent, and Alibaba are apparently too big a commercial opportunity for Google to resist – its own ethics be damned. This week, by the way, Google was fined €1.49 billion (about $1.7 billion) by the EU for a decade of anti-competitive practices in search advertising.

Meanwhile, Microsoft came out in support of data privacy and GDPR last year, and in favour of regulating facial recognition systems. Yet it was mired in controversy over its own government contracts, specifically its relationship with US Immigration and Customs Enforcement (ICE), when children were caged and separated from parents at the Mexican border.

With multibillion-dollar Pentagon cloud contracts up for grabs, the context for ethical discussions becomes ever more complex.

And of course, Facebook has been talking up data privacy in a business that’s built on intrusion, or at least on encouraging voluntary disclosure. Young people no longer see Facebook itself as important – because their parents use it – and many have decamped to platforms such as Instagram and WhatsApp, both of which Facebook owns.

Recently, Facebook has begun pulling all the strands of its business together into a single encrypted whole, undermining the vital distinction between private communications and public statements – in what it intends to commercialise, and in the data it sells to its partners. And as the world’s biggest AI testing ground – bigger even than China – what it chooses to do affects billions of people.

Adam Kinsley is Director of Policy at Sky. He made the point that digital companies’ trendy obsession with aphorisms such as “fail fast” and “move fast and break things” is the exact opposite of safety by design. Indeed, it’s simply a recipe for the world being full of broken things that lawmakers have to sift through, trying to work out which pieces belong where (my own observation).

What we currently have is a system of crisis responses, said Kinsley. Large private companies – which are loyal to shareholders, not taxpayers – are acting as internet gatekeepers in the West, as the Chinese government is in the East. But internet platforms’ chance to self-regulate has passed, he said, and regulation should be “a floor, not a ceiling” to responsible action.

Warnocking at the door

Milward stressed Microsoft’s support for regulation, and suggested the 1984 Warnock enquiry into human fertilisation as a possible model – a coming together of experts from different walks of life to debate critical issues, in a safe space away from the public gaze. But while Microsoft, Google, Apple, IBM, and others are already debating the future impact of AI, for example, what’s missing is effective regulation of current behaviours online – as the terrorist attack in New Zealand sadly illustrated.

Rachel Coldicutt, CEO of Doteveryone, countered that waiting for a list of harms to be agreed by government and industry is naive. In a moving speech, she said there is a need to create a means for the public to seek redress right now.

We expect a small number of regulators to do a colossal amount of work in fixing these problems, she explained. But all of this is meaningless if there isn’t a debate with the public and a means for them to express their concerns – other than the social platforms themselves, of course. It’s not just about education, she added: many people have experienced harm and yet have no one to turn to.

For Tony Stower, Head of Child Safety Online at children’s charity the NSPCC, this was the critical issue too. The inaction of social media providers is actively fuelling harm to children, he said, while young people themselves often prioritise popularity over safety.

More than two thirds of child-grooming offences happen on mainstream platforms, he explained, contradicting Microsoft’s view that much of the real harm to people takes place in marginal online communities.

In the meantime, platforms are selectively revealing information in order to stave off regulation, said Stower. According to him, Facebook told the NSPCC that it would not release information about suspects unless it was forced to do so – but was presumably happy to sell it to advertisers. As a direct result of that intransigence, the NSPCC is now calling for regulation. He explained:

“We want a strong regulator to hold platforms to a duty of care.”

My take

As ever with the Westminster eForums, this half-day event hosted two debates and focused on a broad range of issues concisely and effectively. Hopefully, its findings will be reported upwards to government.

As I said in my first report, the core issue is friction: how much friction can a social platform be forced to accept before people stop using it?

This, really, is the challenge of the digital world itself: it has taught all of us to expect something for nothing – apart from a payment in private data. And it has taught us to receive that something instantly, without consequences to ourselves or to our fellow humans, including content creators. Largely so that Company X can make us look at advertising – arguably, a relationship that holds consumers in contempt and regards them as mere product.

It has made all of us lazy, and persuaded us to value speed, surface, and noise above slowness, depth, and signal. We live in a world of TL;DR and yet are overwhelmed by information bites whose veracity is impossible to verify without making an effort. But the platform discourages that effort.

To those of you who have read this far, our industry believes we must now take action and regulate. Do you?