Europe's legislators deliver emphatic 'Non!' to Facebook's vision of self-serving internet regulation

By Stuart Lauchlan, February 18, 2020
Summary:
Regulate me, says Facebook's Mark Zuckerberg, and here's how. You've got to be kidding, say Europe's legislators.

(Image: Věra Jourová meets Mark Zuckerberg)

Yesterday, highlighting Facebook CEO Mark Zuckerberg’s latest comments on the need for internet regulation, I noted:

There’s still an awful lot of deflection going on with Zuckerberg’s messaging. He’s saying all the right things - or what Apologist-in-Chief Nick Clegg thinks are the right things - but the proof will come when legislators do start to roll out regulations.

It didn’t take long for some of that to come to pass as the social media giant presented its own self-serving vision of what a regulatory regime should look like, only to be met with an emphatic ‘Non!’ from European Commission officials.

On Monday, Facebook issued what it called a discussion paper on regulation - Charting a Way Forward: Online Content Regulation - as Zuckerberg headed into meetings in Brussels with various EC representatives, only to find the document and its contents dismissed as “insufficient”.

EU Vice President for Values and Transparency Věra Jourová, one of the EC officials Zuckerberg met, issued a statement following their encounter in which she rejected the CEO’s by-now-familiar refrain that he and his firm need to be regulated for their own good. This, according to Jourová, is an abdication of responsibility:

Facebook cannot push away all the responsibility. Facebook and Mr Zuckerberg have to answer themselves a question ‘who do they want to be’ as a company and what values they want to promote. It will not be up to governments or regulators to ensure that Facebook wants to be a force of good or bad

EU Industry Commissioner Thierry Breton, who’s due tomorrow to unveil a list of proposed measures around AI use, was blunter when he spoke to reporters after a short meeting with Zuckerberg:

It’s not for us to adapt to this company; it’s for this company to adapt to us!

Soft touch 

So what was in the Facebook discussion document to provoke such an immediate dismissal of its contents? Leaving aside the inevitable long-standing accusations of inherent anti-US sentiment on the part of the EC and the irresistible political temptation of being seen to give Zuckerberg a smack, the most immediately obvious concern is that it’s exclusively couched in terms of ‘soft touch’ regulation.

That raises the concern that what’s being offered here is Facebook being seen in public to take a ‘lead’ on regulation, while the regulation itself is so weak as to demand little or no actual change on the part of the firm and its peers.

In a blog post, Monika Bickert, Facebook’s Vice President of Content Policy, says that the document is built around four crucial questions:

  • How can content regulation best achieve the goal of reducing harmful speech while preserving free expression?
  • How can regulations enhance the accountability of internet platforms?
  • Should regulation require internet companies to meet certain performance targets?
  • Should regulation define which “harmful content” should be prohibited on the internet?

She adds that Facebook has learned lessons that have coalesced into ‘principles’ that the firm believes need to be factored into any debate:

  • Incentives. Ensuring accountability in companies’ content moderation systems and procedures will be the best way to create the incentives for companies to responsibly balance values like safety, privacy, and freedom of expression.
  • The global nature of the internet. Any national regulatory approach to addressing harmful content should respect the global scale of the internet and the value of cross-border communications. They should aim to increase interoperability among regulators and regulations.
  • Freedom of expression. In addition to complying with Article 19 of the ICCPR (and related guidance), regulators should consider the impacts of their decisions on freedom of expression.
  • Technology. Regulators should develop an understanding of the capabilities and limitations of technology in content moderation and allow internet companies the flexibility to innovate. An approach that works for one particular platform or type of content may be less effective (or even counterproductive) when applied elsewhere.
  • Proportionality and necessity. Regulators should take into account the severity and prevalence of the harmful content in question, its status in law, and the efforts already underway to address the content.

Don’t get this wrong, Bickert cautions legislators and advocates of tougher regulation:

Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation.

In the discussion document itself, Facebook focuses on a number of problems that any attempt to regulate will run into.

(1) Legal environments and speech norms vary

In other words, different regional cultures and laws make it difficult (i.e. expensive) to comply, so companies just opt for a global - for which read US-centric - policy rather than go to the effort of meeting country-specific requirements:

The cross-border nature of communication is also a defining feature of many internet platforms, so companies generally maintain one set of global policies rather than country-specific policies that would interfere with that experience.

Underlying conclusion - too difficult.

(2) Technology and speech are dynamic

This one’s a doozy - the thrust of the argument here is that the same remark can be more or less offensive depending on the context in which it’s made, even if the thing being said is fundamentally offensive. Or as it’s worded:

Among these different interaction types, norms for acceptable speech may vary. Just as people may say things in the privacy of a family dinner conversation that they would not say at a town hall meeting, online communities cultivate their own norms that are enforced both formally and informally. All are constantly changing to compete and succeed.

Underlying conclusion - too difficult.

(3) Enforcement will always be imperfect

Speaks for itself: 

Even in a hypothetical world where enforcement efforts could perfectly track language trends and identify likely policy violations, companies would still struggle to apply policies because they lack context that is often necessary, such as whether an exchange between two people is in jest or in anger, or whether graphic content is posted for sadistic pleasure or to call attention to an atrocity.

Underlying conclusion - too difficult.

(4) Companies are intermediaries, not speakers

In other words, don’t blame us! 

Despite their best efforts to thwart the spread of harmful content, internet platforms are intermediaries, not the speakers, of such speech, and it would be impractical and harmful to require internet platforms to approve each post before allowing it.

Underlying conclusion - too difficult.

Do this...but...

Recommendations from Facebook include:

  • Requiring companies to set up “user-friendly” channels to report harmful content.
  • Requiring the regular release of enforcement data.
  • Having governments define what illegal content is.
  • Having governments require companies to hit specific performance targets.
  • Requiring companies to consult with stakeholders when making significant changes to standards.

But for every seemingly positive ‘recommendation’, there’s an accompanying set of caveats that appear to be there to deter radical change, such as:

Regulators could require that internet companies remove certain content—beyond what is already illegal. Regulators would need to clearly define that content, and the definitions would need to be different from the traditional legal definitions that are applied through a judicial process where there is more time, more context, and independent fact-finders.

Or:

The potential negative effects could arise from the way transparency requirements shape company incentives. If a company feared that people would judge harshly of its efforts and would therefore stop using or advertising on its service, the company might be tempted to sacrifice in unmeasured areas to help boost performance in measured areas. For instance, if regulation required the measuring and public annual reporting of the average time a company took to respond to user reports of violating content, a company might sacrifice quality in its review of user reports in order to post better numbers.

Or:

The potential for perverse incentives under this model is greater than under the procedural accountability model, because companies with unsatisfactory performance reports could face direct penalties in the form of fines or other legal consequences, and will therefore have a greater incentive to shortcut unmeasured areas where doing so could boost performance in measured areas. Governments considering this approach may also want to consider whether and how to establish common definitions—or encourage companies to do so—to be able to compare progress across companies and ensure companies do not tailor their definitions in such a way to game the metrics.

My take

As Breton put it:

It’s not enough.

This has the spoor of Facebook Apologist-in-Chief Clegg all over it. He was busy pimping the new document on Twitter yesterday as a sign of the company’s desire to work with legislators on the issue, while Zuckerberg himself was credited with an op-ed in the Financial Times for another spin of the ‘We’ll take a hit for the greater good of society’ messaging.

There are some valid questions aired in the Facebook document and I remain instinctively wary of politicians and bureaucrats rushing headlong into ‘something must be done!’ legislative frenzies.

But if the contents of the document are indeed to be seen as “the beginning of a conversation”, as Facebook styles it, it’s a conversation that’s got a long, long way to go - and one that will need to become a damn sight less self-centered and superficial if it’s to be anything more than another phase of the ongoing campaign to clean up Facebook's dirty brand.