Facebook and Twitter defend their monitoring of abuse towards MPs

SUMMARY:

Social media giants Facebook and Twitter have defended their approach to monitoring abusive content on their platforms in evidence to government.

The UK’s snap general election earlier this year was incredibly divisive, more so than any in recent history. Politics in the UK is no longer driven by left or right wing divisions, but by regional differences, socio-economic demographics, age and, of course, Brexit.

The media and politicians did a good job of stirring up hysteria amongst voters, and the result was incredibly vitriolic campaigning in the run-up to election day.

Unsurprisingly, much of the discourse – and hate – happened online, on social media platforms such as Facebook and Twitter. Some MPs were targeted with inexcusable harassment and abuse, which would cause anyone great distress.

As a result, social networks have come under the spotlight over what they are doing to protect users against online trolls. This is a wide-ranging debate, and the nuances of what social networks can and should do continue to be argued over by legislators, politicians and the public.

However, in response to the election, Facebook and Twitter were called upon by parliamentarians to give evidence about what they are doing to protect users – especially politicians in the public eye, running for office. The results were quite interesting, particularly the differences between the approaches taken by the two companies.

It’s also worth noting that Google gave evidence too, but its responses were thin on the ground compared to those of Facebook and Twitter, so I’ve chosen to exclude them from this article.

Some of the differences between the social networks include: the use of technology versus people to moderate, the need to identify and authenticate users, and what each is – and isn’t – willing to take down.

I’ve gone through the lengthy evidence sessions and picked out some of the more interesting points.

Working with government

What was clear from the evidence is that both Facebook and Twitter are highly engaged with MPs, government and law enforcement to help ensure the platforms are used appropriately and that users know how to protect themselves. The involvement of the social networks in the run-up to the election was particularly interesting.

For example, Sean Evins, Head of Europe, the Middle East and Africa, Government Outreach at Facebook, said:

If you are running for office, in the case of the UK election what we did is design a multi-faceted approach to this election – obviously it came up relatively quickly – but the focus here is how can we work with the candidates, how can we also work with the government and the new government, on understanding how to use Facebook well to reach the people they want to. We helped foster the two-way communication that happens on Facebook all the time.

What we did in this case for this election was go local and regional. So we trained people here in London, but for the first time we actually went outside of London for an election and did eight different trainings on a local and regional scale, with over 5,000 candidates and campaigns and party officials. We hosted multiple webinars to help educate them not only on best practices and understanding new tools, as Facebook continues to evolve, every week we make a new innovative…

When you’ve got candidates involved, there is a chance that some will become government officials the day after the election, so we want to make sure that everybody is on a level playing field and that they understand how to use the platform effectively and well and to the fullest extent they’re looking for. What we’re not are campaign advisors – my team does not go in there and say this is how you win an election. What we do is present the same level of best practices whether you’re running for a local council or whether you’re running for an MP, and it’s basically just an extra set of eyes and hands.

Evins added that whilst a lot of the training generally focused on ‘Facebook 101’, it also included multiple pieces related to safety.

Meanwhile, Simon Milner, Policy Director, UK, Middle East and Africa at Facebook, added that over the course of the election, candidates were given a quick-response route to get abusive content taken down. He said:

In the context of an election, especially in the 72 hours around election day, we have certain tools we can turn on if, hypothetically, a candidate is receiving threatening comments. Anything related to urgent candidate issues in that 72-hour period happened only a handful of times, but we have dedicated support channels for big events and things like that, which will be responsive in 15 minutes.

This was also the case for Twitter, where political parties had a direct line to getting content removed. Nick Pickles, UK Public Policy Manager at Twitter, said:

If I was a party, I would want it to be down within minutes. We would work with political parties – they can file high-profile requests, they have my email, they have my phone number. It would be down within the hour. We can escalate these internally. During election campaigns our relationships with political parties are extremely important.

Resourcing versus technology

Interestingly, it was clear from the evidence that Facebook is taking a more manual approach to moderation, heavily staffing its operations teams globally to assess offensive content. It receives millions of reports every week, and to deal with them it is increasing the number of moderators from 4,500 to 8,000 people.

Facebook is focusing on giving users the tools to easily report things they find abusive or offensive, and is training staff to assess whether content needs to be taken down. Evins said:

We now have 4.5k people – that number will grow to 8k pretty soon – people that mainly review content. So it is educating candidates from around the country: if they see something that bothers them, how do they flag that, how do they report that, and what sort of steps do we take.

Milner added:

So we aim to get to every report within 48 hours, but most are much, much less than that.

We certainly are aware of occasions when people feel we’ve been too slow or have made the wrong call. When you are making millions of decisions like this every week, no matter how good your training is, how much you really try to get it right every time, you can’t always get it right. So that’s one of the reasons we appreciate having channels like the one with the parliamentary authorities and campaign parties, who let us know when something has gone wrong and we have made the wrong call. In terms of being too slow, we are putting more resources into it – an indication that we know we need to invest more in this area, and that’s exactly what we’re doing.

However, it was clear from Twitter’s response that whilst reporting from users is an essential tool for the platform, it is also taking a far more automated approach to dealing with inappropriate content. Pickles said:

It’s a very high priority for us. Our CEO has said that safety is a number one priority. In the last two weeks, we have added considerably to the list of improvements we have made. We realise we need to do more to enforce the rules, and make sure people feel safe on the platforms. We’re making rules clearer, strengthening them in some cases, enforcing them better. We’re dealing with breaches through automation. The election brought to bear the challenges around free expression and abuse that we’ve got to deal with.

It was twelve clicks to report a tweet, it’s now three or four. We’re using technology to identify things without user reports. The headline figure is that we’re taking action on ten times more accounts than this time last year, due to internal machine learning. We have a dedicated reporting flow for violent threats, and for hateful conduct based on the UN Declaration protected characteristics. We will use technology to prioritise the reports – for example, something that is two weeks old, that is reported by someone who’s not mentioned, is lower priority.

It is automatically triaged. This is the most complex challenge. But we are also essentially adding warning labels to offensive messages, where we think a message is offensive but doesn’t break the rules. It can hide offensive replies. It will have false positives, but it is there. We’re working on changes to make clearer, in emails and in-app, exactly which rule was broken, on which account. Currently users don’t get told what rule they broke. If you’re warned, we walk you through a process where you have to delete the Tweet. It’s been likened to a ‘speed awareness course’, to make sure people have understood the rules – it’ll be on the screen, in the text.
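
Pickles’s description of automated triage suggests a simple scoring heuristic. Here is a minimal, purely illustrative sketch in Python – the categories, weights and numbers are my own assumptions, not anything Twitter disclosed – of how reports might be ranked by category, age and whether the reporter was directly mentioned:

```python
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    VIOLENT_THREAT = "violent_threat"    # has a dedicated reporting flow, per the evidence
    HATEFUL_CONDUCT = "hateful_conduct"  # likewise
    OTHER = "other"


@dataclass
class Report:
    category: Category
    age_days: float           # how long ago the reported tweet was posted
    reporter_mentioned: bool  # was the reporter @-mentioned in the tweet?


# Hypothetical base weights: severe categories start higher in the queue.
BASE_SCORE = {
    Category.VIOLENT_THREAT: 100,
    Category.HATEFUL_CONDUCT: 80,
    Category.OTHER: 40,
}


def priority(report: Report) -> float:
    """Higher score = reviewed sooner. Illustrative heuristic only."""
    score = BASE_SCORE[report.category]
    # Stale content ranks lower ("something that is two weeks old... is lower priority").
    score -= 2 * report.age_days
    # A report from someone directly targeted outranks a bystander report.
    if report.reporter_mentioned:
        score += 30
    return score


queue = [
    Report(Category.VIOLENT_THREAT, age_days=0.1, reporter_mentioned=True),
    Report(Category.OTHER, age_days=14, reporter_mentioned=False),
    Report(Category.HATEFUL_CONDUCT, age_days=1, reporter_mentioned=False),
]
for r in sorted(queue, key=priority, reverse=True):
    print(r.category.value, round(priority(r), 1))
```

The ordering simply encodes the trade-off Pickles describes: a fresh violent threat reported by its target is reviewed before a stale, low-severity report from an onlooker.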

Anonymity versus authentication

Another interesting divergence between the two platforms is their approach to user registration and authentication. Facebook takes the view that users should use their real identities, and it takes measures to ensure this is the case, whilst Twitter believes that anonymity has its benefits, despite some of the safety concerns it can introduce.

Facebook’s Milner said:

There is no such thing as anonymity on Facebook – we have a real name policy, we’ve had that since the very beginning of Facebook. You have to use your real name on the service. It doesn’t mean people don’t try – much as in the UK we have a rule you can’t go above 70mph on the motorway, lots of people do.

So people do try to open accounts in a fake name, and try to use that as a source of abuse. We know that internet trolls will try to use Facebook, because it’s a big community – if you want impact and you’re a troll, and you can get onto Facebook and stick around, then that can be quite helpful. But we do have teams that particularly focus on this, focus on authenticity. We really encourage people, if ever they think an account – someone they are suspicious of – is fake, to tell us about it, and then we can look into it. We can also require people to prove they are who they say they are.

Meanwhile, Twitter’s Pickles said:

It’s more complex than whether you’re anonymous or not. An important part of our platform is being able not to use your real name – parody accounts, or accounts in support of someone, such as @JC4PM. Or where using your real name might threaten your safety – whistleblowers, or human rights activists in the Middle East. Second, the vast majority of people have given us information about who they are – like phone numbers. 80% of UK users access Twitter on their phone.

If you give the police a phone number, they can easily identify someone. Something we’re doing aggressively is pushing people, saying ‘if you don’t give us your phone number, you can’t come back on the platform’. It reminds people they’re not anonymous. Korea introduced a rule banning anonymity, but it made no difference to abuse and was abandoned.

Responsibility

Finally, both companies made comments about where their responsibilities lie and where the line is for how they moderate content – the crux of the debate around protecting users. Does a platform like Twitter or Facebook count as a publisher, and is it therefore responsible for content in the same way a newspaper might be? Are the platforms doing enough to judge the nuances of context? Both had a viewpoint.

Facebook’s Milner said:

In terms of the issue of responsibility, I don’t think there will ever be a time when I work for this company where politicians will say you’ve done enough. Or the media commentators: you have done enough, you’ve done everything you can possibly do. They will always expect us to do more, and we expect that.

I think the point we would not accept is a suggestion that you should be responsible for everything that appears on Facebook, as if you were a publisher. As if you were running the BBC’s website. That is not the nature of this service. This is a service which enables millions of people to have a voice that they’ve never had before; people to communicate with one another freely and in an environment where they are not expecting – and indeed it would be completely wrong for – their speech to be monitored.

And that somebody is checking on all of their speech to make sure that it has not fallen foul of our rules, let alone the law of the land. That is not the world we live in, and I don’t think that’s a world that any of us want.

And it is on this point that both the platforms agree – they are not publishers in the traditional sense and should not be held accountable in the same way. Twitter’s Pickles said:

A lot of the people who say we should be publishers have a commercial interest in that as well. Something we do forget is users. The vast majority of users have an overwhelmingly positive experience – are they willing to wait for their posts to be moderated? If we suggested that for candidates, we’d immediately be accused of bias if some went up faster than others. It’s a great strength of democratic debate that candidates can speak to the electorate directly. Leaders, world leaders, can speak to the public directly.

One of the critical things that relies on is that the communication happens in step with the real world. So whether that is tragedies or natural disasters, citizens can speak to their leaders in real time, to say ‘events are happening in the real world, and we can’t wait for tomorrow’s papers for debate’. That’s a very powerful tool for democracy, and the idea that we should roll it back and go back to a world where we treat everything as published… is not a solution to some of the civic issues we have around politics, or safety and abuse.

And I say that seeing the tweets about controversial comments. There is the complexity of secondary publishing – for example, YouTube keeping up a video of Fox News playing an ISIS video. A news organisation had already made the decision to publish it. There are immense risks with technology companies making decisions in that context.

My take

This is complex, and legislators and law enforcement agencies are struggling to keep up, or even to understand how to tackle it. I think it’s going to take years of debate before we get to a place where we are all comfortable – but resourcing, the use of technology and putting control into users’ hands are all key.

Image credit - Image free for commercial use
