Facebook’s decision to appeal the recent £500,000 fine from the UK Information Commissioner’s Office is perfectly within its rights, says the Commissioner herself — but the company’s statement about the appeal misrepresents the regulator’s findings.
The social media giant said its decision to appeal the penalty is a point of principle on the basis that the fine has implications “far beyond just Facebook”, according to a statement by Anna Benckert, Facebook’s Associate General Counsel for EMEA:
[The ICO finding] raises important questions of principle for everyone online which should be considered by an impartial court based on all the relevant evidence.
It was this tack that Richard Allan, EMEA Vice President of Policy Solutions, took in his evidence to the International Grand Committee of legislators from nine national governments in London yesterday, when he said:
What we are challenging is the judgments…It is not the amount of the fine that is the issue here. The issue is that there is language in the judgments that goes to the heart of the thing we have been discussing, which is: how do you assign responsibility between the first party and the third party, when you have one of these developer relationships? We think that is very important to test, not just for us but for the whole sector. The appeal is the route—and the Information Commissioner herself has said that it is an appropriate route—for us to have that legal question looked at in some detail.
Again, just to understand the concerns, some of the language in [the finding] suggests that if I have an email or a message that has been sent to me by somebody and I share it with a third party, that may be illegal and cause problems back to the first party. That is the kind of question that it is really important we get right, because it will affect the entire ecosystem of internet applications.
But Information Commissioner Elizabeth Denham is having none of this. Giving evidence to the Committee later in the day, she said:
After a nine-month investigation, we issued a fine in October of this year. It was the maximum fine that was available to us under the previous [pre-GDPR] regime and I stand by the findings and the enforcement action by my office. That said, any organisation—any company—has the right of appeal. But I was really disappointed by Facebook’s statement in the context of the appeal, because I do think that they misrepresented the fact of our finding and the rationale for issuing that fine.
The statement that Facebook published last week said that our investigation found no evidence to suggest that the information of Facebook’s 1 million UK users was ever shared between Dr Kogan and Cambridge Analytica or used by affiliates in the Brexit referendum. That was in Facebook’s statement, and that much is correct about our investigation.
But our fine was not about whether or not UK users’ data was shared in this way. We fined Facebook because it allowed applications and application developers to harvest the personal information of its customers who had not given their informed consent—think of friends, and friends of friends—and then Facebook failed to keep the information safe. So, in UK terms, that is principle 1 and principle 7 breaches. That is pretty basic data protection 101; that is about notice and transparency and control. It is not a case of no harm, no foul.
More to come?
She added that this was “just one example of what Facebook allowed to happen”, stating:
Facebook broke data protection law, and it is disingenuous for Facebook to compare that to email forwarding, because that is not what it is about; it is about the release of users’ profile information without their knowledge and consent. That is messages and likes; that is their profile…I think that’s the basis of our fine. We found that there wasn’t consent, there wasn’t meaningful information, there wasn’t meaningful notice. And terms of service can’t trump the law. Because the profiling and the use of personal information, complying with meaningful consent and notice is in tension with their business model…Regulators need to enforce the law. I need to do my job and that’s what I’m doing here. Facebook has the right of appeal, but I think they shouldn’t misrepresent our findings and the basis of our monetary penalty.
Denham went on to confirm that there are ongoing investigations into Facebook’s behavior and that any future fine is unlikely to be as low as the £500,000 the firm has already received:
The £500,000 fine that we issued against Facebook was the maximum fine available to us in the previous regime. Because the contraventions happened before May 25 2018, half a million pounds was the maximum. I’ve been clear that had the maximum fines available under GDPR been available, we would have issued a much more significant fine. But that’s hypothetical because that wasn’t available to us. We now have up to 4% of global turnover as do other EU data protection regulators. That’s a significant fine for a company and it would have an impact on the bottom line of that company.
But I also think our findings, and the fine that we have issued, are important because we have found their business practices and the way their applications interact with data on the platform to have contravened data protection law. That’s a big statement and a big finding.
Under GDPR, the new legal arrangements, the Irish Data Protection Commissioner is the lead legal authority for Facebook because Facebook Europe is based in Ireland. We are working with our Irish colleagues. We have referred matters of an ongoing nature, concerns that we have with Facebook right now, we’ve referred those on to our Irish colleagues. As a competent authority, we are available to help them with that. There are several investigations ongoing under the lead of the Irish Data Protection Commissioner, who I have great confidence in, as well as other regulators in the EU.
Denham was not the only witness to question Allan’s testimony. The Committee also heard from Ashkan Soltani, former Chief Technologist at the US Federal Trade Commission (FTC), who was blunt when asked about aspects of Allan’s statements:
This is false.
Of particular concern were Allan’s remarks about changes made in 2014 to developer policies governing access to user data, when he said:
What developers had under the first version was the ability to ask you to install their application, and if you agreed to it and agreed to certain permissions, then they could also access some of the information that your friends shared with you. Version two stopped that. So in neither version was it full access to data. In version one it included some access to friends’ data where they’d given permission; in version two, that access was removed.
No it wasn’t, insisted Soltani, telling Committee members:
At the very beginning of the hearing, around 11 minutes in, Mr Allan corrected one of the comments from you all, specifically that apps in version 1 of the API did not have unfiltered access to personal information. In fact, that is false. In the 2011 FTC settlement, the FTC alleged that if a user had an app installed, it had access to nearly all of the user’s profile information, even if that information was set to private. I think there is some sleight of hand with regards to V1, but this was early V1 and I believe it was only addressed after the settlement.
Additionally, in the same complaint, the FTC found that Facebook misrepresented their claims regarding their app oversight programme, specifically Facebook’s verified apps programme, which was a detailed review designed to offer extra assurances to help users identify applications they can trust. The FTC found that that review was actually non-existent and the company was not doing anything to oversee those apps. Again, that is in the complaint.
Additionally, there were some statements made that the apps would only have access to information if a user installed them and consented to that access. I helped The New York Times in their investigation and verification of the whitelisted apps programme and I have some tweets in that regard that show the screenshots of this access. Specifically, apps were able to circumvent users’ privacy settings or platform settings, and access friends’ information as well as users’ information, such as birthday and political affiliation, even when the user disabled the platform. Those are the pre-installed apps.
Additionally, through a Facebook programme called instant personalisation, some apps such as Yelp and Rotten Tomatoes would automatically get access to users’ personal information, even without the user installing them. So, simply by being a Facebook user and visiting the Yelp website, Facebook would transmit to Yelp your home town, your location, your preferences, etc. So those are some discrepancies, I think.
There are some clear conclusions, he argued:
I think it was best laid out in Mr Allan’s 'win-win-win' comments, that 'the platform enables things we are not going to do and they are going to do'. Facebook gets more engagement from the platform; Facebook users do things that Facebook does not normally provide; and developers get access to a large consumer base.
Another way to describe this is that Facebook pays app developers to build apps for them using the personal information of their users. So, if I am a company and I want to have an app like FarmVille, or a third-party Blackberry client written for me, I would have to pay hundreds of thousands of dollars in development time to have that app built. But Facebook says instead, 'Developers, please come and spend your engineering hours and time in exchange for access to user data'.
Asked if he believed that Allan had known that he was providing incorrect information, Soltani said:
It is a very nuanced technical issue. So if you describe version 1 of the API as something from, say, 2012, after the FTC settlement, to 2014, and you exclude whitelisted apps and Instant Personalisation, then his statements are generally true. But if you include all the carve-outs, and we know the carve-outs to be significant—we know the carve-outs of the whitelisted apps and of API access to be significant—then it was deceitful…There were all these caveats, so the question is, ‘Do the exceptions swallow the rule?’. It is false to make statements to say, categorically, that this information is unavailable when in fact it is…When companies make technically nuanced and perhaps deceitful statements, it kind of gets under my skin…time and time again companies can exploit these technical nuances.
Quizzed on what he saw as Facebook’s culture, Soltani pointed to CEO Mark Zuckerberg’s performance before US legislators earlier this year, which he described as akin to a game of tennis:
'You ask me a question and I’m going to answer you in a way that you can’t challenge, and I’m going to win that rally'. - there is that contemptuousness, that ability to feel like the company knows more than all of you and all the policy makers. You might have seen the announcement recently that Facebook is going to create a “Supreme Court” to deal with Fake News. Imagine the level of confidence one must have to say, 'I’ve studied this issue for two years and I now feel qualified to create a Supreme Court to deal with fake news issues'. On the one hand it speaks to the power of Facebook as a platform; and on the other it speaks to the contempt that the company—at least the senior leadership of the company—has for an existing democratic process and multiple regulatory frameworks.
I’ll leave this for now with the words of Damien Collins, Chair of the Grand Committee:
We have recorded what was said, and based on the evidence that we have just heard, we are certainly in a position to follow up on that and go back to Facebook to ask them to explain the discrepancies in the evidence that we have heard. I think we could certainly take that further with regard to Richard Allan’s evidence and the misleading nature of some of the answers that he has given.
If only Zuckerberg could have found some time in his busy schedule to come and provide some answers himself…