"We didn't mean to upset you!" But is Facebook's apology good enough to fend off regulators?
- Summary:
- Facebook's Sheryl Sandberg sort of says she's sorry. But is that enough to draw a line under the ‘scandal’ of manipulating users, or is there more trouble ahead?
We didn't mean to upset you.
That was the apology from Facebook Chief Operating Officer Sheryl Sandberg last week over revelations that the firm had conducted a social experiment to manipulate users' emotions by positioning either positive or negative posts in their news feeds.
But is that enough to draw a line under the ‘scandal’ or is there more to come back at Facebook on this one?
Speaking at an event in India, Sandberg wasn’t exactly penitent in her apology, justifying it as a version of something that all firms do at some point. She also fell back on the tried-and-tested PR excuse of ‘we didn’t explain ourselves properly but if we had you’d have understood’:
This was part of ongoing research companies do to test different products, and that was what it was. It was poorly communicated. And for that communication we apologise. We never meant to upset you.
But this wasn’t just a bit of market research or some A/B testing. Facebook had been secretly altering the news feeds of nearly 700,000 users to find out whether exposure to positive or negative words in posts had a direct impact on the kind of status updates users created, or on how much they engaged with the platform.
Related stories
- How sad should we be that Facebook played with our emotions? (diginomica.com)
- Sheryl Sandberg not sorry for Facebook mood manipulation study (washingtonpost.com)
The tests, which took place back in 2012, only came to light as a result of a research paper. Data scientist Adam Kramer, who led the project and authored the report, has also weighed in with a justification:
We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook.
He added:
I can understand why some people have concerns about it, and my co-authors and I are very sorry for the way the paper described the research and any anxiety it caused.
Not enough
But neither of these apologies is going to shut down concern and anger about the experiment, especially in light of further allegations in the Wall Street Journal from a former Facebook staffer that the research team operated with virtually no supervision and that this was not the only experiment of its type.
The newspaper quoted Andrew Ledvina, who worked as a Facebook data scientist from February 2012 to July 2013:
There's no review process, per se. Anyone on that team could run a test. They're always trying to alter people's behaviour.
In what might be read as tacit acceptance of Ledvina’s claims about the lack of process, Kramer argues that new oversight structures are now in place:
The experiment in question was run in early 2012, and we have come a long way since then. Those [new] review practices will also incorporate what we've learned from the reaction to this paper.
There’s been no word from the Federal Trade Commission about whether it plans any action or investigation into Facebook’s conduct here, but outside of the US it’s a different story, with Europe’s data protection regulators preparing to get tough.
For example, Ireland’s Office of the Data Protection Commissioner states:
This office has been in contact with Facebook in relation to the privacy issues, including consent, of this research. We are awaiting a comprehensive response on issues raised.
The UK Information Commissioner’s Office is also on the case:
We’re aware of this issue and will be speaking to Facebook, as well as liaising with the Irish data protection authority, to learn more about the circumstances.
Now, Ireland is terribly keen to get cloud firms investing in the country - indeed Facebook's European HQ is there - and doesn't typically make too much fuss about data protection issues, while the UK's position is usually tough, but fair. So these are two of the more 'friendly' nations in Europe.
Just wait until Germany weighs in on this one. Back in February, the Higher Court of Berlin ruled that Facebook must follow German data protection laws, and that several parts of Facebook’s terms of service and privacy policies are against the law in Germany.

Facebook’s defence will be based on its assertion that none of the data used was associated with a specific person’s Facebook account and that the study was conducted with privacy rights in mind.
It may also point to a clause in the small print of membership that data can be used for research (although that’s a dodgy one as the clause post-dates the 2012 study).
Whether that’s enough to convince the data protection-sensitive European regulators remains to be seen.
The industry analysis
French Caldwell, a Fellow in Gartner Research who leads governance, risk and compliance research, pitches an interesting argument, citing the Belmont Report, commissioned by Congress following public anger over the Tuskegee syphilis study and now the basis for the ethical guidelines governing any academic research involving human subjects:
“Respect for persons” is a first principle of Belmont’s ethical guidelines. This principle requires informed consent of the research participants. If informing participants of the exact nature of the research would compromise the validity of the study, the participants don’t have to be told the exact nature of the research, but they do have to be told that there will be experiments. The general notice to users of Facebook that their data could be used for research purposes may be fine for experiments involving pre-existing data, but it does not explicitly address live experimental research.
He adds that ‘respect for persons’ also requires that vulnerable persons be protected, which could include minors or people in a precarious emotional or mental state. Noting that the research report highlights a “withdrawal effect”, which addresses the question of how emotional expression affects social engagement online, Caldwell states:
Such a lingering effect of the individuals involved in behavioral research makes it important that protection for vulnerable individuals is established. These persons would need to be identified and then additional measures taken to prevent any negative outcomes for them. It is not adequate to do a statistical calculation and then from that determine that the risks to the group of potentially vulnerable persons is minimal. For a waiver of the disclosure requirement, the specific individuals should be identified and action taken to protect them as individuals. Furthermore, when the research is concluded, the Belmont principles require that research subjects are debriefed. It’s not stated in the study as to whether a debriefing occurred.
Caldwell makes the point that other social media firms also tap into user data to conduct studies into why people behave and engage in certain ways, but argues that there needs to be more transparency:
With that business model, perhaps in the world of social media businesses, this experiment may have seemed mild. Certainly, the researchers appear to have been caught off guard by the public backlash. If nothing else positive comes of this experiment, it offers a sober lesson on the fine line between the business of marketing and the intentional manipulation of emotions.
Facebook states that it has significantly upgraded its research safeguards since 2012 when this experiment was conducted. Perhaps Facebook, and other social media firms, could consider openly sharing the details of those safeguards.
Such transparency might allay the user concerns that Forrester’s Fatemeh Khatibloo suggests could lead to mass defections from the site:
The potential for users to abandon Facebook is real. Facebook has novel data to analyze, and long term, that could change marketing practices significantly. The kinds of data that Facebook is starting to exploit are highly unique. It could actually combine evergreen affinities with contextually specific emotional states to change how brands buy media and measure performance.
But the short term implications may cut its opportunities off at the knees. If Facebook, with all of its research and experimentation, causes users to feel like lab rats, it’s possible that they will leave the site in droves. That outcome could severely limit brand reach — and that could signal the end of Facebook’s marketing customers, especially given today’s already reduced reach.
Perhaps the last word (for now) should be left to Kramer who ruefully admits:
In hindsight, the research benefits of the paper may not have justified all of this anxiety.
My take
- Frankly, 'we didn't mean to upset you' is patronising and insufficient in the extreme as far as post-getting-caught apologies go.
- It should also be noted that Sandberg has not apologised for what Facebook did, but for the fact that it upset some people.
- This is one of those instances where something you knew really had been going on comes to light in such a way that you can't turn a blind eye to it anymore - and your reaction becomes indignation.
- That said, I seriously doubt that many Facebook users are sufficiently outraged to abandon ship.
- What is needed though now is increased transparency - and not just from Facebook, but also from Google and Yahoo! et al.
- If you want me to be a lab rat, I want to know why, what for, and for how long.