How sad should we be that Facebook played with our emotions?

Derek du Preez, June 29, 2014
It emerged this week that the social networking giant conducted a study amongst a few hundred thousand users to see whether or not it could manipulate our emotions – and it succeeded (just).

It should be no surprise to anyone that Facebook uses a powerful algorithm to alter what we see on our individual news feeds. I've noticed this in action on many occasions – for instance, the people I stalk the most appear higher up on my feed, if my significant other does anything on Facebook it immediately appears when I log in, and the social network tries its best to throw ads and other information at me that it thinks might be relevant. I'm sure you've all noticed something similar. However, it seems that up until now we all assumed that this was largely being done for some sort of mutual benefit, where we get the content we want and Facebook sells stuff off the back of knowing us really damn well.

But apparently this isn't always the case. It has emerged today that the social networking giant, for one week only (that we know of), actually conducted an experiment on 689,000 unsuspecting users to see if it could influence their emotions. Yup, in collaboration with two US universities, Facebook controlled people's news feeds to expose them to either 'positive' content from their friends, or 'negative' updates, to see if it had any direct impact on their mood and posts. Surprise, surprise: Facebook found that if you are exposed to more negative content, you become more negative. More positive updates, and you yourself become a bit more chipper. Not only this, but if Facebook showed users neutral content (i.e. boring), users then posted less content themselves. Facebook labeled it a "contagion" effect, where you don't need direct interaction with a person to be impacted by their mood. Although the results weren't statistically astounding, when you consider the scale of Facebook itself, the impact is definitely notable.

Needless to say, since the news emerged, the internet has basically gone into meltdown. Here are some of my favourite reactions on Twitter, from people getting, well, a bit emotional:

This story is likely to rumble along over the coming weeks as debates begin to form about whether or not the study itself was legal and/or ethical. It is likely that Facebook is covered by its lengthy T&Cs on the legal front, but whether or not it breached ethical guidelines for “informed consent” is another question. It seems that users had no idea that the study was taking place and that the study was approved by an institutional review board on the basis that Facebook “manipulates people's news feeds all the time”. The company has put out the following 'official' statement:

“This research was conducted for a single week in 2012 and none of the data used was associated with a specific person’s Facebook account. We carefully consider what research we do and have a strong internal review process. There is no unnecessary collection of people’s data in connection with these research initiatives and all data is stored securely.”

However, a more interesting statement was made on the Facebook page of one of the co-authors of the study, Adam Kramer. He said that at the time of conducting the research, Facebook felt that it was "important" to investigate whether people felt left out when their friends posted positive content, and also wanted to find out whether exposing people to friends' negativity led them to avoid visiting the social network. Kramer admits that the authors "didn't clearly state" their motivations in the paper. However, he also apologises for carrying out the research and admits that perhaps it wasn't worth all the stress. He wrote:

“The goal of all of our research at Facebook is to learn how to provide a better service. Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. I can understand why some people have concerns about it, and my co-authors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.

“While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then. Those review practices will also incorporate what we’ve learned from the reaction to this paper.”

But alas, Facebook's moves to placate fears and concerns haven't quite hit the right note, and there are now even calls for legislation from Members of Parliament in the UK to protect people from this happening again. Jim Sheridan, an MP and a member of the Commons media select committee, told the Guardian:

"This is extraordinarily powerful stuff and if there is not already legislation on this, then there should be to protect people. 

"They are manipulating material from people's personal lives and I am worried about the ability of Facebook and others to manipulate people's thoughts in politics or other areas. If people are being thought-controlled in this kind of way there needs to be protection and they at least need to know about it."

And this is where my main concern lies. Personally, I don't really think Facebook's experiment is the end of the world, given that the report was published publicly and the work was done in collaboration with two respected US universities. It almost certainly wasn't ethical, and Facebook is probably quite lucky that it didn't have any scarring effect on any of its users, but for me this research answers a question that needed answering. Can we be manipulated by the likes of Google, Facebook and Twitter? And now we have an open and honest answer: yes. Clay Johnson, the co-founder of the firm behind Barack Obama's online campaign in 2008, posted the following points to Twitter that I think are particularly pertinent:

His points echo those of psychologist Robert Epstein, who last year brought into question the influence of Google in national elections. He conducted a number of experiments based on a fictitious search engine called 'Kadoodle', where search results and rankings were manipulated to favour a given political candidate by pushing up positive links and pushing down negative ones. His results suggested that those exposed to positive links about a political candidate were up to 15% more likely to favour said candidate, compared to those that saw impartial results. One might argue that this is a role that the media has played for decades, with newspapers and networks publicly aligning themselves to certain political parties to promote their interests. However, the public largely has a choice with the media about what they want to consume – whereas Google holds a pretty strong monopoly on web-based search.

That being said, there is no evidence whatsoever that Google, or any social networking site, has used its power on the web to influence public thought or opinion. And as we can see with the Facebook news this week, there is a big risk if any were to do so – the public don't like being messed with, and I suspect that trust levels between Facebook and its users will have taken a significant knock today.

However, a bit of paranoia is probably healthy here. I'm not totally sure how I feel about legislating this, as I think legislation often struggles to keep pace with the internet. But industry standards should be developed at the very least, and we should be having an open conversation about how companies like Google and Facebook can manipulate users, and make some decisions about when this is okay and when it is not. We can't just assume that it isn't happening. I don't really see a problem with Facebook pushing more emotive content to the top of my news feed if it thinks that this means I'm going to post more stuff, but I would have a problem if Facebook was using its powers to manipulate my political thinking.

But the good thing to come out of all of this is that we now have some hard evidence that these companies CAN manipulate us, and it is all coming from the horse's mouth – so there's not going to be an argument about its validity. Let's hope that Facebook takes a proactive approach off the back of this, leads the discussion on what this means for its future, and consults with its users about what it can and can't do. If these open discussions don't happen, I don't really know what that means for how society interacts with the internet – it becomes a far more dangerous game.