The comparison between the social media industry and the tobacco industry isn’t a new one - the likes of Salesforce CEO Marc Benioff and others have been making that connection for years. But this week’s Congressional hearings in Washington have brought the theme back to the fore as Senator Richard Blumenthal declaimed:
Facebook and big tech are facing their big tobacco moment. Facebook knows its products can be addictive and toxic to children. They value their profit more than the pain that they cause to children and their families.
Now, we’ve been here before, quite a few times. The usual form on these occasions, particularly in the US but also in Europe, is for legislators to sit sternly in a committee room, fire a few shots at whatever Zuckerberg-avatar has drawn the short straw and been sent along to make a token appearance, and mostly waste their allotted interrogation time grandstanding, posturing and generally proving to be no flaming use whatsoever.
What’s different this time of course is Frances Haugen, the much-publicised Facebook whistleblower who’s been giving her insider testimony of how the online empire is really run. Even if you hadn’t watched her 60 Minutes TV special at the weekend, her evidence on Tuesday before the Senate Commerce Sub-Committee would have hit home as she stated:
The choices being made inside of Facebook are disastrous for our children, our public safety, our privacy and for our democracy. And that is why we must demand Facebook make changes… The company intentionally hides vital information from the public, from the US government, and from governments around the world. The documents I have provided to Congress prove that Facebook has repeatedly misled the public about what its own research reveals about the safety of children, the efficacy of its Artificial Intelligence systems, and its role in spreading divisive and extreme messages.
Of note, Haugen is not in favor of breaking up Facebook, arguing that this won’t address the underlying issues. As well as calling for a new regulatory body that would have oversight over tech platforms, she advocates changing Section 230 of the US Communications Decency Act, which affords website publishers immunity from liability for third party content on their services, to exempt decisions that are made about algorithms.
Facebook fights back
So how has Facebook responded to this latest unwelcome spotlight? Pretty predictably. Firstly, they’ve kept Zuckerberg out of the way. As the latest accusations were being made about his empire, Zuck had some important sailing he had to do. Meanwhile Facebook Apologist-in-Chief Nick Clegg had already pre-briefed the troops on what the firm suspected would come out of 60 Minutes and how to feel about it.
So only two things remained to do - first, attack Haugen’s credibility and second, reach for the moral high ground. Inevitably the former was the easier ask and so it was that Facebook issued a statement belittling her role at the company and her insight:
Today a Senate Commerce subcommittee held a hearing with a former product manager at Facebook who worked for the company for less than two years, had no direct reports, never attended a decision-point meeting with C-level executives — and testified more than six times to not working on the subject matter in question. We don’t agree with her characterization of the many issues she testified about.
OK, so send Zuck along to put us all right on a few points, then? No? Thought not.
Then there was the shameless attempt to reclaim the moral high ground and steer the conversation onto safer territory. This took the form of another airing for a by now all-too-familiar Facebook gambit, first seen back in 2019 when Zuckerberg begged governments and regulators to step in and keep firms like his in check. In an article for the Washington Post, he - or another of the Clegg-ian avatars - wrote:
Every day we make decisions about what speech is harmful, what constitutes political advertising, and how to prevent sophisticated cyber-attacks. These are important for keeping our community safe. But if we were starting from scratch, we wouldn’t ask companies to make these judgments alone. I believe we need a more active role for governments and regulators.
And that’s since become a recurring meme. Regulate me, regulate me! Help me to help myself! Step up to the mark, Big Government! So, that’s the playbook Facebook turned to this week, telling Congress that:
It’s time to begin to create standard rules for the internet. It’s been 25 years since the rules for the internet have been updated, and instead of expecting the industry to make societal decisions that belong to legislators, it is time for Congress to act.
That’s them told then!
Zuck's back on dry land
But while Zuck wasn’t talking to Congress, he was ready to speak to the Facebook faithful via his own platforms, sending out a morale boosting message that read:
Now that today's testimony is over, I wanted to reflect on the public debate we're in. I'm sure many of you have found the recent coverage hard to read because it just doesn't reflect the company we know. We care deeply about issues like safety, well-being and mental health. It's difficult to see coverage that misrepresents our work and our motives. At the most basic level, I think most of us just don't recognize the false picture of the company that is being painted.
Haugen’s claims don’t make sense, he protested, and there’s been “mischaracterization of research”:
At the heart of these accusations is this idea that we prioritize profit over safety and well-being. That's just not true. For example, one move that has been called into question is when we introduced the Meaningful Social Interactions change to News Feed. This change showed fewer viral videos and more content from friends and family -- which we did knowing it would mean people spent less time on Facebook, but that research suggested it was the right thing for people's well-being. Is that something a company focused on profits over people would do?
The argument that we deliberately push content that makes people angry for profit is deeply illogical. We make money from ads, and advertisers consistently tell us they don't want their ads next to harmful or angry content. And I don't know any tech company that sets out to build products that make people angry or depressed. The moral, business and product incentives all point in the opposite direction.
And then it was back to the ‘regulate me!’ meme:
Similar to balancing other social issues, I don't believe private companies should make all of the decisions on their own. That's why we have advocated for updated internet regulations for several years now. I have testified in Congress multiple times and asked them to update these regulations. I've written op-eds outlining the areas of regulation we think are most important related to elections, harmful content, privacy, and competition.
Does the last sentence suggest that even Zuck is getting fed up with the continued airing of this self-serving party line?
He concluded on a note of corporate self-pity, sympathizing with Facebook staffers who see "the good work we do get mischaracterized":
I'm worried about the incentives that are being set here. We have an industry-leading research program so that we can identify important issues and work on them. It's disheartening to see that work taken out of context and used to construct a false narrative that we don't care. If we attack organizations making an effort to study their impact on the world, we're effectively sending the message that it's safer not to look at all, in case you find something that could be held against you. That's the conclusion other companies seem to have reached, and I think that leads to a place that would be far worse for society.
Clegg should have left him on the boat.