Twenty-four hours later, a fair few of the Senators on the hearing panel have had a chance to come to the same conclusion. For example, Independent Senator Angus King told the three legal reps on Day 2 of the hearing:
I'm disappointed that you're here and not your CEOs. We would appreciate seeing the top people who are actually making the decisions.
Democrat Senator Joe Manchin echoed the point:
I wish that your CEOs would be here. They need to answer for this.
But it was Democrat Senator Dianne Feinstein who weighed in most fiercely, declaring:
I must say, I don’t think you get it. You’re General Counsels. You defend your company. What we’re talking about is a cataclysmic change…We’re not going to go away, gentlemen. And this is a very big deal. I went home last night with profound disappointment. I asked specific questions, I got vague answers. That just won’t do.
You have a huge problem on your hands. The US is going to be the first of the countries to bring it to your attention and other countries are going to follow I’m sure. Because you bear this responsibility. You created these platforms…and now they’re being misused. You have to be the ones who do something about it - or we will.
It’s still a possibility of course that the CEOs of the social media leaders will be compelled to come before the Senate committee. But for now, the front line of defense remains the legal counsel. That said, Facebook’s senior management was savvy enough to realise that it was going to have to come up with some kind of positioning on the occasion of the release of another set of stellar quarterly numbers.
So it was that CEO Mark Zuckerberg told Wall Street analysts that at the end of a quarter with more than $10 billion in revenue, up 47% year-on-year, there were wider issues that needed to be considered:
None of that matters if our services are used in a way that doesn't bring people closer together, or if the foundation of our society is undermined by foreign interference. I've expressed how upset I am that the Russians tried to use our tools to sow mistrust. We built these tools to help people connect and to bring us closer together, and they used them to try to undermine our values. What they did is wrong, and we are not going to stand for it.
Now that that has been said, it’s down to Facebook to follow through, he added:
The first step is doing everything we can to help the US government get a complete picture of what happened. We've testified in Congress over the past couple of days about the activity we found in last year's election. We're working with Congress on legislation to make advertising more transparent. I think this would be very good if it's done well.
Even without legislation, we're already moving forward on our own to bring advertising on Facebook to an even higher standard of transparency than ads on TV or other media. That's because in traditional media, there's no way to see all the messages an advertiser is showing to different audiences. We're about to start rolling out a tool that lets you see all of the ads a page is running and also an archive of ads political advertisers have run in the past.
This isn’t just an issue that impacts Facebook, said Zuckerberg; it demands an industry-wide response:
We're working with other tech companies to help identify and respond to new threats because, as we've now seen, if there's a national security threat involving the Internet, it will affect many of the major tech companies, and we've announced a number of steps to help keep this kind of interference off our platform. This is part of a much bigger focus on protecting the security and integrity of our platform and the safety of our community. It goes beyond elections, and it means strengthening all of our systems to prevent abuse and harmful content.
We're doing a lot here, with investments both in people and technology. Some of this is focused on finding bad actors and bad behavior. Some of this is focused on removing false news, hate speech, bullying, and other problematic content that we don't want in our community. We already have about 10,000 people working on safety and security, and we're planning to double that to 20,000 in the next year to better enforce our community standards and review ads. In many places, we're doubling or more our engineering efforts focused on security. And we're also building new AI to detect bad content and bad actors, just like we've done with terrorist propaganda.
His comments were, of course, backed up by COO Sheryl Sandberg, who contributed:
When I was in Washington a few weeks ago, I made it clear that we are determined to do everything we can do to minimize abuse going forward. As Mark said, we're investing heavily in new technology and people to review ads and posts. This will enable us to look more closely at the content of the ads, targeting, and the advertiser who submits them, as well as tighten our ads policies, particularly for ads directed at social and political issues. We believe that ads are important to free expression and we will continue to accept ads on issues, but we will also do our part to elevate the quality of that discourse.
Facebook will be proactive here, she pledged:
Transparency helps everyone keep advertisers accountable for their messages. We're working with Congress on new requirements for online political advertising, but we are not waiting for legislation. We're building a tool now that will allow anyone to see the ads that pages are running, even if those ads are not targeted to them. We will test it soon in Canada and then in the US in the coming months.
For ads related to US federal elections, we'll start sharing even more information, including an archive of past ads, the total amount spent, and demographics about the people the ad reached. We're also going to require more thorough documentation from these advertisers and will label their ads so it's clear who paid for them. We believe these actions will set a new standard for transparency in online ads.
Because the interference on our platform went beyond ads, we're also increasing transparency around organic content from pages. We're looking at ways to provide more information about who's behind a political or issue-based Facebook page. We believe this will make it harder for deceptive pages to gain large followings and make it easier for us to identify malicious activity.
All of this pivots on ensuring that the Facebook community can maintain trust in the platform, said Zuckerberg:
People do not want false news or hate speech or bullying or any of the bad content that we're talking about. So to the extent that we can eradicate that from the platform, that will create a better product, which will also create a stronger long-term community and better business as well.
We're going from 10,000 people working on safety and security to 20,000 – more than doubling that. We're doubling – in some cases more than doubling – our engineering teams focused on security. We're building AI to go after more areas of harmful content and to find fake accounts and other bad actors in the system.
For investors though, there’s a downside here. Getting and maintaining those trust levels is going to come with a price tag, cautioned Zuckerberg:
I expect that all of these things will make our product better over the long term, but we will incur the expenses a lot sooner as we ramp up these efforts. I also just think that going forward, we're going to be investing in these things at a much higher level because we realize that this is important, not only for our community and this company, but it's part of our responsibility to society overall.
Whether Wall Street’s as happy about that in practice remains to be seen.