Friday rant - Facebook's disinformation spreading, ad-server-economy must go

Neil Raden, January 15, 2021
Big Tech has been on the defensive lately, and for good reason. What was once perceived as a way to foster democracy has given way to algorithmic dystopia. But Facebook's algorithmic dangers are tied to an ad-server-based model we must dismantle. Rant time.


Facebook is perfect for amplifying and spreading disinformation at lightning speed to global audiences. And it doesn't do this by accident. Its single motivation is to hold and grow its massive list of active members to expand its ad revenue.

To do that, you give people what they want, whether that's a place to post pictures of their dogs or a haven for QAnon conspiracy theories that have already provoked violent incidents.

Why would a comfortable billionaire want to preside over the destruction of his democracy? The answer is simple. Members are everything. Ad revenues depend on them. There are a staggering 2.7 billion monthly active users as of this writing. Facebook simply cannot be selective about who joins, so it sweeps up every radical miscreant, conspiracy theorist, and disinformation disruptor. Facebook is a borderless nation-state, with a population nearly as vast as China and India combined, and it is governed primarily by secret algorithms.

How did the Internet evolve into a giant ad server?

The Internet (not yet the Web) first offered people ways to "chat" through message threads on services like CompuServe. AOL roared in front with a dial-up service offering e-mail and instant messaging. Like Facebook, AOL depended on ad revenue from its roughly seven million subscribers. The first commercial websites, in the late nineties, were largely B2C (Business to Consumer). Amazon was not the first, but it was the most prominent. The difference was that consumers paid for those products and services. Early ad-based, free sites were eventually dwarfed by social web monsters like Facebook, Google, Twitter, YouTube, and Instagram.

Today, all of these sites use some form of profiling and ad-placement optimization. Twitter, for example, gives members the option of displaying "Most Relevant" or "Most Current" tweets. In the beginning, Twitter streamed the most current feeds, but as its audience swelled that became almost impossible, so it developed algorithms to selectively feed you what it found most relevant. As a subscriber, though, you have the option to turn that off, unlike on Facebook.

Google has become so annoying with its profiling algorithms that I am going to evaluate other search engines. In the not-too-distant past, one could type a search string into Google and, like a miracle, get precisely what one was seeking. Not anymore. No matter how you phrase your search, Google's PageRank and profiling algorithms return a full page of irrelevant nonsense driven by its current ad-promotion relationships.

Instagram, WhatsApp, YouTube, to one degree or another, all do the same thing.

What do the algorithms do?

That's a secret. But in the documentary "The Social Dilemma," a former senior Facebook executive said the algorithms have become so complex that even their creators are not always sure what they are doing. It is that LIKE button that drives them. The instant you LIKE (or dislike) something, the algorithm evaluates the content you just liked, combines it with a mountain of historical data about you, and adjusts your profile, which contains hundreds of attributes and determines what Facebook will stream to you.
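The actual algorithm is secret, but the general shape of engagement-driven ranking is well understood. A minimal sketch, with invented topic names, weights, and function names (none of this is Facebook's real implementation), might look like:

```python
# Illustrative sketch of engagement-driven feed ranking. The topic labels,
# weights, and update rule are all hypothetical, for explanation only.

def register_like(profile, post, learning_rate=0.1):
    """Nudge the user's interest profile toward a liked post's topics."""
    for topic, strength in post["topics"].items():
        profile[topic] = profile.get(topic, 0.0) + learning_rate * strength
    return profile

def score(profile, post):
    """Relevance = overlap between the user's interests and the post's topics."""
    return sum(profile.get(t, 0.0) * s for t, s in post["topics"].items())

def rank_feed(profile, posts):
    """Stream posts most-relevant-first, not most-recent-first."""
    return sorted(posts, key=lambda p: score(profile, p), reverse=True)

profile = {"dogs": 0.5}
posts = [
    {"id": "news", "topics": {"politics": 1.0}},
    {"id": "puppy", "topics": {"dogs": 1.0}},
]
register_like(profile, posts[1])   # one LIKE reinforces the 'dogs' interest...
feed = rank_feed(profile, posts)   # ...so dog content rises to the top
```

The feedback loop is the point: every LIKE strengthens the profile attributes that caused the post to be shown, so the feed narrows toward whatever you already engage with.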

What harm, specifically, do they cause? Long before Facebook went on the defensive about the events in Washington D.C., these issues were being tracked. A New York Times article, "Facebook Admits It Was Used to Incite Violence in Myanmar," chronicled Facebook's misdeeds globally:

Myanmar military officials were behind a systematic campaign on Facebook to target a mostly Muslim Rohingya minority. Human rights groups say this campaign has led to murder, rape, and forced migration. 

Facebook has also faced sharp criticism for being too slow to act in the Philippines, another country where its ubiquity has made it a platform for spreading hate speech and false information. In some countries, Facebook's experiments have helped amplify fake stories, while its slower response in other developing countries, including Sri Lanka, has allowed rumors to spark violence.

WhatsApp has begun to play a leading role in elections, particularly in developing countries, where it is used by political parties, religious activists, and others to spread information. In India's recent elections, some WhatsApp messages were used to incite tensions, while others were found to be false.

In July 2020, over 1,200 groups and companies joined an advertising boycott of Facebook to pressure the social networking menace to halt the spread of misinformation and hate speech on its various platforms. Major brands that joined the boycott included CVS Health, Verizon Communications, and Coca-Cola.

It had no effect whatsoever, and only served to demonstrate Facebook's control of the ad-spending opportunities that large brands require. The boycott, however, was just one of many public backlashes Facebook has faced in the last few years:

  • In April 2017, it was confirmed that groups, including enemies of the US, had used the social network to influence the 2016 presidential election.
  • In March 2018, it was revealed that the political consulting firm Cambridge Analytica had illegitimately accessed millions of users' data, which it used to target voters on behalf of Donald Trump's presidential campaign. Facebook's complicity is assumed.

In October 2020, the House Judiciary subcommittee on antitrust released its recommendations (PDF) for reforming laws to check the continued emergence of digital monopolies like the behemoths Apple, Amazon, Facebook, and Alphabet. The 450-page report suggested that Congress strengthen antitrust laws in ways that could result in parts of those businesses being separated.

Say what?

Antitrust? Do you know how long it takes to bring an antitrust action to its conclusion? Years. We don't have years. By then, Facebook will have provided a platform for every hate group in the country and given them a podium from which to mature into serious domestic terrorists. And citizens will be thoroughly confused by contrived misinformation that is not broadcast but narrowcast right to them by "the algorithm," while they are shielded from other streams.

My take

So what can we do about it?

Option 1 - stop using it

Option 2 - boycott its advertisers

Option 3 - somewhere in that 2.7 billion membership, there must be someone who can write legislation. Facebook has people who monitor what goes out, but who knows what their criteria are? It also raises issues of censorship and free speech - and of who gets to judge what's acceptable.

Facebook feeds on ad revenue because it provides its service for free, and it's not reasonable to assume 2.7 billion people would pay a subscription to cover its roughly $70 billion in revenue. Some people wouldn't object to paying $30/year for a better Facebook, but quite a few would. Perhaps some sort of subscription would lessen the dependence on ads and, hence, the algorithm. But Facebook would also have to do a better job of scrubbing the hate groups, which I doubt it would.
