Suicide prevention, Facebook style. A net good in an AI-driven world?

SUMMARY:

Technology may be neither inherently good nor inherently evil, but Facebook’s AI-powered suicide intervention program pushes existing legal, moral, and ethical boundaries.

In the US, we live in a world where everything about the individual is knowable. The giants in this business are Facebook and Google, but it is Facebook where my attention falls today.

Consider this. In George Orwell’s dystopian masterpiece 1984, published in 1949, the hapless Winston Smith has a startling moment of perception:

…if you want to keep a secret you must also hide it from yourself. You must know all the while that it is there, but until it is needed you must never let it emerge into your consciousness in any shape that could be given a name.

In the age of big data, AI, and limitless processing power, Smith’s fears seem not only prescient but quaint. Much like the Berlin Wall, the privacy barriers have mostly tumbled over the past decade, without a shot fired.

Facebook and Google make money by selling the information they capture about us. Age, location, gender, tastes, brand preferences, buying habits, and many other social signals we willingly stream to them through our posts, comments, likes, and other online activity are the meat from which companies create campaigns that target us with goods and services.

With that goal in mind, brands have developed powerful algorithms that can get inside our heads and predict, with considerable accuracy, what we might do or buy next. The addition of AI tools like pattern recognition makes the ability to predict future behavior even more promising, and more problematic.

Is this an invasion of our privacy? Sure. But it’s a Faustian bargain most of us willingly make in exchange for the convenience of being able to find information instantly, to be entertained, and to feel connected with family and friends in a “free” online community.

They are not stealing our secrets; we are willingly giving them away.  At what point, though, does this capacity to influence and control private behavior become dangerous and ethically invasive?  How far is too far?

These are topics that play into the ongoing but still emerging debates around the intersection between privacy and the ethical use of technology to influence our behaviors.

These thoughts were triggered by Facebook’s announcement that it is expanding its program to identify and intervene when a user is expressing thoughts of suicide or self-harm in posts or live videos.

Facebook launched the program in March, but it required the user or a friend to file a report seeking help.  Now, the company has begun using AI pattern recognition software to detect posts, comments or live videos where someone might be expressing suicidal thoughts.

The artificial intelligence element works proactively and doesn’t require a human to file a report first. If the AI identifies a post as “likely to include thoughts of suicide,” it sends it along to one of Facebook’s specially trained reviewers, who can then contact emergency services. Said Facebook Vice President of Product Management Guy Rosen:

Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts. This is in addition to reports we received from people in the Facebook community. We use pattern recognition to help accelerate the most concerning reports. We’ve found these accelerated reports—that we have signaled require immediate attention—are escalated to local authorities twice as quickly as other reports.
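Facebook has not published the mechanics of this system, but Rosen’s description suggests a familiar triage pattern: a classifier scores each post, posts above a threshold enter a queue, and human reviewers pull the most concerning items first. Below is a minimal, hypothetical Python sketch of that pattern. The keyword-based risk_score stand-in, the REVIEW_THRESHOLD value, and every name here are illustrative assumptions, not Facebook’s actual implementation.

```python
# Hypothetical sketch of a proactive-detection triage pipeline.
# All names, thresholds, and the scoring logic are illustrative
# assumptions; Facebook has not published its implementation.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class FlaggedPost:
    priority: float  # negated risk score, so the riskiest post sorts first
    post_id: str = field(compare=False)
    text: str = field(compare=False)


def risk_score(text: str) -> float:
    """Stand-in for a trained classifier estimating P(self-harm)."""
    # A real system would use a learned model over text (and video)
    # signals; this keyword check only lets the sketch run end to end.
    signals = ("can't go on", "end it all", "no reason to live")
    return 0.9 if any(s in text.lower() for s in signals) else 0.1


REVIEW_THRESHOLD = 0.5  # assumed cutoff for routing a post to human review
review_queue: list[FlaggedPost] = []


def ingest(post_id: str, text: str) -> None:
    """Score a new post and enqueue it for review if it looks concerning."""
    score = risk_score(text)
    if score >= REVIEW_THRESHOLD:
        # heapq is a min-heap, so push the negated score to pop
        # the highest-risk post first.
        heapq.heappush(review_queue, FlaggedPost(-score, post_id, text))


def next_for_review() -> FlaggedPost | None:
    """Trained reviewers pull the most concerning flagged post first."""
    return heapq.heappop(review_queue) if review_queue else None


ingest("p1", "Had a great day at the beach")
ingest("p2", "I feel like I just can't go on anymore")

post = next_for_review()
if post:
    print(f"Escalate {post.post_id} to a reviewer for a possible wellness check")
```

The point of the sketch is the ordering Rosen describes: the most concerning reports jump the queue rather than waiting their turn, which is what lets them reach local authorities faster.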

In the context of preventing suicides, “proactive detection” sounds benign and a sterling example of good citizenship. But it is easy to think of dozens of ways that pattern recognition software could be misused.

For example, suicide is sometimes associated with mental illness, which introduces the possibility that medical records might be compromised or that mentally fragile users might be outed and then subjected to online bullying.

We know from press accounts that Facebook cooperates with the National Security Agency, and it would be hard to imagine that there is not already a similar effort to identify potential jihadists.

That’s good if it stops a terror attack, but what if it leads to the wrongful incarceration of people who are merely expressing their religious or political feelings? Remember the terrible panic right after 9/11, when the Immigration and Naturalization Service rounded up thousands of people of Arab descent and held them for months until they could prove they weren’t among the plotters?

As Adam Khan tweets:

Facebook using Artificial Intelligence to detect suicidal posts (including in livestreams) but how will it make sure that this isn’t turned into a tool to scan for political dissent, aid profiling by governments? Who should be providing oversight into how this works/should work?

To which, Facebook’s chief security officer Alex Stamos responded:

The creepy/scary/malicious use of AI will be a risk forever, which is why it’s important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in.

It is no accident that Facebook is rolling out its suicide prevention program throughout the world in the next few months, with the notable exception of the EU, which has far more stringent privacy rules and an even more robust General Data Protection Regulation (GDPR) going into effect in May 2018.  Data protection laws across the EU ban processing of an individual’s sensitive personal data without their explicit permission.

My take

Should we be cynical? The FT certainly is:

Facebook’s co-founder is not the first Silicon Valley figure to show signs of political inadequacy — nor will he be the last. But he may be the most influential. He personifies the myopia of America’s coastal elites: they wish to do well by doing good.  When it comes to a choice, the “doing good” bit tends to be forgotten.

There is nothing wrong with doing well, especially if you are changing the world. Innovators are rightly celebrated. But there is a problem with presenting your prime motive as philanthropic when it is not. Mr Zuckerberg is one of the most successful monetisers of our age. Yet he talks as though he were an Episcopalian pastor.

Facebook says it has been working on suicide prevention tools for more than ten years, and that its approach was developed in collaboration with mental health organizations such as Save.org, the National Suicide Prevention Lifeline, and Forefront Suicide Prevention, with input from people who have had personal experience thinking about or attempting suicide. The tools are also available globally, with the help of over 80 local partners, in whatever language people use Facebook in. That is a good thing we should all welcome as a positive, technology-based outcome.

For most of us in the US, the data genie is out of the bottle. We sold our souls for endless free cat pictures and a fantastic home encyclopedia. The danger that someone somewhere will use AI technology for evil purposes is genuine and probably inevitable.

That still leaves open the burning question: what might be the unintended consequences, and who will be the gatekeepers? We know how it worked out for Winston Smith. Will it be the same for the rest of us?

Facebook is the company to which all eyes turn. It has to set the best example. But Facebook’s seemingly non-stop series of gaffes, on topics where it tries to do right but then shoots itself in the foot, does not inspire confidence. Adult supervision, anyone?

Image credit - screen grab from the film "1984."
