The ideology of algorithms needs a serious discussion

Euan Semple, March 28, 2018
Summary:
The debate on how to manage algorithms is starting to unfold. Understanding the ideologies that underpin how algorithms are created, and what happens as they operate, is critical for the future of our machine-driven world.

As we live more of our lives online, we inevitably leave digital trails. Some of those trails we are aware of, some of them we are not. The patterns we make with our collective online activities have enormous value. They can also be used for good, or for ill. The biggest challenge is working out whose definition of good or ill we allow to prevail. That's where the ideology of algorithms comes in.

Every day, it seems, high-profile stories emerge in the mainstream media of algorithms being used to manipulate public sentiment. As a result, many people are choosing to constrain their online lives out of concern about what is being done with the trails they leave and the information they share. This is a shame. Being smart about what we do online and why, and what the consequences are, is a good thing. Closing ourselves off from some of the amazing opportunities that the online world affords us would be a great loss.

The patterns that we leave online are interpreted – and, controversially, manipulated – by algorithms: mathematical equations that take data, assign “meaning” to that data, and then take action on the basis of that process. About five years ago I started using the phrase "the ideology of algorithms" to express the significance of this increasingly powerful capability.

No neutrality

For all the apparent innocence of mathematical formulae, there is no such thing as a neutral algorithm. They all have a purpose, a context, a degree of bias. In writing the mathematical formula there is an end in mind. That end steers the outcome.

The algorithms, and their associated prioritization, are determined by the very small groups of people who write them, and those people are influenced by the context in which they are working. What seems sensible, reasonable, and a priority to them in that context may not be to other people. And even if their biases appear appropriate at the time the algorithm is written, they may not remain appropriate with the passage of time.
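To make that concrete, here is a minimal, hypothetical sketch in Python. The Post fields, the signals, and every weight are invented for illustration; this is not any platform's actual formula. The point is only that the priorities of the small group writing the formula become the "truth" its users see.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int          # engagement signal
    ad_revenue: float    # commercial signal
    fact_checked: bool   # quality signal

def score(post: Post) -> float:
    # Each weight is a choice made by whoever writes the formula; shifting
    # them shifts what every reader sees, with no change to the data itself.
    w_clicks, w_revenue, w_quality = 0.5, 0.4, 0.1
    return (w_clicks * post.clicks
            + w_revenue * post.ad_revenue
            + w_quality * (1.0 if post.fact_checked else 0.0))

posts = [
    Post("Outrage bait", clicks=900, ad_revenue=12.0, fact_checked=False),
    Post("Careful analysis", clicks=40, ad_revenue=1.5, fact_checked=True),
]

# The "feed" the user sees is just this ordering. Under these weights the
# outrage bait wins; re-weight heavily towards quality and the order reverses.
feed = sorted(posts, key=score, reverse=True)
print([p.text for p in feed])
```

Nothing in the data changed between one set of weights and another; only the authors' priorities did.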

Additionally, given the globalized nature of the online world, what might be acceptable in one country, to one group of people, may be completely unacceptable to others living on the other side of the world. I read today that Microsoft is going to start monitoring Skype calls for “offensive language”. Who gets to decide what offensive is and on what basis? Never mind the future, we are already being affected by the ideology of algorithms.

We need to talk, everyone needs to listen

The challenge at the moment is that there aren’t enough people having the sort of big-picture discussions that we need to have about these issues. Indeed we have no place in which to have these discussions at a sufficiently strategic, globalized level.

Governments are running to keep up with the advancing pace of technology as it is, and asking them to preside over these big-picture, philosophical, ideological challenges is unrealistic. Watching the Information Commissioner’s Office struggle to deal with the current Cambridge Analytica fiasco is just the tip of the iceberg.

What about academia? Individual academics are thinking hard about these problems, and there is an increasing number of books on the subject. But they are not in a position to directly influence the outcomes of the actions of technologists, who at the moment are being left to work this stuff out for themselves.

We also need to address how siloed academia has become, and how important it is to combine the insights of humanities and philosophy students with the fast-paced thinking of the technology departments. We need to influence this thinking at an early stage to begin to build in the wider context and varied values that are more likely to result in tools that move the human race forward rather than treat it as cannon fodder for data-mining systems.

At the moment, what is acceptable is largely determined by commerce: by the commercial platforms that manifest these technologies, and by those who pay for their services. For services such as Facebook and LinkedIn, those payers are advertisers. In effect, we are allowing a combination of commercially driven technologists and commercially driven marketing teams to have an inordinate amount of influence on how we see the world. They already do.

Whether it is the items appearing in our Facebook news feeds, or the search results we see on Google, what we think is the truth is determined by a ridiculously small number of people. Someone said recently that if you want to get away with murdering someone, you should hide the body on the second page of Google search results, because nobody will ever find it.

Do we need to establish a global forum in which to decide what shape we want our future lives to take – the priorities that we want the algorithms to move towards on our behalf?

Black Box AI

As more and more machine intelligence becomes self-learning and potentially starts to outstrip its originators, the challenge becomes exponentially greater. When we have Black Box AI, artificial intelligence that is so complex and learns so fast that we can’t keep up with it, the consequences of our original prioritization become even more critical.

When you get two Black Box AI systems talking to each other in ways that make two and two add up to five, we risk ending up in some very dystopian scenarios. A computer scientist used the analogy of The Sorcerer’s Apprentice: getting the brooms to help you carry water seems like a good idea at the time, but if you don’t anticipate all eventualities things can go horribly wrong. Say in the future you set up some Black Box AI to eradicate cancer and then find that it is killing humans, because that too eradicates cancer. Why? You forgot to state the obvious.
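To see how easy it is to forget to state the obvious, here is a deliberately naive sketch of that cancer example reduced to code. Everything in it is invented for illustration; the point is only that a constraint we never write down simply does not exist for the optimizer.

```python
def cancer_cases(world: dict) -> int:
    # Hypothetical measure the system is told to minimize.
    return world["patients_with_cancer"]

def naive_objective(world: dict) -> float:
    # What we wrote: fewer cancer cases is better. Full stop.
    return -cancer_cases(world)

def intended_objective(world: dict) -> float:
    # What we meant: fewer cancer cases, but human lives are not expendable.
    # The "obvious" constraint has to be stated explicitly to count at all.
    if world["humans_harmed"] > 0:
        return float("-inf")
    return -cancer_cases(world)

cure_everyone   = {"patients_with_cancer": 0, "humans_harmed": 0}
eliminate_hosts = {"patients_with_cancer": 0, "humans_harmed": 7_000_000_000}

# Under the naive objective, both "solutions" score identically.
print(naive_objective(cure_everyone) == naive_objective(eliminate_hosts))  # True
print(intended_objective(eliminate_hosts))  # -inf: ruled out once stated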

Who to trust?

So who do we trust to decide what values are reflected in these algorithms? At a conference recently, I was talking with some former BBC colleagues about the potential for blockchains to help manage digital identity, a problem that the BBC had seriously considered getting involved in more than fifteen years ago.

Their thinking was that if people didn’t trust the government with their data, and didn’t trust companies like Microsoft (as it was in those days), then perhaps a public service institution such as the BBC could step into the role of trusted provider of digital identity.

Sadly the BBC appears to have lost that trusted position over the past decade or so but the problem still exists. Who do we trust with our data, and more importantly, who do we allow to decide what it means?

Even within our organizations, we need to consider who is in charge of the algorithms and what worldview they are encoding.

We have huge potential for organizational transformation. We could code our systems to move towards a decentralized autonomous model, a rigidly hierarchical centralized system, or anything in between.
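As a toy illustration (all of the names and numbers here are invented), the same expense-approval feature in a workflow system can encode either worldview, depending entirely on a rule someone writes into the code:

```python
# Two policies for the same feature. Which one ships is an ideological
# choice about the organization, not a technical one.
HIERARCHICAL = {"approval": "line_manager", "spend_limit": 0}
AUTONOMOUS   = {"approval": "none",         "spend_limit": 500}

def needs_approval(amount: float, policy: dict) -> bool:
    # Hierarchical: every purchase routes upward. Autonomous: staff decide
    # for themselves below a threshold.
    return policy["approval"] != "none" or amount > policy["spend_limit"]

print(needs_approval(120.0, HIERARCHICAL))  # True  - the manager must sign off
print(needs_approval(120.0, AUTONOMOUS))    # False - trusted by default
```

Once such a rule is embedded in the system, it quietly becomes "how we do things here".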

We’ve already seen the challenges of large-scale enterprise management systems that are hard to change once they are implemented. It could get much worse. This is why the “non-digital” parts of a business need to get over their “I don’t do technology” stance and start to think hard about the impact these tools will have on organizational culture and management style.

That’s if we have any managers left. There is a sort of implicit assumption that, whatever happens, we will need managers. But at a workplace conference in Sydney, when I asked who would be happy interacting with a chatbot rather than their current line manager, more than half the audience raised their hands! So if we have chatbot-assisted management, what values do we code them with, and who gets to decide?

Awareness is the key

Thankfully, there is increasing awareness of these issues. At a recent event in Copenhagen, a group of concerned people from the technology industry created a manifesto to hold that industry to a higher standard. They had become uncomfortable about being asked to write addictive or manipulative software purely to drive clicks and, with them, advertising revenue, and felt the need to do something about it.

But we all need to take responsibility for moving the arguments on, making sure that we expect the highest standards from our legislators, politicians, and technologists. We can no longer afford to be passive consumers.

People need to take more responsibility for their information environment. They need to be more aware of the risks of bias. They need to be more critical in their thinking. They need to think harder about the sources of the information that they are reading. They need to be more responsible about what they are sharing and why. Collectively we need to grasp the situation and not run away from it.

Deleting your Facebook account is only a short-term fix. These tools are going to continue to become more important in our lives, and the issues they raise won’t go away until we deal with them.

Questions remain

We are already coding ideology into our systems. What sort of society do we want? Who gets to decide? Do we want a chaotic, individualistic free-for-all at one end of the spectrum, or a fascist state at the other? Our old ideological frameworks of socialism and capitalism are looking tired and out of date. Do we need to invent a new -ism? Will the algorithms decide what that -ism is on the basis of the patterns they see of what works and what doesn’t? Will they know better than we do what is ultimately good for us? How will we feel about that?
