Technology is the first thing blamed by customers who are struggling to get banks, utilities, councils, insurance providers, telcos, and other organisations to understand them and the unique challenges they face. Humans are like that: they're not as easy to categorise as programmers seem to believe. They're not robots.
But the real problems are the rules – the policies – that organisations write before paying a technology company to cast them into software as digital 'statues' of their belief system. Increasingly, these rules tend to be blunt, ill-conceived, or driven by political ideology. Worse, they're often not designed to benefit customers or citizens at all, but to serve the interests of remote (and frequently offshore) investors.
Combined with poor or incomplete data in a networked, highly automated world, the output of this 'read only' process, as interpreted by machines, can be a dystopian nightmare for any customer who doesn't fit in.
Because the employees in many organisations obey the same rules as the software does – more often than not by reading them off a computer screen – awkward customers can be made to disappear. And I don't just mean vanish out of the door to a competitor; I mean disappear from society as a whole. They can become invisible (or 'blocked', as in this chilling episode of Charlie Brooker's Black Mirror).
In Part 2 of this report, I'll introduce you to one of 'The Blocked' to prove my point. But first let me explain my thinking.
Rise of the robots
Recently, I had the privilege of taking part in a robotics workshop for schoolchildren – thanks to my ownership of a real humanoid robot. Robotics, coding, AI, and even computer ethics are all things that today's primary school kids are being taught. Indeed, the UK government says that every child in that age group must know what algorithms are: the sequential, operational steps behind every piece of software.
Turn left, turn right, switch on, switch off.
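To make the idea concrete, here is a minimal sketch of what such an algorithm looks like as code. The robot, its commands, and its state are entirely hypothetical, invented for illustration; the point is only that an algorithm is a fixed sequence of operational steps, executed in order, with no judgement involved.

```python
# A toy 'algorithm': a fixed sequence of operational steps.
# The Robot class and its commands are hypothetical, for illustration only.
class Robot:
    def __init__(self):
        self.heading = 0       # direction faced, in degrees
        self.powered = False

    def turn_left(self):
        self.heading = (self.heading - 90) % 360

    def turn_right(self):
        self.heading = (self.heading + 90) % 360

    def switch_on(self):
        self.powered = True

    def switch_off(self):
        self.powered = False

# The algorithm itself: turn left, turn right, switch on, switch off --
# each step follows the last, exactly as written, nothing more.
robot = Robot()
robot.switch_on()
robot.turn_left()
robot.turn_right()
robot.switch_off()
print(robot.heading, robot.powered)  # 0 False
```

The left and right turns cancel out and the robot ends where it began: a harmless outcome here, but the same blind obedience applies whatever the steps happen to say.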
The good teachers who run these workshops have risen to the challenge by sharing a catchy ditty, 'The Algorithm Song', and getting their youngest pupils to sing it in class. By drumming the word into them – via the tune of a recent hit single – the hope is that these kids will leave primary education with the understanding of algorithms' importance that the government demands.
But as I will go on to explain, we're all singing The Algorithm Song every day, and it's not always to such a happy tune. And our fascination with robots is really just a minor-key refrain in what is already the soundtrack to this century: the rise of automation and machine-based decision-making, based on increasingly anti-social rules.
But let's stay with that robot refrain for a moment. At a Cloud Week event in Paris earlier this month, Fujitsu's platform chief, Chiseki Sagawa, predicted that by 2025, humanoid robots will be commonplace in homes and offices. Sagawa is biased – machine-men have long been part of the cultural narrative in his home country, Japan – but that's not to say that he's wrong.
This week, for example, the first hotel staffed entirely by robots opened in his country: the Japanese-speaking receptionist is a humanoid, but if you want an English-language interface you're directed towards a dinosaur (what are they trying to tell us?).
The stated aim of that venture isn't to increase the sum of human happiness; it's to reduce HR costs and increase efficiency, and thus make the hotel's shareholders wealthier. We could describe that as the core rule of the system, its basic condition. On top of that rule are written the algorithms that make it real – step-by-step operational instructions that each robot follows to reach a pre-defined outcome.
So essentially, the hotel's human guests enter a money-making machine that also offers them a place to sleep. And as the Internet of Things grows, these types of highly automated environments can only proliferate, removing job after job from the human employment market.
However, the Japanese obsession with manufacturing synthetic copies of themselves is really a piece of epic misdirection – except insofar as guests receive a simulated 'human' experience that's cheaper than employing people, of course. Creating a mechanical device that can walk, talk, and look like a human is simply a physical engineering challenge; it has little to do with the programming, intelligence, or intention behind the scenes.
Robots don't need a human face, or any face at all. They're already embedded in the machinery of Western society.
In a networked, big data-driven world, the real issues are algorithms, along with the data that those algorithms rely on to codify business directives and create desired, predestined outcomes. In that environment, systems increasingly rely on systems that rely on other systems. But what if the original algorithms behind it all were based on a bad idea?
Who tells the tellers?
Anyone who's walked into a branch of their bank this century is well aware that today's financial services companies are now, quite literally, software programs fronted by dwindling numbers of human beings. Those employees – many excellent, professional, committed people among them – follow step-by-step instructions in any scenario, and are forbidden to depart from them.
To go into a branch to open an account, seek investment advice, take out a mortgage, or request a credit card, is to be walked through a preset series of slides by a human. It's purely a matter of rules and algorithms, of you satisfying preset conditions.
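That preset walk-through can be sketched in code. Everything here is invented for illustration – the rules, the thresholds, and the customer fields are hypothetical, not any real bank's criteria – but the shape is the point: an ordered list of conditions, applied without discretion.

```python
# A hypothetical, simplified version of the kind of rule set an employee
# reads off a screen. Rule names, thresholds, and fields are all invented.
RULES = [
    ("minimum_income", lambda c: c["annual_income"] >= 20_000),
    ("credit_score",   lambda c: c["credit_score"] >= 650),
    ("years_at_address", lambda c: c["years_at_address"] >= 3),
]

def assess(customer):
    """Walk the customer through the preset conditions, in order.
    The first failed rule ends the conversation; no discretion allowed."""
    for name, condition in RULES:
        if not condition(customer):
            return f"declined ({name})"
    return "approved"

# A customer who doesn't fit one category -- recently moved house, say --
# is declined regardless of any other qualities they may have.
print(assess({"annual_income": 55_000, "credit_score": 720,
              "years_at_address": 1}))  # declined (years_at_address)
```

Notice that the code has no branch for context, explanation, or exception: the customer either satisfies every preset condition or they don't.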
In a sense, it's insulting to everyone involved to even attempt to give it a human face. Whatever qualities banks' employees might have as imaginative, talented, skilled individuals, they may as well be robots, like the ones in that Japanese hotel. (Perhaps this is why the 'big four' high street banks in the UK – Barclays, NatWest, Lloyds, and HSBC – lost a quarter of a million account switchers last year.)
Despite their protestations to the contrary, banks don't make 'products' and they never have. Today's banks are really giant compliance statements that sit behind a set of automated processes, which are designed to maximise remote shareholder profit, not public service. They've become sets of increasingly uncompromising algorithms, with which everyone – employees and customers alike – must comply. I'll prove this thesis to you in Part 2 of this report.
That's why smartphone-only accounts are becoming so popular; they're more convenient and remove human pretence and ambiguity. But their popularity also reveals that we're beginning to prefer machine-based decision-making to human interaction. Or rather, to prefer using the software ourselves to having a human being read it to us.
Now, this is all very well (perhaps) when the data that those systems rely on is accurate, and if the algorithms that drive them have been conceived to increase the sum of human happiness. (We'll leave it to posterity to decide if that's been the case.) But what if the data feeding those systems is wrong?
And what if the data-gathering is itself sometimes the result of flawed algorithms that obey equally flawed policies? And what if the underlying rules on which any one of these systems is based are no longer in customers' – or even society's – best interests?
In Part 2, I will share a horrifying true story.
My take (so far)
At Cloud Week 2015 in Paris earlier this month, Constellation Research's Ray Wang talked of a Digital Bill of Rights, which, among other things, would protect people's right to opt out of digital systems, and prevent them from being oppressed by their own data. That's great, but it misses two key points.
First, it's increasingly difficult to opt out of digital society, unless you opt out of basic services too.
And second, it's often not 'our' data that oppresses us, but someone else's. I'll explain why in Part 2 tomorrow.
Many of the systems that organisations use now force their employees to follow rules to the letter, removing any potential for human skills, talents, intelligence, and empathy to intervene.
So it's not that the robots are coming, it's that they're already here. They're us – unless we urgently take steps to put the humanity back into our customer- and citizen-facing systems.
In Part 2 of this series tomorrow, algorithms attack and find unwilling victims.