Digital Dystopia - when algorithms attack

Chris Middleton, July 29, 2015
Summary:

In Part 2 of this personal report on the dangerous rise of automation and machine-based decision-making, Chris Middleton shares a horrifying vision of the present.

[Image: "Yes or no?"]

Four years ago, someone I know moved half a mile down the road from one apartment to a bigger one, in the same town that he'd been living in for a decade. A week later, he received a threatening letter from a council 100 miles away. It told him that he owed nearly £2,000 in unpaid Council Tax (a UK-based local tax) for a property he'd never lived in.

The letter was genuine, the automated result of a computer algorithm; no human being was involved.

Fearing identity theft, my friend phoned the council that had written to him. They informed him that if he didn't pay, they'd take steps to recover the money, such as by seizing property from his new address. He explained that he'd never even visited their town, let alone lived there, and so couldn't possibly owe them this money.

They responded by telling him that someone with the same name as him had moved out of a house there and disappeared, owing back-taxes. A national database flagged my friend as having moved at roughly the same time – albeit weeks or months apart – and therefore (they said) he must be the same person: their absconding debtor. At that point, two completely different people, linked only by a (very) common name, became one person in the databases that govern our lives.
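To make the flaw concrete, here is a minimal, hypothetical sketch (in Python) of the kind of naive record-linkage rule the council's system appears to have applied: match on a common name plus a roughly coincident move date, and merge two identities. Every field name, date, and threshold here is an illustrative assumption, not the council's actual code; the point is what the rule never checks.

```python
from datetime import date

# Illustrative records only - the real system's fields are unknown.
absconded_debtor = {"name": "John Smith", "moved_out": date(2011, 5, 2)}
new_resident     = {"name": "John Smith", "moved_in":  date(2011, 7, 18)}

MATCH_WINDOW_DAYS = 90  # assumed tolerance for "moved at roughly the same time"

def is_same_person(debtor: dict, resident: dict) -> bool:
    """Naive linkage: same name + moves within a loose window = 'same person'.

    Note what is NOT consulted: National Insurance number, age,
    previous address, credit history - exactly the evidence that
    would have distinguished two strangers.
    """
    same_name = debtor["name"] == resident["name"]
    days_apart = abs((resident["moved_in"] - debtor["moved_out"]).days)
    return same_name and days_apart <= MATCH_WINDOW_DAYS

if is_same_person(absconded_debtor, new_resident):
    print("Flag as absconding debtor; begin recovery action.")  # no human review
```

A rule like this optimises for recovery speed, not precision: two strangers collapse into one record the moment their names and dates loosely align.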

My friend's age, blemish-free tax records, National Insurance number, former address just streets away, and good credit history were all irrelevant to this particular system – thanks to an algorithm and a poorly designed rule that said “recover money quick, by any means necessary”. Once that instruction kicked in, machine-based decisions and algorithms set about dismantling my friend's financial reputation, piece by piece.

Like anyone who's found themselves trapped by poor data and/or absurd algorithms, he discovered that his presumed, entirely automated guilt placed all the onus on him to extricate himself from the mess.

And, thanks to another algorithm, the machine-based judgement generated by the flawed system had already been logged with the credit reference agencies. So now it was official: he was a tax-evader. With no right of appeal against his machine-generated 'sentence'.

Banking on disaster

At the same time, my friend was trying to open a new bank account for a company he'd set up – a legal requirement for any new venture. He'd moved in with his partner, had spare cash, and wanted to do imaginative things with his life. At that time, he had a good income; no debts; no criminal record; a significant sum in his personal bank account (which he wanted to invest); and a 50 per cent stake in a large, valuable property, having just sold his own. He also had no credit cards or store cards. He preferred to buy what he could afford (it seems almost quaint, doesn't it?).

To people of his parents' and grandparents' generations, my friend would have been seen as an upstanding citizen, a model of financial probity. But the core rule on which the Financial Services sector used to be based (debt bad, savings good) has been inverted: debt is now good (for the shareholders that own it), as long as the debtors don't abscond. Today, the debtor might be an entire country, of course.

This simple, but absurd, rule now underpins a Western economy that favours debt and shareholder value over public service and community benefit, and under the current government, that situation is getting much, much worse. Algorithms are written based on that rule, and employees and customers must comply, as otherwise companies' remote investors make less money. Everyone loses except the shareholders, and if the system fails we bail the shareholders out so we can lose all over again. We know this to be true.

[Image: "Says it all"]

The conclusion is obvious: automation favours the algorithm writer, because it's based on rules that create favourable outcomes for them. (Remember the Japanese robot hotel in Part One of this report? It's the same principle, scaled up to national level.)

But back to my unfortunate friend. According to the credit reference agencies (which are universally seen as holding accurate, benchmark data), and according to every bank in the UK (all of which rely on machine-based compliance systems), my friend was not only an undesirable customer who was incapable of managing his own affairs (no credit arrangements, you see...), but also now an absconding debtor who had defrauded the taxpayer.

To an intuitive, intelligent human being, he was none of these things. But he was to a machine. As a result, every bank in the UK refused to open an account for the new company he'd set up. And the more he was refused their services, the more he was logged on databases as having been refused their services: a vicious, downward spiral of bad data feeding into more bad data, while denying human beings any opportunity to intervene. A nightmare of cyclical compliance, in fact.

Of course (you say), he could simply have contacted the credit reference agencies and paid them to correct his records. He tried to. But it's not that simple, because the onus remained on him to prove the system wrong. His problems persisted for months and they linger to this day, four years later. Why? Because errors spread like a virus in a networked system.

Healing 'patient zero' in a case of bad data and inept algorithms doesn't reverse an epidemic, because with networked, digital systems you have to cure an ever-multiplying number of patient zeroes. What's more, you could argue that any credit reference system that charges human beings money to correct its errors has zero incentive to be accurate.
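As a sketch of why curing the source doesn't help – again entirely hypothetical, with invented system names standing in for whatever the real agencies run – consider what happens when one flawed judgement is syndicated to downstream databases that each keep their own copy:

```python
# Hedged illustration of error propagation in networked compliance systems.
# The systems and their sync behaviour are assumptions, not any real agency's design.

council = {"tax_evader": True}  # the original flawed judgement ('patient zero')

downstream = {
    "credit_agency_a": {},
    "credit_agency_b": {},
    "bank_compliance": {},
}

# Step 1: the error syndicates - every downstream system takes its own copy.
for system in downstream.values():
    system.update(council)

# Step 2: 'patient zero' is corrected at the source...
council["tax_evader"] = False

# ...but every replica still carries the infection, and acts on it.
for name, records in downstream.items():
    print(name, "->", "REFUSE" if records.get("tax_evader") else "ok")

# Worse, each refusal generates new bad data derived from the old bad data,
# which is logged and syndicated in turn - the vicious spiral described above.
for records in downstream.values():
    records["refused_services"] = True
```

Unless every copy – and every record derived from those copies – is corrected too, the original error keeps resurfacing; and it is the individual, not the system, who bears the cost of chasing each one down.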

The credit reference system is a deeply flawed, self-serving monster, one born of a truly insane idea that the more in debt you are, the better you are at managing your finances.

At that time, this comfortably-off, successful, debt-free man of good standing, who had equity, assets, and a bulging order book, was even refused store cards and credit arrangements to buy a new PC (“Take out more credit!” he was advised – but of course he couldn't). And every time services were denied to him, this was logged in another database – that 'virus' of errors again. All of this happened within weeks.

In the end, he was forced to wind up his company, which could have been an asset to the community.

Having no bank account meant that he couldn't trade: it was as simple as that. Winding up the company lost him money, and he was forced to make other decisions that also lost him money. Today, he's nearly penniless, and completely reliant on the debit card that goes with his 20-year-old personal bank account.

At the time his troubles started, one of the high street bank managers he spoke to had the good grace to explain the problem, even as my friend was showing him proof after proof of his then-excellent financial status. The manager said:

If it was up to me, sir, I'd say yes to opening the new account. But the computer won't let me.

Like every employee in every bank in the Western world today, that manager's ability to act outside of the system's algorithms and machine-based rules was no better than that of a robot. Granted, the manager had emotions, empathy, intelligence, and all the other traits that make us human, but he was no more capable than a robot of acting independently – even if reason and common sense told him to. He had to comply.

As clear a statement of human irrelevance to the Banking sector as you could wish to hear.

In the final part of this special report tomorrow, I'll take a long hard look at machine-based decision making and the Snooper's Charter.

 
