UN report - our algorithmic world is creating a social welfare dystopia

By Jerry Bowles, October 22, 2019
Summary:
Tech marketers dish out plenty of excitement about intelligent software and automating the inefficient. But a new UN report raises concerning questions about our algorithmic futures.


Millions of the world’s poorest citizens depend on social welfare programs to survive. Governments are increasingly turning to IT for solutions. Not everybody thinks this is a good idea.

Philip Alston, the United Nations Special Rapporteur on extreme poverty and human rights, presented a new report to the UN General Assembly on Friday (Oct. 18) that claims the combination of new digital technologies and an amoral Big Tech industry is dramatically changing the interactions between governments and the world’s poorest and most vulnerable. And not for the better.

In what the scathing report calls the rise of the “digital welfare state,” billions of dollars of public money are now being invested in automated systems powered by artificial intelligence (AI), predictive algorithms, risk modeling and biometrics that are radically changing the nature of social protection. Alston writes:

The digitization of welfare systems has very often been used to promote deep reductions in the overall welfare budget, a narrowing of the beneficiary pool, the elimination of some services, the introduction of demanding and intrusive forms of conditionality, the pursuit of behavioural modification goals, the imposition of stronger sanctions regimes, and a complete reversal of the traditional notion that the state should be accountable to the individual.

And:

In these states, systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish… As humankind moves, perhaps inexorably, towards the digital welfare future it needs to alter course significantly and rapidly to avoid stumbling zombie-like into a digital welfare dystopia.

Alston argues that the digital welfare state is usually marketed as an altruistic and noble enterprise designed to ensure that all citizens benefit from new technologies and more efficient government, and enjoy higher levels of well-being. For example, the United Kingdom’s digital strategy proclaims that it will “transform the relationship between citizens and the state”, thus “putting more power in the hands of citizens and being more responsive to their needs.” But the UN report notes that the UK is:

…an example of a wealthy country in which, even in 2019, 11.9 million people (22% of the population) do not have the ‘essential digital skills’ needed for day-to-day life. An additional 19% cannot perform fundamental tasks such as turning on a device or opening an app. In addition, 4.1 million adults (8%) are offline because of fears that the internet is an insecure environment, and proportionately almost half of those are from a low income household and almost half are under sixty years of age.

Alston is particularly alarmed by governments that justify the introduction of expensive and complex biometric digital identity card systems on the grounds that they improve welfare services and reduce fraud. Too often, the real motives are to slash welfare spending, set up intrusive government surveillance systems and generate profits for private companies. He writes:

The process is commonly referred to as ‘digital transformation’ by governments and the tech consultancies that advise them, but this somewhat neutral term should not be permitted to conceal the revolutionary, politically-driven, character of many such innovations.

A case in point is a new digital tool called System Risk Indication (SyRI), which the government of the Netherlands is using for the stated purpose of detecting welfare fraud by collecting massive amounts of data about its citizens and algorithmically creating “risk models” to determine who is most likely to commit benefit fraud.
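
SyRI’s inner workings have never been made public, so any concrete illustration is necessarily speculative. The sketch below is a minimal, hypothetical example of the general technique the report describes: a handful of weighted features rolled into a risk score, with a threshold that flags a citizen for investigation. Every feature name, weight and threshold here is invented for illustration; note how several of the inputs are really proxies for poverty itself.

    # Illustrative sketch only -- SyRI's actual model has not been published.
    # All feature names, weights and the threshold below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Beneficiary:
        years_on_benefits: float
        address_changes: int   # moves in the last five years
        has_debts: bool        # any registered debt
        low_income_area: bool  # lives in a neighborhood labeled "high risk"

    # Hypothetical weights; several inputs proxy for poverty itself,
    # so the score ends up penalizing being poor, not committing fraud.
    WEIGHTS = {
        "years_on_benefits": 0.10,
        "address_changes":   0.15,
        "has_debts":         0.30,
        "low_income_area":   0.25,
    }
    THRESHOLD = 0.6  # above this, the person is flagged for investigation

    def risk_score(b: Beneficiary) -> float:
        return (
            WEIGHTS["years_on_benefits"] * min(b.years_on_benefits / 10, 1.0)
            + WEIGHTS["address_changes"] * min(b.address_changes / 5, 1.0)
            + WEIGHTS["has_debts"] * b.has_debts
            + WEIGHTS["low_income_area"] * b.low_income_area
        )

    def flag_for_investigation(b: Beneficiary) -> bool:
        # No fraud has occurred; the flag rests entirely on correlations.
        return risk_score(b) > THRESHOLD

    print(flag_for_investigation(Beneficiary(
        years_on_benefits=8, address_changes=4,
        has_debts=True, low_income_area=True)))  # True

Nothing in such a pipeline requires evidence of wrongdoing; the cutoff converts statistical correlation directly into administrative suspicion.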

The result of such risk profiling, Alston writes, is that an individual is presumed to be a future fraudster, despite never having committed fraud, simply because their risk profile suggests they will. That presumption of guilt, attached to someone who has committed no crime, results in closer monitoring of their behavior, bypassing the due process that would be granted to anyone else receiving assistance. Writes Alston:

That level of scrutiny may turn into an obstacle for those most in need to even get the help that is available to them. When they know that they will be questioned every step of the way, a presumption of criminality will be applied to their every action, they may just choose not to participate in the system at all. On paper, that means that SyRI worked—it prevented fraud from occurring. In practice, it means that a vulnerable member of society in need of help has been pushed away and likely won’t get access to the services they need.

Alston reserves his most scorching criticism for Big Tech companies who, he says, are putting profits above human rights. He writes:

A handful of powerful executives are replacing governments and legislators in determining the directions in which societies will move and the values and assumptions which will drive those developments…

Most governments have stopped short of requiring Big Tech companies to abide by human rights standards, and because the companies themselves have steadfastly resisted any such efforts, the companies often operate in a virtually human rights free-zone.

The UN report was drawn from Alston’s country visits to the UK, US and elsewhere, as well as 60 submissions from 34 countries.

My take

Alston’s findings are not without merit. Governments across the world are spending enormous amounts of money on automating their social welfare delivery systems and, in the process, reducing real people to data points and replacing human judgment with machines that have no compassion, empathy or understanding of circumstances that fall outside the mean.

A similar revolution is now taking place in criminal justice, where AI is often the determining factor in who gets bail and who stays in jail. But, as we are learning every day, algorithms trained on biased and incomplete data produce biased and incomplete results. In too many cases, we simply don’t know enough about what these systems get wrong to justify eliminating human oversight.
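
That feedback loop is easy to demonstrate with a toy simulation. In the sketch below (all numbers are invented assumptions, not data from any real bail system), two groups offend at exactly the same rate, but one has historically been policed twice as heavily, so its offenses are recorded twice as often. A model that simply learns from the recorded data concludes the over-policed group is twice as risky.

    # Toy simulation of biased training data; not modeled on any real system.
    import random

    random.seed(42)
    TRUE_OFFENSE_RATE = 0.10                 # identical for both groups
    DETECTION_RATE = {"A": 0.90, "B": 0.45}  # group A is policed twice as heavily

    def recorded_offense_rate(group: str, n: int = 100_000) -> float:
        """The rate visible in the historical records a model would train on."""
        recorded = 0
        for _ in range(n):
            offended = random.random() < TRUE_OFFENSE_RATE
            if offended and random.random() < DETECTION_RATE[group]:
                recorded += 1
        return recorded / n

    learned_risk = {g: recorded_offense_rate(g) for g in ("A", "B")}
    print(learned_risk)
    # Roughly {'A': 0.09, 'B': 0.045}: the records say group A is twice as
    # "risky" even though the true offense rates are identical by construction.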

There is also the risk of automation bias: the assumption that because we spent millions of dollars on this stuff, it must be right, even in cases where it is clearly wrong.

And, of course, Big Tech marketers make big promises about how new technologies will speed up processes, increase efficiency and transparency, reduce waste, save money for taxpayers, take human fallibility and prejudice out of the equation, and ensure that limited resources reach those who need them most. Sometimes, perhaps, that even happens. Most real-life implementations are more complicated.

Is that likely to change? Not really, or at least not voluntarily. High tech executives aren’t going to wake up tomorrow and say, “You know, automation really is killing jobs for millions and making life harder for people who are poor because they don’t have jobs.”

But a reckoning is coming. One positive outcome of the recent high tech fall from grace is that regulators and the public at large are more skeptical about both the products and the promises of the executives who promote them. That is a good thing.