WEF 2017 - the changing nature of risk in a digital age

By Cath Everett, January 16, 2017
The World Economic Forum’s Global Risks Report 2017 indicates that emerging technologies such as artificial intelligence and robotics are already starting to have a disruptive effect on society. So what can be done to rein in their most damaging potential effects?

Depending on your point of view, it appears that technology can either be blamed, or thanked, for the emergence of such potentially disruptive events as the UK’s decision to go for Brexit and Donald Trump’s election as US president.

As the World Economic Forum’s newly published The Global Risks Report 2017 succinctly puts it:

Evidence suggests that technological change provides a better explanation than globalization for the industrial decline and deteriorating labour-market prospects that have catalysed anti-establishment voting in many of the world’s advanced economies.

And it is just these deteriorating labour market prospects, brought about by increased automation and an “unsettling” pace of technological change, which will pose the greatest global risks over the next 10 years. The impact of such change will be felt most in non-manufacturing sectors as “rapid advances in robotics, sensors and machine learning enable capital to replace labour in an expanding range of service sector jobs”.

In fact, robotics and artificial intelligence (AI) in all its forms are described as the emerging technology with the greatest potential for having a positive, and negative, impact over the coming decade. The 11 other technologies of the so-called “Fourth Industrial Revolution” include 3D printing and blockchain and distributed ledger - and they too offer a “large and growing” potential “to disrupt established business models”.

But estimates of likely job losses brought about by these technologies vary greatly. A frequently quoted 2013 study by Oxford University’s Oxford Martin School forecast that 47% of US jobs were vulnerable to automation, while a 2016 working paper from the Organisation for Economic Co-operation and Development (OECD) put the figure at more like 9% - although low-income workers are expected to be hit hardest whichever estimate you go with. Somewhat worryingly, though, the report continues:

Technology has always created jobs as well as destroying them, but there is evidence that the engine of technological job creation is sputtering. The Oxford Martin School estimates that only 0.5% of today’s US workforce is employed in sectors created since 2000, compared with approximately 8% in industries created during the 1980s.

Another statistic that does not bode well for the future is provided once again by the OECD. It indicates that up to 80% of the drop in the US workforce’s share of national income between 1990 and 2007 was due to the impact of technology as change shifted the “distribution of income from labour to capital” – and that was even before robotics and AI started making an impact.

Social disruption

But other colliding trends are likely to make workers not only poorer, but also less secure in their employment. While the Fourth Industrial Revolution is making it easier for people to work remotely from anywhere in the world, this scenario is also creating more global competition, which has the potential to drive wages down.

The growing development of the ‘gig economy’ spurred on by digitisation likewise means that “workers can expect more volatility in their earnings”, while having to do without the usual employment protections such as holiday and sick pay enjoyed by “standard” employees.

Instead, they will be expected to shoulder more personal financial risk - as, incidentally, was the case during the first Industrial Revolution. This means that emerging technology is “threatening to bring this evolution full circle”. But the report also warns of other gig economy implications that are only likely to compound this already challenging situation:

New employment models also hinder the collection of taxes from both employer and worker, reducing the amount governments have available to fund social protections.

In other words, employment is becoming less secure just as state-run social protection systems approach breaking point due to, among other factors, ageing populations in the developed world at least – and the situation is only likely to worsen as the ‘gig economy’ progressively takes hold. The report explains:

The underfunding of state systems is coinciding with the decline of employer-backed social protection schemes: this is happening while technological change means stable, long-term jobs are giving way to self-employment in the 'gig economy'.

This scenario is, unsurprisingly, creating a potentially heady cocktail. “With incomes pushed down and unemployment pushed up in affected sectors and geographical regions”, the report points out, the end result could be “disruptive social instability”.

In fact, it ranks ‘rising income and wealth disparity’ and the ‘increasing polarization of societies’ first and third respectively in the list of top five trends expected to determine global developments over the coming decade.

In a bid to mitigate the most damaging effects of technological change, however, the report advocates “careful governance”. It says:

Careful governance can guide the distribution of benefits and impact on global risks because the evolution of new technologies will be heavily influenced by the social norms, corporate policies, industry standards and regulatory principles being debated and written today.

Effective governance

But the report acknowledges that the question of how to govern emerging technologies is complex, particularly as the debate tends to become polarised between positive and negative impacts. There is also the issue that imposing overly strict regulation can delay or prevent potential benefits from coming to fruition, as can continued regulatory uncertainty.

On the other hand, there is a definite need to control and limit potentially harmful effects. The report says:

Ideally, governance regimes should be stable, predictable and transparent enough to build confidence among investors, companies and scientists, and should generate a sufficient level of trust and awareness among the general public to enable users to evaluate the significance of early reports of negative consequences… [they] also need to be agile and adaptive enough to remain relevant in the face of rapid changes in technologies and how they are used.

At the moment, however, governance of emerging technologies is patchy: some are heavily regulated, while others are barely regulated at all because they do not fall within the remit of existing regulatory bodies. For instance, AI and robotics, which stand out alongside biotech as technologies requiring more effective governance than most, are only lightly regulated in most places around the world – despite the potential risks associated with allowing greater decision-making powers to move from humans to AI applications. The report explains:

So far, AI development has occurred in the absence of almost any regulatory environment. As AI systems inhabit more technologies in daily life, calls for regulatory guidelines will increase. But can AI systems be sufficiently governed? Such governance would require multiple layers that include ethical standards, normative expectations of AI applications, implementation scenarios, and assessments of responsibility and accountability for actions taken by or on behalf of an autonomous AI system.

A big problem here is that most policy makers do not have specialised knowledge of the field. Another is that future developments may involve technologies “that are not an issue on their own but that collectively present emergent properties that require attention”. As a result, the report continues:

It would be difficult to regulate such things before they happen, and any unforeseeable consequences or control issues may be beyond governance once they occur.

One suggestion, however, is to devise broad-based policy guidance frameworks that cover general principles rather than specific AI applications – for example, the US Department of Transportation issued a 116-page policy guide for developers of automated vehicles covering safety, control and testing. But whatever approach is taken, it is clear that:

We face a pressing governance challenge if we are to construct the rules, norms, standards, incentives, institutions and other mechanisms that are needed to shape the development and deployment of these technologies.

My take

Challenges to social cohesion and questions over the legitimacy of established policy makers and institutions are coinciding with the emergence of technologies that are starting to have a disruptive impact at the economic, employment and societal level. 

The most potentially disruptive of these technologies, AI and robotics, are forecast to have the greatest potential for both positive and negative consequences over the coming decade. But the fact that society is failing to keep up with such change means there is a pressing need for better governance. Period.