WEF 2020 - AI, responsibility and some notable ambiguities

Chris Middleton, January 24, 2020
A Davos discussion about what the responsible governance of AI actually means throws up as many questions as answers - and a few evasions.


The title of one panel event at the World Economic Forum (WEF) gathering in Davos this week – ‘How to Implement Responsible AI’ – was notable for two things: firstly, for omitting the word ‘ethical’ when ethics has long been the focus of such discussions; and secondly, for being ambiguous. Was the WEF suggesting that AI should itself be held responsible for, say, bias or lack of transparency in its decisions? Or did the organisers mean ‘How to Implement AI Responsibly’?

It’s a critical distinction: does the WEF believe the onus rests on the technology itself, or on its human designers, managers, and minders? After all, when autonomous or AI-powered assistance systems have failed in recent years – such as when an Uber test car mowed down a homeless woman in 2018 or a Tesla running under software control slammed its owner into a concrete barrier – those companies seemed quick to point the finger at fallible humans. Neither was keen to admit to technology failures or misleading customers with product names such as ‘Autopilot’.

Such accidents were apparently about clever tech in the hands of stupid humans – as opposed to the alternative interpretation: badly designed tech in the hands of people who trusted it.

The question from Amanda Russo, the World Economic Forum’s Head of Media Content, chairing the panel, was: 

How do we engender trust and contribute to a global discourse about how artificial intelligence can be responsible [sic] and effectively implemented?

Step into the future

Step one in engendering trust is for companies to be much more ready to admit their failures and question their own products, values, and actions. Because if they don’t, their employees might – as Google discovered in 2018 when two staff rebellions (on a drone image analysis contract with the Pentagon and a censored search facility for China) forced it to publish an ethical AI statement.

And providers’ actions in markets surely have a role to play too, unless they believe trust in technology can be engendered by antitrust behaviour.

So why did the panel omit the word ‘ethical’ from the title of this discussion? Because the WEF believes we need to move past a silo approach that is focused on either ethics or human rights specifically. At least, that’s according to a white paper it published last year on the Responsible Use of Technology, the apparent blueprint for the WEF’s attitude to AI and other Industry 4.0 innovations.

That paper says:

For some, the ethics-based approach lacks a foundation in government and company accountability, and is viewed as the ‘easy’ or ‘soft’ option for companies. For others, the human rights-based approach is limited in its ability to incorporate the very different notions of fairness, distributive justice, or social cohesion that can exist in different societies and local cultural contexts.

Think about the implications of that statement. As Western citizens we like to believe we have a moral monopoly because we don’t cut off atheists’ heads or string men up in public for being gay. But the notion that companies founded by Californian hippies and geeks must have the public’s best interests at heart is a dangerous fallacy.

Just ask the state’s own lawmakers: the home of Silicon Valley implemented the California Consumer Privacy Act (CCPA) this year to protect the public from the likes of Facebook and Google, which opposed the legislation.

Or let’s put it another way. US tech companies would like to sell product to countries like China and Saudi Arabia too, so how can they incorporate such countries’ values, shift their bits to a billion or two extra customers, but still tut disapprovingly while they’re at it? Kerching! But also *sad face* at any contravention of human rights.

So is it possible to build trust in AI per se, if Western companies are going to set aside their principles in the quest for a quick buck? Sorry... I mean adapt sensitively to local cultural differences?

Singapore sling

The WEF sought to answer this question by convening a panel made up of: the government of Singapore, which ranks 151st out of 180 countries on the Reporters Without Borders’ Press Freedom Index; Microsoft, which has just admitted to exposing 250 million customer records and seems to have a free pass to represent ethics at Davos; and a fintech company called Suade, which is using data to help banks avert the next financial crisis (good luck!).

According to the WEF, the answer to these dilemmas is to build a model framework that balances ethical and human rights considerations with good governance and, by implication, with local mores and expectations. Such an initiative was announced by Singapore at the WEF’s 2019 annual meeting; this year’s panel focused on updating delegates on its progress.

S Iswaran is Singapore’s Minister for Communications and Information and Minister-in-Charge of Trade Relations at the Ministry of Communications and Information (MCI). He told the audience that his government is taking new steps towards better AI governance thanks to the framework it helped design:

Collectively, what we are seeking to do is build a trusted AI environment that will guide organisations to deploy AI responsibly. These efforts, they really build on what we started and shared at Davos last year. That’s when Singapore launched the Model AI Governance Framework to guide businesses to deploy AI at scale in a responsible manner. This framework translates broad ethical principles into pragmatic measures that businesses can adopt voluntarily.

This year, Singapore has introduced three further elements to its roadmap for organisations seeking to govern the technology. The first is a self-assessment guide, co-developed by the government, the WEF, and over 60 organisations, including panel members Microsoft and Suade, plus KPMG, Google, MasterCard, Salesforce, and Visa. That's a lot of big corporate muscle.

This is designed to help organisations align their practices with the Model Framework and assist with peer review and assessment. Also published at the event were a second edition of the framework itself, which focuses on robustness and reproducibility, plus a compendium of technology use cases. According to Iswaran, “15 organizations globally” have already aligned themselves with the framework.

Centring the issues

Kay Firth-Butterfield is Head of Artificial Intelligence and Machine Learning at the WEF’s Center for the Fourth Industrial Revolution. Her job is creating projects that fill technology governance gaps within organisations – such as the AI programme with Singapore. She said:

The way we do these projects is we have governments, or business, or academics, or civil society, come to us and raise a governance issue, or maybe we look for that governance gap. And in this case it was a marriage in that we had seen this governance gap and Singapore was interested in working with us. We then build a community of multiple stakeholders and work together to create a robust outcome, which we believe we have done with Singapore in this model framework.

The WEF has also just released a toolkit for boards of directors to understand the issues that surround AI governance.

For Diana Paredes, co-founder and CEO of financial regtech provider Suade – another of Singapore’s partners on the project – data quality is part of the challenge of good AI governance. It is critical to counteract bias at the earliest stages, not try to remove it later, and essential to have a human in the loop of all AI decision-making. She said:

Right governance in many ways also means taking the right responsibility. [...] Do you actually have a human in the loop, out of the loop, over the loop. [...] A framework that works for the industry has to be flexible and really allow the right level of detail and scenarios that you can encounter.

She added:

We always speak about ethics around AI and how to do it properly. But ethics also covers taking along in the journey a certain amount of layman language and explainability to consumers to really understand what this AI is going to do in their lives. Addressing those issues in the right way fundamentally means that AI is going to be adopted at a much faster pace and embraced instead of resisted.

The ethics of plain speaking

Microsoft President Brad Smith also sees a need for speed. He commended what he sees as Singapore’s “guidepost” leadership on AI governance and that government’s commitment to move quickly and insert itself into the debate:

In some places around the world, people are starting to look at these issues and begin to appreciate their complexity and they feel they’re going to have to study them for many years before they offer even an initial framework of how they should be addressed. And the fact that the Singapore government last year took one step and this year took another I think is very much a reflection of the kinds of progress we need.

The question, of course, is who is “we”: the human race, or technology companies for whom long-winded academic debates about ethics and human rights are an obstacle to growth? For Microsoft, rushing headlong into the AI world is obviously right, but wouldn’t the human race appreciate a little more time and reflection?

In fairness, Smith said there’s no time like the present to take steps – as the WEF has – to help organisations govern AI responsibly, otherwise the technology may govern us irresponsibly:

There is no single answer for all time with technology that is this young. But we should not wait for the technology to mature before we start to put principles and ethics and even rules in place to govern AI.


My take

This WEF session was less an open, in-depth discussion and more an opportunity for panellists to read prepared statements on the Singapore project, aided by a media-friendly chair. This was a press event staged as a corporate affair, not an exploration of what constitutes ethical AI.

But elsewhere at Davos this year, the WEF has been pushing its Tech for Life blueprint for responsible technology adoption. For the WEF, technology must make a positive contribution to society, enhance the lives of its users, create opportunities for all, respect and enhance human rights, and be human-centred at all times. It also asks those in technology to create, use, and manage technology with purpose, relevance, respect, responsibility, and openness. It says:

Only by working toward these principles can we put trust back into technology and ensure it continues to be a positive force for good, and help organisations reconnect to their stakeholders. Indeed, aligning an organisation for profit and purpose through the Tech for Life principles helps it engage with a society increasingly demanding that technology works for, and with, it.

It is hard to argue with those principles. But the uncomfortable fact remains that in a global economy, there is no globally enforced interpretation of human rights regulations and no global ethical standard.

And by suggesting that organisations should start using words like ‘responsible’ instead of ‘ethical’ and consider local sensitivities, the WEF may inadvertently be giving Western companies a licence to flog technology to repressive regimes with a clear conscience.

The alternative, of course, is to try to export Californian values to the world. But even California doesn’t trust its companies to do that.
