Memo to Musk - here's how to build a benevolent bot

By Chris Middleton, September 24, 2017
Elon Musk warns of killer robots in your neighbourhood, but the RSA and others cut through the noise by proposing some realistic solutions.

While tabloids, human rights organisations, and tech visionaries such as Elon Musk have all warned of the rise of killer robots and malignant machine intelligence, what can we do to prevent it from happening? How can we make way for a more benevolent machine?

The first thing the tech sector must do is shift its focus from the apocalyptic to the mundane. While fears about out-of-control AIs or autonomous weapons are justifiable – as the military, intelligence agencies, and police circle the technologies like drones – many of the societal risks may be subtler and more insidious, if organisations rush in tactically for short-term gain.

The RSA - The Royal Society for the Encouragement of Arts, Manufactures and Commerce - has published an 83-page report, The Age of Automation, which examines the likely socio-economic impact of robotics, AI, and autonomous systems – both good and bad. It suggests a range of scenarios that industries and governments must work together to avoid.

Narrow, tactical implementations of AI could lead to an “entrenchment of demographic biases”, says the RSA, while amplifying workplace discrimination and blocking people from employment based on their age, ethnicity, or gender:

Equipped with AI systems, organisations will have greater precision in predicting people’s behaviours and the risks they face. This could lead to certain groups being denied access to goods, services [such as insurance], and employment opportunities.

In some cases, these impacts may be covert and deliberate, and in others rooted in flawed or partial training data – information gathered from human society that has long disadvantaged women or ethnic minorities, for example. In the UK, just 17% of STEM positions are occupied by women, which in itself has a negative impact on AI and robotics development.
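The mechanism the RSA describes is simple to demonstrate. In this minimal sketch (all figures are hypothetical, not drawn from the report), a naive model "trained" on historically skewed decisions does nothing more than reproduce the disparity it was fed:

```python
# Illustrative sketch with invented data: a model trained on historically
# biased decisions inherits and perpetuates the bias.
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired)
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60

# "Training": learn each group's historical hire rate.
outcomes = defaultdict(list)
for group, hired in history:
    outcomes[group].append(hired)
model = {g: sum(v) / len(v) for g, v in outcomes.items()}

# "Prediction": equally qualified new candidates are scored by group alone,
# so group B's historical disadvantage becomes a future one.
print(model["A"])  # 0.8
print(model["B"])  # 0.4
```

No malicious intent is needed: the skew in the training data alone is enough to deny one group access to the same opportunities.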

So, diversity needs to increase, says the RSA:

When machines are only built by a small group in society, they will ultimately only tackle the problems of a small group in society. Tech companies should redouble their efforts to recruit a more diverse cohort of programmers and managers, for example by partnering with groups like Code First: Girls and InterTech LGBT.

The big picture

So a holistic, sustainable outlook is essential if everyone is to benefit from a technology set that has the potential to be transformative, beneficial, and complementary to human skills. As suggested in recent weeks, the UN’s Sustainable Development Goals could form the basis of a new regulatory framework.

But what about the details? The RSA also makes a number of recommendations on how the tech industry as a whole can cooperate with governments to ensure that the impacts are positive.

First, the industry should develop “benevolent machines”, says the RSA, with programmers, tech companies, and investors “steered towards developing benign forms of technology”. That’s easy to say, but what does it mean? The RSA explains:

Society can and should shape the development of machines. Progressive elements of the tech community should take a lead in drafting and signing up to ethical frameworks that would steer programmer behaviour, as the IEEE has done in the US.

In 2016, the IEEE Standards Association published its own report, Ethically Aligned Design: A Vision for Prioritising Human Wellbeing with Artificial Intelligence and Autonomous Systems. There have been other recent examples of the sector stepping up to the plate. Google, Facebook, Apple, Microsoft, Amazon, IBM, and DeepMind are founder members of the Partnership on AI, a pressure group that commits them to a voluntary ethical code.

More, the RSA suggests that ethics should be a compulsory module in computer science courses, with developers being asked to sign an ethical pledge, akin to doctors’ Hippocratic Oath.

Show me the money

But there’s another dimension to the ethical development debate - who pays the bills?

Worldwide investment in robotics and AI may be significant, as countries race to stockpile IP, yet much of the money will come from venture capital or private equity funds, which may wish to direct the technologies to hit narrow commercial targets, suggests the report. Mobilising the social investment community to sponsor benevolent applications and use cases would be one way to counter this:

Philanthropic foundations and socially conscious investors also have a role to play, by funding technologies that have more benign effects on workers.

The RSA says governments should also plough more public funds into robotics and AI, in order to influence their development along socially advantageous lines – as Japan has, with its £161 billion investment in building a “super-smart society”. In AI specifically, many analysts see a two-horse race emerging between the US’ Silicon Valley powerhouses and China, with the latter having a major advantage. In global terms, though, one disadvantage is the extent to which all that Chinese data may be walled off from the West.

In the UK, the RSA urges the government to significantly increase its spending on robotics and AI, and says:

Part of this funding should be used to launch a new mission that rewards researchers developing machines that boost the quality of work, for example ‘co-bots’ that augment human labour. The UK government should look to partner with like-minded countries on such an initiative.

The UK should also establish a national centre for AI and Robotics, says the RSA. This new organisation would be tasked with diffusing the technologies throughout the economy, coordinating a “national mission to use AI and robotics for the advancement of good work”.

However, the RSA has left a nasty sting in the tail for any readers in Whitehall: up to 80% of the UK’s existing investment in robotics and autonomous systems (RAS) comes from the EU, according to the RSA, funds which must now be under threat.

The industry’s to do list

The report makes further recommendations that can be applied worldwide. Working in partnership with industry, it says, governments should:

  • Future-proof the workforce. Educators must prepare future generations with the skills and competencies that will allow them to thrive in an automated economy.
  • Encourage organisations to co-create automation strategies with their workforce. Professors Leslie Willcocks (LSE) and Mary Lacity (University of Missouri-St. Louis) have suggested that technology is more likely to be integrated in organisations when the C-Suite is actively involved in spelling out the benefits, in discussion with staff.
  • Create a modern social contract. Our tax and welfare systems must evolve so that those who reap the most rewards from automation support those who lose the most, says the RSA. This was a view echoed by techUK in its manifesto for digital change this year, which campaigned for the creation of a new industrial strategy.

The RSA suggests that “flexicurity” should be embedded in this new world of work, via a universal basic income and by championing cooperative schemes in which more and more workers become owners of the business. More, the tax burden should be shifted away from labour and towards capital, it says.

My take

These are all good, utilitarian, even utopian ideas, and some political commentators may regard them as a socialist manifesto. But arguably, these are simply the kinds of principles that are implicit in the flat, networked, collaborative processes that the technologies themselves enable.

Attempting to impose traditional, top-down, hierarchical governance on peer-to-peer processes is where bad policy decisions are made and societies turn against themselves. The networked world favours the many, not the few, and companies such as the self-organising Satalia – in which influence coalesces around skill set – have proven that new organisational structures naturally emerge from peer-to-peer technologies and open data.

But the challenge remains that some governments are still applying 19th-century industrial thinking to the new world of work. They seem to believe that AI, robotics, automation, the IoT, cloud technologies, and data analytics are about maximising benefits to the few via slashed costs and improved industrial efficiency – coupled with trickle-down philanthropy from gentlemen in stovepipe hats, perhaps. (See the Reform report on public sector automation earlier this year for some stark examples of this type of thinking.)

But industry is no longer run by the likes of Isambard Kingdom Brunel, and it is time that some governments’ social and economic policies finally woke up to that fact. If people like him still exist, they are headquartered in Bermuda, or – like Elon Musk – are warning of Armageddon even as they build the road towards it.

This deep-seated misunderstanding of robotics and AI in government circles, along with a host of other technologies, is why some countries are investing too little, too late, in the wrong areas, and for all the wrong reasons.

But at present, there’s little sign of that changing as we doff our caps, knuckle down, and hope for the best. Ultimately, benevolent machines can only emerge from benevolent societies. We will get whatever robots we deserve.