AI and government - cautionary notes for policy-makers from the UK's Chief Scientific Adviser.

By Stuart Lauchlan, November 10, 2016
Summary:
AI and government - some benefits and some cautionary notes for policy-makers from the UK's Chief Scientific Adviser.


It was only last month that MPs sat down in the House of Commons for a debate on robotics and AI, a conversation that was worthy, but which never really escaped the limitations of a non-tech-savvy group of politicos cracking ‘jokes’ about cheesy sci-fi movies.

For a more measured response to the potential of AI and how policy-making might be shaped around it, a new report from the UK’s Chief Scientific Adviser Sir Mark Walport - Artificial intelligence: opportunities and implications for the future of decision making - makes for more in-depth and considered reading.

The report considers 3 questions:

  • What is Artificial Intelligence and how is it being used?
  • What benefits is it likely to bring for society and for government?
  • How do we manage any ethical and legal risks?

Walport notes:

Artificial Intelligence is not a distinct technology. It depends for its power on a number of prerequisites: computing power, bandwidth, and large-scale data sets, all of which are elements of ‘big data’, the potential of which will only be realised using artificial intelligence. If data is the fuel, artificial intelligence is the engine of the digital revolution.

On the plus side, the report argues that AI technologies have enormous potential for increasing productivity and streamlining how people and organisations interact with large sets of data:

Firms like Ocado and Amazon are making use of artificial intelligence to optimise their storage and distribution networks, planning the most efficient routes for delivery and making best use of their warehousing capacity. Artificial intelligence can help firms do familiar tasks in more efficient ways. Importantly, it can also enable entirely new business models and new approaches to old problems. For example, in healthcare, data from smart phones and fitness trackers that is analysed using new machine learning techniques can improve management of chronic conditions as well as predicting and preventing acute episodes of illness.

The use cases cited by Walport make the important point that AI can do the same things that human beings do, but at a volume and complexity beyond human capacity. That said, the human factor will remain essential:

Artificial Intelligence is not a replacement, or substitute for human intelligence. It is an entirely different way of reaching conclusions. Artificial Intelligence can complement or exceed our own abilities: it can work alongside us, and even teach us, as shown by Lee Sedol’s unbroken string of victories since playing AlphaGo. This offers new opportunities for creativity and innovation. Perhaps the real productivity gain from artificial intelligence will be in showing us new ways to think.

AI and government

When it comes to government use of AI, the report fortunately steers clear of the usual Big Brother scaremongering and cites positive potential use cases, such as:

  • Make existing services – such as health, social care, emergency services – more efficient by anticipating demand and tailoring services more exactly, enabling resources to be deployed to greatest effect.
  • Make it easier for officials to use more data to inform decisions (through quickly accessing relevant information) and to reduce fraud and error.
  • Make decisions more transparent (perhaps through capturing digital records of the process behind them, or by visualising the data that underpins a decision).
  • Help departments better understand the groups they serve, in order to be sure that the right support and opportunity is offered to everyone.

Again, when considering the use of AI as an adviser to policy-makers, there is an emphasis on the importance of human intervention:

It is likely that many types of government decisions will be deemed unsuitable to be handed over entirely to Artificial Intelligence systems. There will always be a ‘human in the loop’. This person's role, however, is not straightforward. If they never question the advice of the machine, the decision has de facto become automatic and they offer no oversight. If they question the advice they receive, however, they may be thought reckless, more so if events show their decision to be poor. As with any adviser, the influence of these systems on decision-makers will be questioned, and departments will need to be transparent about the role played by artificial intelligence in their decisions.

It will also be essential for government to operate within the parameters of legal data management and privacy frameworks, warns Walport:

These are an essential ingredient in maintaining public trust in government’s ability to manage data safely. Teams making use of artificial intelligence approaches need to understand how these existing frameworks apply in this context. For example, if deep learning is used to infer personal details that were not intentionally shared, it may not be clear whether consent has been obtained.

These current protections are effective and well-established. However, understanding the opportunities and risks associated with more advanced Artificial Intelligence will only be possible through trials and experimentation. For government analysts to be able to explore cutting-edge techniques, it may be desirable to establish sandbox areas where the potential of this technology can be investigated in a safe and controlled environment.

Challenges and trust

The report flags up specific AI-related challenges that government will have to get to grips with. These fall into two broad areas:

  • Understanding the possible impacts on individual freedoms, and on concepts such as privacy and consent, arising from the combination of machine learning approaches with the creation of ever-increasing amounts of personal data.
  • Adapting concepts and mechanisms of accountability for decisions made by Artificial Intelligence.

Walport warns that there will be a need for new systems of responsibility and accountability among policy-makers:

Current approaches to liability and negligence are largely untested in this area. Asking, for example, whether an algorithm acted in the same way as a reasonable human professional would have done in the same circumstance assumes that this is an appropriate comparison to make. The algorithm may have been modelled on existing professional practice, so might meet this test by default. In some cases it may not be possible to tell that something has gone wrong, making it difficult for organisations to demonstrate they are not acting negligently or for individuals to seek redress. As the courts build experience in addressing these questions, a body of case law will develop.

Despite current uncertainty over the nature of responsibility for choices informed by Artificial Intelligence, there will need to be clear lines of accountability. It may be thought necessary for a chief executive or senior stakeholder to be held ultimately accountable for the decisions made by algorithms. Without a measure of this sort it is unlikely that trust in the use of Artificial Intelligence could be maintained. Doing this may encourage or indeed require the development of new forms of liability insurance as a necessary condition of using Artificial Intelligence – at least in sensitive domains.

Finally, there needs to be an open and transparent debate around the impact of AI technologies in order to win and maintain public trust. Without that trust, all else will fall by the wayside. Walport suggests that this debate needs to tackle:

  • How to treat different mistakes made through the use of Artificial Intelligence.
  • How best to understand probabilistic decision-making.
  • The extent to which we should trust decisions made without Artificial Intelligence, or against the advice of Artificial Intelligence systems.

Public trust can only be established if citizens see demonstrable benefits from AI as well as strong safeguards in place. At a minimum, says Walport, this means:

  • Correctly identifying any harmful impacts of Artificial Intelligence.
  • Formal structures and processes for citizen recourse that function as intended.
  • Appropriate means of redress.
  • Clear accountability.
  • Clearly communicating the substantial benefits for society offered by Artificial Intelligence.

My take

An excellent platform for debate from Sir Mark, one that may have originated in Whitehall, but which has application and meaning for governments around the world. For anyone with an interest in AI - citizen, policy-maker or technology vendor - this report is essential reading.