UK government launches AI Sector Deal and new Office for AI this week

Chris Middleton, March 5, 2018
The UK government’s new ‘sector deal’ for AI is due to be launched today by Business Secretary Greg Clark. We take a look at what it entails.

The Sector Deal for Artificial Intelligence aims to take “immediate, tangible actions” to advance the UK’s ambitions in AI and the data-driven economy, in line with the new Industrial Strategy.

It builds on – and is an extension of – the 2017 review by Professor Dame Wendy Hall and Jérôme Pesenti, ‘Growing the Artificial Intelligence industry in the UK’, which was commissioned last spring and published in the autumn.

News of the imminent launch was leaked by Hall herself at a Westminster eForum seminar event last week in central London. ‘Artificial Intelligence and Robotics: Innovation, Funding, and Policy Priorities’ brought together a range of speakers from academia, business, and government to discuss the challenges facing the UK in these hyper-competitive sectors.

The event also revealed new details about a new organisation to manage national policy. The Office for AI will be jointly led by Gila Sacks, director of digital and tech policy at the Department for Culture, Media, and Sport (DCMS), and Dr Rannia Leontaridi, director of Business Growth at the Department for Business, Energy and Industrial Strategy (BEIS).

Leontaridi said at the event:

A significant new Whitehall organisation is the Office for AI, which Gila and I will jointly lead. It is designed to make sure that we are joining up, not only with central government, but also with the industry around us.

A budget of £20 million has been set aside to help and encourage government departments and the wider public sector to deploy AI, she said. This is because AI and the data-driven economy are “at the heart of” the UK’s redesigned industrial strategy:

Our ambition is to make the UK a global centre for AI. We are a world leader in this space. [...] We want to use AI, and lead AI in an ethical way. Fundamentally, we want to help people to develop the skills for understanding and using it.

DCMS’ Sacks added:

We need to support people to develop the skills and the capabilities that they’re going to need to thrive in a future labour market that is potentially transformed by these new technologies. We need to understand how the nature of work and the shape of the labour market will change, and what that will mean for the rights of individuals, the responsibilities of employers, and the role of the state.

We need to work with digital sectors of the economy as they explore what these technologies will mean for them and what good adoption looks like. It can’t just be ‘more, bigger, faster’, it must be adoption that drives economic and social benefit for the UK as a whole.

Sacks then talked about the regulatory environment, where there is a tension between the need for a light-touch system that fosters innovation, and caution about how these technologies could be abused.

We need to ensure that AI and data technologies are effectively and appropriately governed. Through the dynamic application of existing law, through the careful assessment of where new regulations should be put in place, and by the effective application of non-regulatory governance levers, we can ensure that our governance keeps pace with technological change, and that our governance is genuinely both safe and pro innovation. Because pro-innovation regulation is at the heart of our approach to AI.

We really believe that the right governance, the right regulation, can make the UK the place to innovate by giving confidence to citizens and certainty and clarity to investors and innovators.

Making AI ethical

Sacks explained that another organisation, the government’s new Centre for Data Ethics and Innovation, will be an “advisory body to advise government and regulators on how to ensure that regulation, regulatory practice, and non-regulatory measures are best used to support the ethical and innovative use of data”.

Two other factors will be essential if the UK is to capitalise on its extraordinary history in AI, data science, and computing: public trust, and across-the-board collaboration, she added.

We need to build public confidence and take the public with us. We need to engage people in an adult conversation about what these technologies can do for us. We need to build trust, not just to convince people that everything is going to be OK, but also to remind people that these are tools.

We want to make sure that technology stops feeling like something that is happening to us and starts feeding back into the realm of democratic debate and decision-making. And if we’re going to get this right, we’re going to need to collaborate.

As one of the prime movers of the current policy debate, Dame Wendy Hall shared her own views about the challenges facing the UK – and was more than a little indiscreet on the subject.

Academics are being poached like billy-o if they know about AI. It’s a supply and demand world. We need to think about retraining. The Alan Turing Institute will be developing a programme of fellowships, which will be funded through the money that’s in the budget to try and attract the best.

I don’t know how we’re going to deal with the salary issue, but that’s another problem altogether. Because if you go to Australia, or Canada, or the States, professors there are already being paid more than the Prime Minister. So how do we attract them to come here? That’s an issue we need to sort out.

She was equally indiscreet about the AI review itself, and about her former partner in the programme:

Jérôme Pesenti at the time was CEO at Benevolent AI. But he’s gone off to be head of AI policy at a small company called Facebook. Which is ironic in terms of what we were trying to do with the review, which was all about job creation and economic growth and trying to keep that growth within the country! [Laughs]. Erm, so... I won’t say any more.

Then she did exactly that:

He was in New York the whole time we were doing the review, so we did it all by WhatsApp.

The key thing, when we went to Downing Street on 1 March last year, was that they asked us to do this review in about four months. We had to deliver the first set of draft recommendations in less than two months – draft recommendations by the end of April – and we honed and refined it from there. There was no time for deep consultation or to ask people for written evidence.

Despite that, Hall and Pesenti’s work is now at the heart of both government AI policy and the organisations governing it – for the foreseeable future, at least. The challenge is that recent government tech policy has been a curious mix of bright ideas and sudden policy reversals once the slow-moving Civil Service bureaucracy takes hold – which often seems to exist to prevent ministers from doing anything that might scare the horses.
