Is 'do as I say, not as I do' official UK government policy for AI as well as everything else?

By Chris Middleton, September 15, 2020
Hammering a tough nut - Brexit Britain’s attitude to Artificial Intelligence, in the wake of the National Data Strategy.


The UK’s National Data Strategy was finally published last week. Just a few months earlier, a government spokeswoman admitted at a conference that Whitehall had set out to discover what the strategy was for after deciding that it needed one - aka “We’re going to do this.” Why? “No idea.”

The strategy was announced by Digital Secretary Oliver Dowden from an Ocado fulfilment centre. This was yet another example of the bizarre mix of upmarket and crass that typifies the current administration. What says ‘transforming society for sustainable benefit’ better than an automated food warehouse near Bexley? It emerged in the wake of news that control over national data was reverting to the Cabinet Office, which will doubtless be free to continue awarding contracts as it sees fit.

So what of the UK’s Artificial Intelligence strategy? Logically, this should follow from the data that feeds it, despite the Cabinet’s peerless work of late in undermining public confidence in algorithms. To its credit, however, the government has backed its AI vision with a Sector Deal, the promise of private/public funding of £1 billion, institutes named after computing and data science luminaries Ada Lovelace and Alan Turing, and the formation of the Office for AI, which sits across BEIS and DCMS. 

But much of that good (if organisationally complex) work predates today’s ministerial team. Separately, Mayor of London Sadiq Khan has done much to promote the capital as Europe’s AI powerhouse.

How we got here

Anyone seeking more granular clues to the government’s thinking might have missed some during the summer. A comprehensive statement of AI’s strategic potential for the UK actually emerged in June, when most of us were staring out of windows waiting for couriers - or indeed an elusive Ocado delivery. As a result, it garnered little attention – like a parcel abandoned in an empty hallway.

That document was the 152-page AI Barometer from another new institute, the Centre for Data Ethics and Innovation (CDEI). It boldly allies the technology with sustainable green growth, public health, and the need to tackle misinformation online: all excellent, socially conscious applications that shift the discourse away from dystopian singularities and killer robots. 

I urge you to read that report, which is a well thought-out, comprehensive, and accessible piece of work, with a narrative through line of balancing opportunity against informed risk. No one in government reads anything longer than a headline these days, so it’s up to you to engage with the detail.

Unfortunately for the CDEI, its report also focused on the need for automated decision-support systems in education, and the imperative to understand the impact of automated services on vulnerable people. This was just two months before an automated decision-support system - aka a mutant algorithm, apparently - in education had a negative impact on the lives of another group of vulnerable people: the UK’s teenagers awaiting their exam results.

Like one of the darker chapters in the Harry Potter story – hasn’t it all got a bit dark lately? – the A-levels fiasco saw teens magically sorted into two houses: Wifflewaff (hurrah for privately funded A-star students!) and GrimIndoors (your impoverished ancestors scored a U, and now so do you!). That’s ‘levelling up’ in action, and no mistake.

The CDEI report acknowledged that there are big obstacles to overcome in the commercial exploitation of AI, along with risks that innovators should always keep sight of: 

Three types of barrier merit close attention: low data quality and availability; a lack of coordinated policy and practice; and a lack of transparency around AI and data use. Each contributes to a more fundamental brake on innovation – public distrust. In the absence of trust, consumers are unlikely to use new technologies or share the data needed to build them, while industry will be unwilling to engage in new innovation programmes for fear of meeting opposition and experiencing reputational damage.

Quite. But the CDEI, a proactive, impeccably intentioned organisation, can hardly be blamed for the A-levels debacle. However, the Education Minister can. (Fortunately the Civil Service is there to fall on a sword for him. It’s the honourable thing to do.) The report continues:

Against this backdrop, the CDEI is launching a new programme of work that will address many of these institutional barriers as they arise in different settings, from policing, to the workplace, to social media platforms. In doing so, we will work with partners in both the public and private sectors to ensure that the sum of our efforts is greater than their individual parts.

Where we are now

Let’s hope so. So what’s the latest from the government itself? For that, we turn to Sana Khareghani, new head of the UK’s Office for Artificial Intelligence. Speaking at a Westminster eForum AI conference on 10 September, she positioned the UK as the world number three in this technology – behind the US and China, with a chasing pack that includes Canada, Germany, and France. 

This was perhaps the first public acknowledgement that the UK isn’t quite as ‘world leading’ in this space as ministers routinely say it is, despite the excellence of its academic research and the startup hotspots of London, Cambridge, et al. 

It’s worth pointing out that in Q2 2020, 15 AI startups worldwide received investment mega-rounds (funding of $100 million-plus), according to figures from market analysis firm CB Insights. None were from the UK, with the top 10 comprising five US firms, three from China, and one each from Canada and Israel.

Yet Khareghani was at pains to talk up the UK’s smart, targeted investments in AI and data science:

We're delivering up to 2,500 degree conversion courses in AI and data science technologies to help the diversity challenge. This programme includes up to 1,000 scholarships to help increase the number of people from underrepresented groups and encourage graduates from diverse backgrounds to consider a future in these occupations. Universities have received an exceptional demand for courses, and more specifically for scholarships, and the first students in this programme are starting later this year in the autumn. I can't wait to hear about how they get on.

That’s good news. At the previous eForum on AI, panelists discussed the partnership between the Office for AI and the World Economic Forum’s Centre for the Fourth Industrial Revolution. This has since drawn up a set of rules and guidelines for the public procurement of AI, which were published in the spring. Khareghani said:

Re-thinking public procurement may seem dry and uninspiring, but there are few more important opportunities for accelerating change across every area of public service and business. It's essential that public sector organisations understand both the potential of artificial intelligence, and how to deploy it in a safe and ethical way to ensure that everyone's treated fairly.

We are now moving away from theory and into practice. We've been collaborating with businesses and public sector organisations to test the guidelines across government and ensure that frontline public services are benefiting from them. These guidelines have also helped shape the Crown Commercial Service’s new AI procurement framework, which will address ethical considerations. [...]

This connects teams in the public sector with private sector expertise and experience. New and emerging suppliers with fresh ideas are now able to bid for government contracts, enabling the government to be a driver for AI innovation. Working within our new guidelines for AI procurement, the AI marketplace will use the government's huge buying power to grow the sector.

My take

It’s great that the UK’s single biggest IT user, the government – one of the biggest customers in the world – is putting ethical, transparent deployment of AI at the heart of its strategy. 

It’s unfortunate, therefore, that questions keep being asked about the nature and transparency of contract wins with the Cabinet Office, which appear to be happening outside of formal tender processes. And it’s doubly unfortunate that, despite its proactive, forward-looking stance on the world stage, Whitehall itself has done more to damage public trust in algorithms than nearly any other organisation in 2020, with the exception of Facebook.

‘Do as I say, not as I do’ is not a policy that will stand for long with an increasingly irritable public.