The UK launched its National AI Strategy in September 2021, a document that jostled for mindshare alongside new strategies for space, data, energy, and more, within the government’s overall Plan for Growth.
So, six months on, how is it faring? First, let’s remind ourselves of some key details of the Strategy, which appeared in the wake of the 2016 Hall-Pesenti Review and last spring’s consultative AI Roadmap. The Strategy’s top-line goal is for the UK to “remain an AI and science superpower fit for the next decade”. Below that is a tier of five core aims:
- to share the benefits of AI adoption across every region and sector
- for the UK to retain its position as a leader in AI R&D
- for growth in the AI sector to translate into a sustained boost for GDP
- to protect and further fundamental UK values – though it is unclear what those are in the current political climate
- and for strong AI capabilities to address national security issues.
All this is supported by three pillars:
- a commitment to invest in the long-term needs of the AI ecosystem (with increases in the supplier market, skills, workforce diversity, technology use cases, and scientific breakthroughs)
- a commitment to ensuring that AI benefits all sectors and regions (with greater diversity in applied AI, more technology adoption, and better value for money, with the public sector becoming an exemplar of AI ethics and procurement)
- and effective governance for AI (with responsible innovation and increased public trust).
In itself, therefore, the AI Strategy is commendable – as are the new plans for space, data, energy, and more, all of which have bubbled up from the UK’s many founts of technology research, policy expertise, and academic excellence.
However, the UK has a growing top-down problem: public trust in government is essential to making these strategies work, but that trust is spiralling downward with scandal after scandal in Downing Street, and with ever more divisive policy announcements coming from the Cabinet.
Meanwhile, the leadership’s cavalier attitude to every aspect of public life runs counter to the solid, respectable aims contained within these documents. That helps no one, least of all the UK’s many innovators, start-ups, and tech providers.
These big-picture problems aside, how well is the AI Strategy doing to date? And what needs to happen next? CogX co-founder Tabitha Goldstaub is Chair of the UK Artificial Intelligence Council, which was formed in the wake of the Strategy. She says:
We need to forge ahead. And we need the ecosystem to come together to maximize the benefits of the technology, but also the benefits of what we're seeing as interest from government. The UK really does lead the league tables – right behind China and the US.
Our job now is to make sure that we continue that listening exercise, help galvanize action across the ecosystem, and hold government to account on the actions that are contained within the Strategy. But part of that is looking at data and evidence, and thinking through what the basis is for every decision.
Goldstaub cites a number of recent reports on the UK labor market commissioned by the government’s Office for AI:
In one study, one-third of firms said that existing employees lack AI technical skills, and this actively prevents them from meeting business goals. Two-thirds expect that the demand for AI skills within the organisation is likely to increase in the next 12 months.
But there's a challenge finding talent outside the organization. There are over 100,000 job postings in AI and data science waiting to be filled, but almost 50% of companies said that job applicants lack the necessary technical skills.
That’s rather a bleak view to be honest. But there’s good news too: there is now a stronger trend towards upskilling. Three in five firms [60%] reported that employees had undertaken internal or external AI training in the last year. But sadly, only a quarter of those firms offer training on ethics and AI.
So, while UK firms recognize that AI skills are lacking in the UK job market, many are seizing the initiative to fill that gap. In the process, however, the government’s strategic focus on ethical AI operation and procurement is falling by the wayside: just 15% of firms focus on it in training. Over time, that is unlikely to increase public trust in the technology.
Diversity and inclusion in AI
There are other challenges too: the AI sector largely remains the preserve of white men – an issue throughout STEM professions, especially application development – though the government itself has done a good job of putting women in senior AI policy roles.
There’s nothing wrong with being a white male coder, but it stands to reason that technology used by everyone should be designed by everyone to avoid the creation of systems that, unintentionally, reflect the mindsets of a single group. Bias doesn’t always emerge consciously; it sometimes emerges systemically.
It's also about opportunity, of course. If the market is booming, all of UK society should benefit – a core aim of the AI Strategy. Goldstaub notes:
The diversity and inclusion story is certainly far less positive. Over half of firms did not employ any females in AI roles, 40% did not employ any staff from ethnic minority backgrounds, while a similar proportion did not employ any non-UK nationals. Suffice to say, there is a lot of work to do. However, I can confirm that it's very high on DCMS’ agenda. I've heard them talk a lot about how important this is, so that report helped.
Pre-pandemic figures from industry group STEM Women reveal that just 16% of UK IT professionals are women, as are 21% of IT technicians. In terms of the future workforce, 35% of students in STEM subjects are women, but in both Computer Sciences and Engineering and Technology the figure falls to just 19%.
In the US, women make up roughly 50% of the STEM workforce, according to Pew Research, but only 25% of those working in computing careers. Black employees (nearly 14% of the US population) are just 9% of the STEM workforce, while Hispanic workers (18% of the population) hold just 8% of STEM jobs.
At a Westminster eForum on AI policy before the pandemic, Professor Dame Wendy Hall – co-author of the review that kickstarted policy discussions on UK AI – noted that she had spent her entire career advocating for diversity in STEM but that, if anything, the situation in the UK had worsened.
The STEM Women figures suggest it is more the case that things are barely improving: a consistent story of 80+ percent of the IT profession being male. That said, the AI employment figures suggest that women are more strongly represented in this field than the industry average: a glimmer of hope for the AI Strategy.
A recent report from London-based economic research consultancy Capital Economics looked at the adoption of AI technologies. And, according to Goldstaub, it paints a “serious picture” of the size of the challenge. She says:
It predicts that the use of AI by businesses will more than double in the next 20 years, with more than 1.3 million UK businesses using the technology by 2040 – spending £200 billion a year, and we're currently at £63 billion.
That’s a big leap, when only 15% of all businesses in the report have so far adopted at least one AI technology. An additional 2% of businesses are piloting AI – which is good, but it could be higher – and 10% plan to adopt at least one AI technology at some point in the future. But it feels like we've got a big hill to climb.
That report found that the IT, telecoms, and legal sectors currently have the highest rates of adoption, while – interestingly – the sectors with the lowest rates are hospitality, health, and retail.
The latter findings are underlined by a forthcoming report from public sector tech consultancy Oxford Insights, says Goldstaub. This says the UK should focus on high-stakes sectors, such as healthcare, security and transport, with the NHS offering a huge data advantage over other nations.
It warns that, to be successful, AI businesses need an increase in a broad set of commercial and sector-specific skills, in addition to those technical skills. This is probably the most important point from everything we've learned recently. Because it shows that we really need to think about data and AI literacy for everybody.
This is something we made clear in the AI Roadmap, which leads me nicely to the Council's priorities today. Ultimately, we're here to be a critical friend to government, both supporting and ensuring that everybody understands there are risks and potential pitfalls along the way.
Our job is to rally the ecosystem. We hope to continue to galvanize action from across industry, academia, and civil society. […] There are three passions that Council members have all come together behind. They are data on AI literacy, using AI to solve big world challenges, and getting the governance of these technologies right.
We don't believe that the AI Strategy will work until we have a nation of conscious, confident consumers of the technology, as well as a diverse workforce capable of building AI that benefits not just the company, but also the economy and society as a whole.
This means we need to have a population that can do three things: build AI, work with and alongside AI, and live and play with AI.
On building AI, we need to ensure that there's a strong academic research and industrial base furthering research, development, and innovation.
On working with and alongside AI, the government needs to ensure we have a workforce that knows how to maximize the value of AI in their jobs and for the business. Success is really going to come down to school-age education, changes to the curriculum, and lifelong upskilling. But we also need to celebrate human intelligence.
And on living and playing with AI, we need to support the general public in imagining what sort of future they want: to know their rights, vote with their wallets, and decide what's going to go into their pockets, homes, and cars.
Aim for the moon
The next big focus for the AI Council is what used to be called moonshots but are now generally termed ‘big bets’, says Goldstaub.
The three areas are the climate crisis, health, and defence. And what's amazing is, when you come to things like addressing the climate crisis, there are ways that artificial intelligence can be used that we never thought possible before, such as improving the efficiency of food production, building more resilient and adaptable energy systems, and of course, the race to net zero.
A recent study by Microsoft and PwC estimated that AI’s environmental applications could cut greenhouse gas emissions by up to four percent by 2030 and contribute to 4.4% growth in GDP, as well as potentially creating a net 3.8 million new jobs.
Similarly in health, having learned the lessons from COVID, there is potential for AI to improve health outcomes for patients and free up staff time for more care. And when it comes to defence, we're starting to see more and more AI companies collaborating with GCHQ and others.
Properly designed and delivered, these sorts of moonshot programmes play really well to AI’s strengths, because they require people to work across boundaries and across existing organisational structures, and to build new relationships, new networks, and new common languages, so we're developing entirely new solutions.
An inspiring rallying cry from a key figure in this industry – but also a much-needed dose of pragmatism and realism for the government. More than anything in 2022, the UK needs critical but loyal friends. Flag-waving alone achieves nothing without trust and real-world action.