Professor Dame Wendy Hall has been a force for change and innovation in computer science since the 1980s. In recent years she has notably been a campaigning, motivating presence in UK AI. In 2017, she co-authored the independent report Growing the Artificial Intelligence Industry in the UK with Jerome Pesenti, then CEO of BenevolentTech. Pesenti subsequently became VP of AI at a US behemoth, Facebook, much to Hall's bemusement. Way to go, faith in the UK!
Like all reports compiled for the government between 2016 and the start of the pandemic, the context of the Hall-Pesenti Review was Brexit, and the challenges that the UK faces in forging an independent identity outside the EU.
The good news is that most of the Review's recommendations were implemented by the government within its forward-looking Industrial Strategy, which pushed investment in AI, plus robotics, mobility, green technologies, and more, to the forefront of policy making - despite the cross-governmental inertia that Brexit created.
So, it is tragic that the Industrial Strategy, one of the few elements of policy welcomed at both ends of the political spectrum, was torn up in March 2021, in favour of a vague Plan for Growth as the pandemic wreaks havoc across the economy.
Arguably, that decision replaced coherent strategy with short-term tactics as the UK casts around for partners and, frankly, for anything that brings in some cash. In business, such a move would signal a failing venture; in politics, however, it probably just reveals how much the Prime Minister hates his two predecessors.
None of this is Professor Hall's fault, of course - a woman who has exhibited strong leadership on others' behalf for decades. But where does it leave UK AI?
Despite the many positives that arrived in the wake of the Hall-Pesenti Review - targeted investments and a focus on skills among them - the UK arguably now has a more complex, bureaucratic system of institutes, centres, departments, offices, and units underpinning its technology ambitions. Worthy and impressive, but confusing and hard to navigate.
Even the Office for AI sits across two departments: Business, Energy, and Industrial Strategy (BEIS), and Digital, Culture, Media, and Sport (DCMS) - organisations that themselves have complex, serpentine briefs. DCMS has long been the dumping ground for whatever the government doesn't understand, though proponents might argue that policymakers are trying to foster interdisciplinary thinking.
The UK also has a rolling programme of announcements about announcements, and roadmaps towards strategies, typically accompanied by claims of world leadership in each area. And so it was that at a Westminster eForum on AI strategy this week, the main message was that a national AI strategy would be revealed later in the year - rather like the current trend for trailers about trailers for forthcoming blockbusters.
So how does Hall believe the UK is doing when it comes to AI strategy? While acknowledging the positive change that her own Review created (and it did), she said:
All the things that we managed to get the government to pull out the bag […] are a drop in the ocean compared to what we actually need. It's a drop in the ocean of what's needed to keep ourselves at the forefront of the AI agenda in the world. That's the point we really need to make to government as we're developing the new strategy. The job is not done. And we have to increase funding in this area.
All too often in recent years - take robotics as one example - the government has had the right approach to nurturing new sectors, backed by organisations such as UKRI/Innovate UK and the EPSRC, but has made woeful levels of investment to support it. A frustrating policy of constant self-undermining. Money isn't everything, of course, but it helps. In robotics, the UK's central investment mid-decade was roughly 200 times smaller than Japan's.
Hall says her own focus now is on filling the UK's AI skills gap via a 10-year programme. The context is an Ipsos MORI survey of 118 UK companies, published in May this year. It revealed that data science and AI skills are lacking in the jobs market, and that critical vacancies are being left unfilled.
Among the survey's findings, nearly half (49%) of respondents said they were affected by a lack of job candidates with technical skills in AI, and 32% by a dearth of relevant business skills. Firms also reported poor AI skills in their existing workforces: 55% said the shortfall was in their understanding of AI concepts and algorithms, 52% in coding skills/languages, 52% in engineering, and 51% in user experience.
The good news is that 62% of companies said employees in AI roles had received at least some training over the past 12 months - though very few in the ethical dimensions of the technology.
Part of the skills gap is rooted in the longstanding lack of diversity in the AI sector, said Hall, and in coding and STEM careers generally. Survey after survey reveals that these areas remain the preserve of young, white males. There's nothing wrong with being a young, white male, of course; the point is that technology used by and affecting everyone ought to be designed by a community that is more representative of society, which in the UK is 51 percent female.
According to Ipsos MORI, less than one quarter of the AI workforce (24 percent) is female, while 27 percent comes from ethnic minorities. However, those figures are higher than for the IT/STEM sector overall, where studies have consistently shown that 85-90 percent of the workforce is male and over 90 percent of employees are white.
Hall didn't pull her punches:
In terms of allies, computing is too important to be left to men. That's not to denigrate men, it's to say that women are 50% of the population. We need to be part of this too. [The impacts and risks are] much worse for AI than in computing generally because of the problems of bias and unfairness. We have to build interdisciplinary teams in the industry, and incorporate diversity at every stage. We have to include it in ethical frameworks as well, because if it isn't diverse, it isn't ethical.
Diversity is a serious issue, because it's getting worse rather than better. We really need to double, triple, quadruple down on putting diversity at the core of everything we do; diversity and inclusion. […] We can talk about it till the cows come home, but we need to give [policymakers] measurable targets plus carrots and sticks to make it work.
I've been trying to get more women into computing since the 1980s. And we need to be inclusive across the board, not just in terms of gender and ethnic minorities, but also people with disabilities, to make sure that the tools and technologies that are being built with AI, the products and services, are fit for purpose and good for everybody.
I've fought this battle all my life to make science and engineering something that's attractive to everybody, not just to men or a narrow group of people. I would absolutely argue for more interdisciplinary education in secondary schools and in universities. That would be fantastic.
But there's so much inertia in the system. Whenever you talk about changing curricula, changing degrees, trying to work across departments in government, we work so much in silos. That's how this system, this establishment, is set up. And in order to do what we're trying to do, we have to break that down. And that's no easy task.
This time [the AI strategy], we will take the programmes wider and not just be talking about what we're doing in higher education, but also in further education, in AI and data literacy for everyone. How can we help people move into AI as a career from wherever their starting point is?
Why so many strategies?
Great news. So how does the government believe it is doing?
In March, Digital Secretary Oliver Dowden set out the UK's 10 technology priorities for 2021, which comprised the usual loud but substance-free statements about building a world-class digital infrastructure, unlocking the power of data, unleashing the transformational power of tech and AI, levelling up digital prosperity, creating a science superpower, and so on. Kerpow! Blam! Roar! Since COVID-19 emptied London's streets, you can hear the chest-pummelling from miles away, which is probably the point.
Thankfully, the Office for AI is less inclined towards macho posturing, perhaps because it has always been led by women. Sana Khareghani is its head. She told the eForum:
The new national AI strategy will focus on growing the economy through widespread uses of AI technology, ethical, safe, trustworthy, and responsible AI, and resilience in the face of change through an emphasis on skills, talent, and R&D.
So fundamentally, the strategy is looking to ensure that we continue doubling down on the foundations or enablers of AI technologies to create a fertile ground, so these technologies are pulled through and used across more sectors than they are today. At the same time, creating an environment and a governance regime around it that enforces the use of responsible AI - rather than AI for the sake of productivity only.
The National AI Strategy that we are currently working on takes into account the recommendations that came from the AI Council in the roadmap that they published earlier this year. We've also spent a lot of time talking to industry, academia, and civil society.
We've also had help from the Alan Turing Institute. They developed a month-long survey, a consultation to bring as many voices into the strategy as possible. We've been looking at various drivers and applications of responsible AI through these workshops, everything from enablers and foundational things, such as access to skills and data, all the way through to ethics, mission-driven AI, venture capital funding, and governance. And the Secretary of State for DCMS has announced new plans for innovation-friendly digital regulation.
That's great, but why such a fragmented approach, rather than the more integrated and coherent one that was set aside earlier this year? She added:
I often get asked, why do we have so many strategies? And I think the answer is really that there is a better way to approach things than a master strategy that tries to tell us what we should do across every aspect of this complex area. AI, just on its own, is very pervasive and is an umbrella term under which many technologies fit, so we really do need to look at this from many different angles to be able to give us a roadmap to ensure that the country takes advantage of global changes.
But Khareghani is not immune from the Johnson era's fondness for repetitive soundbites that aren't backed by real-world evidence. She said:
While levelling up prosperity across the UK, we need a strategy to focus on specific areas of the problem, bringing government, industry, and academia across the country together to make a real difference. And I have to say I really believe in this approach; I think it works.
We also have done this before. We had the Industrial Strategy, and it has four grand challenges on an ageing society, clean growth, future mobility, and, of course, AI and data. And by having separate strategies for data innovation and AI under the Plan for Growth and to build back better, we can ensure that the country's medium and long-term issues around policy intelligence are spread evenly and handled by specialist teams throughout the whole of government, instead of being the responsibility of one department, or one minister.
So there you have it. The UK has done it before. And now, only four years later, it is doing it again, but louder and from more places. Perhaps that is a good thing, but it misses the point that all this constant resetting and political infighting militates against progress, and risks undoing a lot of previous work.
Plus ça change, plus c'est la même chose, as the French might say. But hey, let's not talk about them. So take it from an Englishman: Stop moving the chairs, shouting through bullhorns, and waving those flags, Prime Minister. Just steer the frigging boat.