It has been a big week for Artificial Intelligence in the UK, as the AI Summit London began with a government promise of £30 million ($38 million) in new funding for responsible AI. Meanwhile, London Tech Week, of which the Summit is a part, continued with a host of other discussions on the topic.
But is the UK as close to the centre of this technology as it likes to claim?
The inconvenient truth remains that while the UK punches far above its weight in AI – ranking third behind the US and China in start-ups and investment, and leading Europe’s high-tech space overall with a $1 trillion valuation versus Germany’s $467 billion and France’s $307.5 billion – its domestic sales market and legislative influence are limited.
A population of just 67.3 million people is a minnow compared with China’s 1.4 billion, India’s 1.4 billion, the EU’s collective 448 million, and the US’s 332 million. So, Brexit Britain needs to be able to sell to the world to capitalize on its impressive AI talent and investment, especially to its biggest trading partners, America and Europe, which together account for 59% of all UK exports.
But Brexit has made that more challenging, expensive, slow, and bureaucratic.
This has other implications for AI, as a report this week in The Economist noted. A relatively small domestic market with impeded access to Europe means that Amazon (AWS), Google Cloud, and Microsoft Azure have all declined to build large-scale GPU clusters in the UK of the kind needed to train large models. Those clusters are available in the EU and the US.
Meanwhile, AI adoption in Europe already outpaces the UK’s, despite the continent having fewer AI vendors and investors.
As a result, the UK’s recently announced plans to lead the world in ethical AI regulation may be bold and ambitious, but are likely to be frustrated by the much larger markets across the Atlantic and, especially, the English Channel.
The reality is that, just as GDPR led the world in data privacy and protection standards, so the EU’s emerging AI Act is likely to become the most significant safeguard against US Big Tech overreach – one raised by 27 sovereign nations.
At the time of writing, the European Parliament has just voted in favour of adopting the Act’s measures that regulate the data feeding AI systems. Among other things, this may lead to the labelling of any content created by generative or large-language AIs, such as ChatGPT.
The Act’s 'big picture' aims include mitigating the risks of AI development and adoption, such as the automation of historic bias, and the potential exclusion of women and vulnerable minorities from services or employment opportunities.
Until recently, the UK had been stressing its determination to tear up EU regulations in British law, although this particular bonfire of the vanities appears to have been put on at least partial hold as reality kicks in. But the presence of that huge, critical market so close to its shores means those rules will remain in place for anyone selling into Europe, whether Eurosceptics like it or not. On AI, they will become de facto standards.
Put another way, isolationism and global regulatory leadership are mutually exclusive ideas and no amount of rhetoric will change that if the UK wants to shift product. However flawed it may be, the EU’s AI Act is likely to prevail and leave Britain’s promise to lead AI regulation as just so much hot air and political posturing.
But another challenge is more subtle. The UK also finds itself adrift from EU-wide research projects, and is therefore isolated from a continent full of valuable data, including on areas that are vital to ethical AI adoption and regulation. Needless to say, Britain is now unable to lead or influence that research in any meaningful way.
One such innovation hotspot is the EU’s BIAS project. This four-year program, funded under the EU’s Horizon Europe research and innovation programme, is designed to “empower the AI and Human Resources Management (HRM) communities by addressing and mitigating algorithmic biases”.
That focus on employment aims to prevent AI from automating systemic bias in the job market. Such problems typically arise from decades of training data that may reveal applicants being denied jobs based on ethnicity, gender, age, religion, sexuality, postcode/zip code, education, and other factors beyond skill and qualifications.
A badly designed AI might actively exclude some of those groups on the grounds that they were generally unsuccessful in the past, and thus perpetuate human bias. Indeed, automated exclusion might add layers of obfuscation and/or perceived neutrality to those decisions, making it harder to prove that bias even exists.
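That proxy mechanism can be sketched with a toy simulation (entirely hypothetical data and thresholds, not drawn from the BIAS project): a protected group attribute is never shown to the screening rule, yet bias still leaks through a correlated proxy such as postcode, and the resulting selection rates fail the commonly used "four-fifths" disparate-impact check.

```python
import random

random.seed(42)

# Hypothetical synthetic applicants: postcode correlates with membership
# of group B, acting as a proxy for the protected attribute.
def applicant():
    group = random.choice(["A", "B"])
    if group == "B":
        postcode = "P2" if random.random() < 0.9 else "P1"
    else:
        postcode = "P1" if random.random() < 0.9 else "P2"
    skill = random.random()  # true qualification, identically distributed
    return group, postcode, skill

# Historic decisions applied a harsher bar to postcode P2.
history = [(g, p, s, s > (0.7 if p == "P2" else 0.4))
           for g, p, s in (applicant() for _ in range(20_000))]

# "Learn" a screening rule from history: per postcode, hire anyone whose
# skill beats the lowest skill ever hired from that postcode. Note that
# the rule never sees group membership at all.
threshold = {pc: min(s for _, p, s, hired in history if p == pc and hired)
             for pc in ("P1", "P2")}

# Apply the learned rule to fresh applicants and compare selection rates.
fresh = [applicant() for _ in range(20_000)]
selected = {"A": [], "B": []}
for g, p, s in fresh:
    selected[g].append(s > threshold[p])

rate = {g: sum(v) / len(v) for g, v in selected.items()}
ratio = rate["B"] / rate["A"]  # "four-fifths rule": flag if below 0.8
print({g: round(r, 2) for g, r in rate.items()}, "ratio:", round(ratio, 2))
```

Despite equal skill distributions and a rule that is nominally blind to group, group B’s selection rate falls well below four-fifths of group A’s – and because no protected attribute appears anywhere in the pipeline, the bias is far harder to prove than in the original human decisions.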
The BIAS project aims to stop that happening and to upskill HR professionals, as well as AI vendors. Mark William Kharas, the project’s Administrator, tells me:
The UK is not a partner country [despite UK organizations currently being able to apply for Horizon funding], but we are going to be engaging in outreach and dissemination with researchers and policymakers in the UK – at least, as long as we are able to do so.
The problem is that research with the UK is certainly more complicated these days, with the changing situation, and with the uncertainty around funding guarantees associated with Horizon Europe, but that problem is beyond our abilities to deal with ourselves.
So, how does the project work? Kharas explains:
We are developing national labs as a pool of experts in the countries that we work in. They will be consulted throughout the project and will be part of our impact strategy.
Those labs will include people from the HR community, in addition to AI developers, workers, union members, representatives, and NGOs that fight bias or discrimination. And we are also engaged in co-creation workshops with stakeholders in different countries to learn how AI is used in human resources contexts.
As to whether the project ultimately aims to influence policy and regulation across the EU, including the AI Act, he says:
Yes. In the later stages, we have plans to engage in a series of workshops with CENELEC, one of the EU’s standardization bodies, about how our findings should be integrated into standards for the use and development of AI technologies. And during the project, we will be analyzing our own work in light of the developing AI Act.
We hope, where appropriate, to share our results with the policymakers that are drafting Europe’s AI laws and regulations. One of our partners is eLaw, the Center for Law and Digital Technologies at Leiden University, which is skilled at engagement with standards bodies, and with EU policymakers that are directing those efforts.
Significantly, the project aims to tackle some of the problems caused by AI by developing yet more technology. Indeed, the first objective is to create novel tools to help identify and mitigate bias. It then plans to disseminate those tools, together with research findings and best-practice advice, to Europe’s HR professionals. Kharas says:
Across the industry there is so much work being done that does not take human perspectives into account, and only looks at bias as a technical issue, as a technical problem looking for technical solutions. So, we can't just develop new technologies, though the problems they may solve are incredibly important.
Our project is really about learning the whole context in which technology is deployed and used, with the humans who are making those decisions, perhaps based on recommendations that the technology makes. Humans are deciding to design and implement these technologies in the first place, so we need to look at the entire context and address it from all possible angles.
Mind your language
There's another simple, but important point: the UK and US are primarily concerned with AIs that work in the English language, but Europe is not; the EU alone has 24 different official languages.
English may still be the most widely spoken language worldwide (1.45 billion speakers, compared with 1.12 billion for Mandarin Chinese), but it is not the most widely spoken first language, at just 373 million speakers. By that measure, Mandarin (929 million) and Spanish (474 million) are far more widespread, with Hindi not far behind at 344 million first-language speakers, out of 602 million Hindi speakers overall.
Kharas continues:

We need to engage with this issue at a global scale. But a lot of the research into AI bias that is led by the United States and the UK is based around the English language. By contrast, we are going to be engaging in this research in many languages, since we have research partners from nine different European countries that represent at least nine different language groups.
That's very important for a European project, because so many of the use cases in Europe, in HR contexts, are going to be non-English-based. And this will be complemented by a series of in-depth ethnographic interviews with different stakeholders.
Then he adds:
Ultimately, we will develop curricula for the AI industry, and for HR professionals to be practically engaged in how they can look for bias – to identify it and mitigate it. Both from a technical standpoint, and from a human standpoint, in terms of how resourcing decisions are made and how technology is integrated into those processes.
A worthy project that seems to be tackling this critical issue holistically.
While the UK has, independently, created the Office for AI, sector Catapults, the Alan Turing Institute, the Ada Lovelace Institute, and other organizations, its biggest challenge is entirely self-inflicted.
The tragedy is that the UK could be leading the AI debate from within its biggest market, and from a position of sector leadership and strength. But instead, it finds itself isolated, not only from Europe-wide research programmes, regulation, and market opportunities, but also – critically – from a whole continent of valuable data.
The week began with UK Prime Minister Rishi Sunak pitching UK tech excellence on stage with DeepMind co-founder Demis Hassabis, making the undoubted case for Britain's home-grown skills and talent in many fields, not least AI.
By mid-week, the big news from London Tech Week was the UK’s Defence Science and Technology Laboratory (DSTL) signing up to a Memorandum of Understanding (MOU) with Google Cloud to accelerate the adoption of AI across Britain’s defence sector, tapping into Google Cloud’s AI technologies and staff in what is positioned as a joint research initiative.
Meanwhile, Shadow Health Secretary Wes Streeting was excitedly trailing the possibility of a future Labour government working closely with Google to roll out AI across the state-owned NHS, telling TV viewers:
I spent yesterday morning...at Google's HQ, talking to them about a range of possibilities right across the economy and in our public services.
In many ways, these announcements reveal the reality of Britain’s position today. Despite its immense homegrown AI talent and investment, the UK’s default setting remains partnering with US Big Tech whenever the opportunity arises. In Google’s case, that is a partner which snapped up British innovation when it acquired DeepMind in 2014, in a deal thought to be worth over $600 million.
Meanwhile, the EU sets an international standard with the AI Act.