Can new UK Hub shape global AI standards?

Chris Middleton, January 14, 2022
Can Brexit Britain really influence and hold its own on AI standards, when the US tech giants have their own agenda?

Hot on the heels of the UK's National AI Strategy - launched in September last year - comes the AI Standards Hub, a new government initiative, proposed in the Strategy, which aims to shape global standards for the technology.

Britain's Alan Turing Institute, the London-based AI and data science organization founded in 2015, will lead the pilot, with support from the British Standards Institution (the BSI) and metrology institute the National Physical Laboratory.

These are three august and widely respected bodies, backed by the Department for Digital, Culture, Media and Sport (DCMS) and the UK's Office for AI, which sits across DCMS and what is still called the Department for Business, Energy and Industrial Strategy (BEIS) - even though the Prime Minister scrapped the Industrial Strategy last year, arguably the one bit of government that had been working.

That aside, the move adds some much-needed substance to Whitehall claims of world leadership in AI and the UK being a "science and technology superpower". It does this by seeking to focus the debate on standards and regulation at global scale. Think of it as a nudge unit towards international consensus, a role the UK used to be good at playing before the national political climate became less temperate and more of a local storm.

According to an announcement from the government this week, which apparently nobody proofread before sending:

The new AI Standard [sic] Hub will create practical tools for businesses, bring the UK's AI community together through a new online platform, and develop educational materials to help organizations develop and benefit from global standards. This will help put the UK at the forefront of this rapidly developing area.

The Hub will work to improve the governance of AI, complement pro-innovation regulation and unlock the huge economic potential of these technologies to boost investment and employment now the UK has left the European Union.

However, this suggests that the AI Standards Hub will be as much an internal, galvanizing unit, explaining the AI world to British businesses, as an organization that seeks outreach for British ideas.

Alongside the Strategy, the context for the Hub includes new government research published this week, which predicts that business use of AI will more than double over the next two decades, with more than 1.3 million UK businesses - roughly 20% of the current total - using it by 2040.

Also in the frame is the Centre for Data Ethics and Innovation's (CDEI) new ‘roadmap to an effective AI assurance ecosystem', another component of the AI Strategy.

Meanwhile, industry body Tech Nation believes that the UK now has more than 1,300 AI companies, with venture capital investment soaring from $120 million to more than $3.4 billion in the ten years to 2020. All good news.

But the challenge facing the UK is that, while it may be leading Europe in key technology areas, from AI to FinTech and quantum computing, the tech behemoths that have the spending power of mid-sized nations are either American or Chinese. How those corporations, including Google, Microsoft, Amazon (now a strategic supplier to Whitehall), IBM, Apple, and Facebook shape AI standards is largely up to them. At least, until somebody stops them.

Doubtless they will want a presence of some kind in the new organization. Right now, plane loads of smiling Californians in chinos and blue shirts are probably flying across the Atlantic saying "Great idea! Let's talk," with a view to shaping what the UK thinks. Britain will need whatever is left of its fabled diplomatic skills to mediate.

Another challenge is that for a country that is, supposedly, ideologically opposed to bureaucracy and red tape, the UK has been remarkably adept of late at creating administrative complexity around technologies it wants to flourish.

A cynic might ponder whether the raft of new organizations, hubs, offices, and institutes that govern UK AI (and other technologies) is Britain's real response to the US tech titans. Granted, Britain hothoused DeepMind, just as it is growing innovative start-ups in robotics, FinTech, and quantum technology, but Google then stepped in and bought it.

A global race 

A further challenge is that the defining quality of standards is their adoption by the largest number of people. Britain can propose as many as it likes - the technology equivalent of putting a crown on a beer glass - and it can try to persuade the planet to adopt them. But if they are markedly different to those of the EU or the US, then which ones are global corporations likely to adopt? In this context, realpolitik is likely to see the UK seeking to soften EU opposition to US Big Tech dominance, in the hope that Google, Amazon, Microsoft, et al, will be grateful.

The other key challenge is that the US - which is edging (at glacial pace) towards European thinking on ethics and data privacy - has its own ideas about AI standards, ethics, and governance, thank you. As does the EU.

The US has largely been policing itself via the strategies of its tech behemoths. But remember, some of those companies' actions have been driven by employee alarm at the direction of travel. Take Google, for example, whose comprehensive statement on AI ethics in 2018 followed a workforce rebellion over the company's participation in the Department of Defense's Project Maven programme.

Others, such as Microsoft, have seized the moment to take an ethical stance - in public, at least - as a competitive differentiator against the likes of Google, Amazon, and Facebook. Despite this, regulators have been signalling that federal regulation is on the cards, partly because some of America's tech titans are more in the business of selling advertising - and making user profiling their real product - than in powering the enterprise.

The EU has published numerous statements on AI standards, ethics, and governance, such as this one last spring, which put excellence and trust centre stage, with the aim of protecting the rights of its citizens. The phrase ‘from lab to market' suggests a holistic approach - like ‘from field to fork' in agriculture.

My take

By contrast, the UK's new Hub seems to be more about forging agreement on technical standards and being seen to host conversations at the highest level. A worthy initiative that gives the UK an authoritative voice, if a quiet one relative to Big Tech.

But in the wider political context - which may include Britain scrapping human rights laws in favour of an ‘honourable gentleman' approach (surely now discredited) - it is akin to building a new table and setting it in the window, in the hope that someone orders lunch. Before Brexit, the UK was already at the high table with its partners and leading the conversation.