How US tech multinationals are courting the UK’s AI policymakers – yet again

Chris Middleton, September 14, 2023
Summary:
In our first report from the Westminster eForum on UK AI policy, we hear a dispiriting keynote from a think tank/law firm/vendor alliance (delete where applicable)

(Image by herbinisaac from Pixabay)

Artificial intelligence is rarely far from the lips of business leaders and IT strategists this year – even if it occasionally just means boosting corporations’ share prices in their quarterly earnings calls. But with the UK government’s long-promised AI summit approaching on 1-2 November, it is top of mind for policymakers too.

However, there are signs that some among the phalanx of NGOs, think tanks, vendors, consultants, and advisors mustering around the government may be hyping the technology as much as they are talking sense to regulators and decision-makers. 

Certainly, last week’s Westminster eForum on AI policy could be seen as a herald of likely discussions in November. Presenting the opening, tone-setting keynote at the event was Bojana Bellamy, President of the Centre for Information Policy Leadership (CIPL).

She explained:

CIPL works with business leaders, global governments, policy and lawmakers, and regulators to ensure and deliver this new wave of digital innovation and prosperity. But in a way that protects privacy and fundamental rights. 

The OECD tells us that there have been about 800 AI policy interventions [worldwide] in the past four years, while Stanford says that 37 bills have been passed into law globally since 2022 – including, of course, the EU AI Act, which is going through the legislative process. 

Here in the UK, we have our own approach. […] And perhaps I should quote [Google CEO] Sundar Pichai, who said that AI is too important not to regulate, but also too important not to regulate well.

Fair enough. So, who or what is the CIPL? 

While Bellamy herself works out of London, the self-styled think tank CIPL is actually based in Washington DC, and its logo carries the brand of US white-shoe law firm Hunton Andrews Kurth, Bellamy’s employer. Indeed, CIPL appears to be based at that company’s head office on Pennsylvania Avenue, just a home run from the White House.

CIPL’s 95 ‘members’ include 94 multinationals and blue-chip corporations, such as Amazon, Apple, Cisco, eBay, Google, Hewlett Packard Enterprise, IBM, Meta, Microsoft, PayPal, Walmart, and the Walt Disney Company, alongside (interestingly) China’s Huawei, Tencent, and TikTok. The likes of Accenture, Boeing, Coca-Cola, LEGO, McKinsey, Uber, and the London Stock Exchange are also on board.

That’s quite an alliance. But for what?

The organization’s aim is “to help frame and advance privacy and data policy, law and practice” worldwide. Yet the only non-corporate member appears to be Erasmus University Rotterdam, in what is otherwise a very US-vendor-centric list.

So – on the face of it, at least – 94 corporations, including a who’s who of AI vendors, would like to frame that policy. That’s realpolitik, perhaps, in a world dominated by US Big Tech and media giants. 

But however one looks at it, the proverbial ‘man on the Clapham omnibus’ (English law’s shorthand for the reasonable, neutral observer) might form the impression, rightly or wrongly, that 94 mainly US corporations are attempting to bend the government’s ear in formulating Britain’s independent policy on AI regulation.

Or is the government hoping vendors will tell them what to do? Who can say?

‘The old-fashioned approach to regulation’

No one doubts the need for high-level industry consultation. Even so, it raises the question of why an organization such as CIPL is leading a Westminster policy debate – especially one that heralds a UK-led discussion on post-Brexit AI policy.

Such a forum ought to be talking up Britain’s homegrown innovators at least as much as trillion-dollar US Big Techs, with Microsoft being an almost guaranteed presence at such events. And, surely, we also need to hear voices from outside the sector, and not just eavesdrop on an industry talking to itself.

But that’s not to say that CIPL’s aims aren’t sincere and well intentioned, of course. Bellamy continued:

So, how do we regulate AI, in a way that enables us to reap the benefits of this new technology? How do we do it in a way that builds trust and confidence in the marketplace and with citizens, and which companies can actually follow on this journey? And how do we regulate in a way which is not, perhaps, the old-fashioned top-down style?

Perish the thought that top-down regulation might be a good thing! Then Bellamy really got into her stride, and – coincidentally – began rolling out some of CIPL’s members’ own research findings, after the opening quote from Google (another member). She said:

The latest Accenture report from March 2023 shows that 40% of working hours across all industries can be impacted by Large Language Models and generative AI. 

And, moving forward, there is a similar picture that shows the impact of generative AI on all job categories – the majority of jobs are going to be impacted by these tools. They will not replace us, but they're going to augment what we do [say vendors]. About 100% of all tasks are, in fact, susceptible to AI.

And McKinsey’s latest survey from August explained where senior leaders see some of the major risks that are coming from deployments of AI: inaccuracy, cybersecurity, IP, ‘explainability’, and privacy…

So, the question for us is, How do we do this [regulate] in a way that proportionately addresses the risks, but also enables the huge benefits of this technology?

How indeed? Then Bellamy added a statement that, in the context of a Westminster policy debate on regulation, was extraordinary: 

These surveys are absolutely showing us that if you are any business leader, any CEO, you absolutely have to adopt AI because there is no other way to be competitive, and to stay competitive!

Wow. I put it to Bellamy that such comments are both over the top and dangerous, because they risk creating a headlong rush towards tactical adoption of a technology that few business leaders even understand. 

Indeed, as our recent report revealed, most organizations are currently adopting AI informally, as shadow IT (as staff play with ChatGPT and Stable Diffusion in their lunch breaks). Meanwhile, the social, cultural, political, and economic impacts of any headlong rush to mass adoption are completely unknown – and may not all be positive.

She responded:

Yes, well, look, I don't think anybody said we should all adopt it [holy backtrack!]. I think what we are saying is that every single big management consultancy, plus experts and gurus, are telling us that those companies which do not invest in AI are going to be left behind because it’s the next big movement.

I would argue that language such as this is incredibly unhelpful at this point in human history. That’s because the man on the Clapham omnibus (remember him?) might feel that it sounds a bit marcomms, a bit PR, a bit hype-cycle, rather than something a global think tank ought to be saying – albeit one with 94 corporate members, many of them from the IT sector with AI products to sell.

Granted, it stands to reason that a group of US-based tech corporations and management consultants [ker-ching!] would love everyone to adopt AI, preferably in a context of hands-off regulation. And a firm of Washington lawyers has their research on file to prove it. Welcome to a 2023 UK government policy debate, everyone!

‘Catering for different types of risk’

That aside, what do CIPL’s undoubted experts actually recommend when it comes to AI regulation? It turns out that they have a plan that CIPL would like to “contrast and compare with the UK approach”, said Bellamy. She added: 

We believe that we need to take a layered, onion-peel approach, where principle- and outcome-based rules sit in the middle. They’re augmented by organizational accountability that is demonstrated and enforceable, and topped by a layer of smart regulatory oversight.

Starting with the core, which is principle- and outcome-based rules. We believe, very deeply, that we do need rules, which should build on existing legal foundations. And we need to evolve these with a laser focus: targeted regulatory, but also core regulatory, intervention.

But we also need to evolve these rules with a more innovative and bolder interpretation of existing rules […] which need to be technology neutral. This is absolutely essential. 

We do not need to be told how, but we need to be told what it is we need to achieve. The ‘how’ needs to be left to standards, co-regulation, and industry best practices. And they need to be based on a risk-based approach.

‘Hands off!’, in other words. So, what does a “risk-based approach” mean? She continued:

It is threefold. One, the rules themselves. These have to be drafted in a way that caters for different types of risks. We simply cannot have ‘one size fits all’. Two, we need to make sure that organizations are able to comply with the rules based on risk; so, they're prioritizing compliance based on risk. And three, the regulators also have to take a risk-based approach, right? 

But what does that mean? Well, we believe it means not just looking at the risks, but also at the benefits. What are we missing by not doing this? 

We need a debate at national level about risk. But this is also something that is going to evolve as our understanding of the technology, our grasp of it, evolves. And as the safeguards and technologies that we are using to mitigate risk evolve. 

It is absolutely essential that we as a country, and as a society, discuss and create some form of consensus between regulated entities, regulators, and policy- or lawmakers about what the risks and harms that we are trying to protect against actually are, and create a taxonomy based on consent.

But what about the millions of individuals who are affected? What about the British people, all those citizens and consumers whose data may be scraped and analyzed – which may already have happened without their knowledge or consent? She said:

You must give people rights. Because they are what people ultimately want, right? They want to be able to use and benefit from AI technologies [do they?], but they also want to be able to understand: ‘How was that decision made about me? Who do I go to contest that decision, to complain and ask about it? Do I have some redress?’ That’s really what people care about.

My take

Perhaps. But perhaps people would rather their data wasn’t scraped unethically in the first place, or analyzed without their permission – and that automated decisions weren’t made about them by algorithms that may be biased, or trained on data that may be out of date or inaccurate.

Not everyone just wants a small opportunity to complain about it after the fact. 

A dispiriting presentation, then. One with a faint but detectable subtext of “leave industry to police itself”. But a well-briefed one, too, when it comes to this disintegrating, chaotic government.

This decade, Whitehall has said it wants one thing above all else: regulators who value innovation and commercial data exploitation over and above citizens’ privacy and safety. 

In short, it’s really all about Brexit UK’s scramble for growth at any cost. US vendors can smell it; and the desperation that accompanies it. It smells like weakness. And that means money and opportunity for US Big Tech. Again.
