AI regulation and governance – how an AI vendor wants to nudge public policy

Chris Middleton, April 24, 2024
Summary:
Dutch-founded, San Francisco headquartered AI-powered search company Elastic is one of many companies seeking to bend governments’ ears on regulation. We hear from its Head of Government Affairs in DC.

Wooden blocks depicting concept of AI regulation (© Suriya Phosri via Canva.com)

AI policy has rarely been out of the headlines since 2022. We have had the EU’s AI Act; a joint statement this month on AI safety by the US and UK governments; the White House’s AI Bill of Rights; the Bletchley Park AI Safety Summit last autumn; new institutes, offices, and other bodies; dire warnings of apocalypse; calls for industry self-regulation, and a lot more besides, in every part of the world.

At the heart of all this are two differing views on regulation. On the one hand, that it is somehow an impediment to that all-hallowed quality that is easy to say, but hard to pin down: innovation. And on the other, that it is an enabler of innovation and trade on mutually agreed terms. Pick your side in a debate that has become increasingly politicized. 

So, pity the poor government spokespeople within AI providers who have to bat for their home teams while trying to claim that everyone’s a winner, despite any evidence regulators might present to the contrary.

Thus, from the Bill of Rights we come to Bill Wright, Global Head of Government Affairs at AI-powered search company, Elastic, which claims 50% of the Fortune 500 as customers. Based at the heart of debate - in Washington DC, seat of the Federal Government - he tells me:

This function didn't exist at Elastic prior to this. I was at another tech company [Splunk, as Head of North American Government Affairs]. Then before that, I spent a career in the US government, split between departments [including the State Department], agencies, and Capitol Hill, in the Senate and in the House. 

So, I've done it all, in terms of seeing both sides of how public policy is developed.

Indeed. Gamekeeper turned poacher, perhaps, with a role in government affairs at Symantec too, and a career start 30 years ago as Legislative Assistant in the House of Representatives, followed by a spell as an Assistant Attorney.

Wright continues:

I was brought into Elastic because of the number of government regulations that will likely impact technology and, in particular, Elastic [and other AI companies]. So that we can get ahead of some of those regulations - look around corners, if you like - and try to educate policymakers and help them make good policy.

In short, his role is analogous to those think tanks that are really vendor alliances keen to nudge governments towards defending companies’ interests - rightly or wrongly.

He adds:

To the degree that we can do that, it's obviously good - good for everyone. And so far, it's been great. Elastic is involved in a lot of public policy issues, from AI to cybersecurity and privacy, right across the board. So, it's been a fun year, and we’re just getting started!

The sense that, collectively, AI vendors are throwing resources at trying to limit government action against them is hard to avoid - at a time when, we must acknowledge, there is public unease about the future, and eight of the top 10 most valuable companies on Earth are US tech giants. 

This is not a sector that is suffering under the weight of punitive regulation. Some AI providers have market capitalizations exceeding the GDP of top-10 economies, while even the 100th most valuable tech company, Cloudflare, is worth nearly $30 billion.

So, is this a show of force by vendors, or a sincere attempt to educate policymakers who either lack technology expertise or, in some cases, seem overly keen to rub shoulders with billionaires? And does Wright acknowledge that, in their wake, AI companies are trailing real concerns about bias, ethics, black-box solutions, privacy, data security, cybersecurity, copyright theft, emergent monopolies, and more? 

He says:

Among the issues that governments and industry at large are grappling with are things like transparency, data protection, and the ethical use of AI. These are all things that organizations are trying to navigate and, from governments’ perspective, it’s about public policy. But right now, we're seeing a patchwork of regulations that may set the stage for something that becomes fragmented in the future.

At Elastic, we're very supportive of a globally harmonized, or at least interoperable, AI policy framework that strikes a balance between wanting to spur innovation – which is important, because this is a race - and enhancing the public's trust. And which is safe. Balancing innovation with risk mitigation.

Fair enough. He adds:

I think any regulation should really start at its core with public trust. And [governments] are emphasizing the need for impact assessments of high-risk AI systems.

But governments are also so eager right now to use AI! But I think the conversation is changing a little bit. A year and a half ago, it was all potential and possibilities. But now, maybe the novelty has worn off a little bit. 

A lot of the prior conversations seemed very aspirational and romantic. But now, the hard reality is coming to the fore, and there is more focus on strategic implementation, risk management, and compliance with emerging regulations. So, there's a lot of work to do.

A consistent approach?

As with GDPR before it, Europe is leading the charge on AI regulation, and setting a 2025 deadline for compliance. Is there any likelihood the US might follow suit - even in piecemeal, de facto style, as happened with data privacy rules in California?

Wright says:

You make a great point about GDPR, and California certainly followed suit. But nationally, of course, we have not been able to pass federal legislation as yet. 

But recently, a federal privacy bill has been introduced - it's bicameral, it's bipartisan. I've been accused of being too optimistic, but it may offer our best chance. Many of the principles, even in this legislation, are based in part on some of those within GDPR, though it’s a different animal.

But on AI, Europe certainly represents the high watermark again, just like with GDPR. It’s the first comprehensive AI Act out there. I like the way that it aims to create risk-based rules on AI development and use, with certain practices being entirely prohibited. 

But to say that the US could follow suit, I don't think so. The US is taking a much more decentralized approach - a patchwork of executive-branch actions that we've seen through various Executive Orders. Those primarily only affect government agencies, but through the power of the purse, they also affect any industry wanting to do business with the government. So, to that degree, there's a lot of leverage there.

He adds:

There are also some very domain-specific agency actions that are going to occur, rather than a broad national AI law. And many states are moving forward on some form of AI legislation as well. 

So, it's developing a little bit like our privacy laws. If there's one truism, here in DC and probably in London too, it's that technology innovation is always going to outpace policy. 

And any policy that’s promulgated has got to be dynamic, it's got to be flexible, in order to keep pace with technology. Otherwise, you would end up working on measures that would quickly become ineffective, outdated, cumbersome, and potentially harmful to innovation.

Perish the thought! He continues:

We don't know what that regulatory landscape is going to look like. But we do have some ideas. We have signposts that it's going to revolve around prioritizing transparency, data protection, and ethical AI use.

So, it’s a good idea [for vendors and users] to establish a dedicated taskforce that's focused on AI governance and compliance. A team that can stay informed on these regulatory trends, foster that culture of ethical AI use, and invest in training employees on AI risks and regulations.

My take

Good advice, which we endorse. 

Then Wright adds a note of admirable self-awareness:

But if past is prologue, and we're looking at what has occurred around our privacy laws specifically, then that would not be optimal. It can be difficult to monitor the legislation of 50 states. But it will probably be good for my gainful employment!
