Dreamforce 2023 - regulating AI begins at home, says Salesforce Government Affairs EVP Eric Loeb

Stuart Lauchlan, September 14, 2023
Government-led legislation would be great, but in the meantime AI tech firms should make voluntary commitments around risk and safety.

Eric Loeb

While attention was inevitably focused on the action at Dreamforce in San Francisco, Salesforce took an important policy decision in Washington by signing up to the Biden Administration’s voluntary safety and transparency commitments around AI.

Salesforce joins the likes of Microsoft, NVIDIA, Meta and OpenAI among others in making this pledge intended to manage risk as AI tech evolves. Back in San Francisco, Eric Loeb, Salesforce EVP of Government Affairs, highlighted the importance of such public commitments:

One of the things that you are hearing consistently is a recognition that there should be a motion towards regulation, but also an understanding that this technology is moving very fast, sometimes faster than either a legislative or regulatory process. And one of the things that is happening in what could be a vacuum, but really isn't, is some embrace of voluntary commitment.

So we were part of a round of voluntary commitments with the White House, looking at some of the dimensions of foundation models. That's just an example. I do expect we're going to see more of that, with the intent there that the direction of these voluntary commitments is a glide path towards regulation.

A consistent theme of Dreamforce this year has been the idea that AI is dominating the private sector agenda, with CEOs piling pressure on their CIOs to know what their organization’s strategy is around the technology. It’s equally high profile in the public sector, said Loeb:

When you talk with a public official, no matter what the subject of the conversation started as, AI comes up. There is a lot of interest at a policy level, at a political level and it does come back to trust. This is important transformational technology. For consumers, for society, for government officials who have responsibility here, there has to be a sense of trust and sensibility.


Such is the pace of change and the speed with which generative AI has emerged into the mainstream that, while formal, policy-driven regulation at government level would be welcome, it’s going to (a) take time and (b) be enormously difficult to reach a global consensus on.

In the meantime the tech sector has a responsibility to take its own actions, argued Loeb:

It's not just a matter of waiting for regulation, but [we] should take a responsible step right away.

For its part, Salesforce aims to take a lead in shaping the agenda, he added:

We are looking at where we think the puck is going with public policy expectations and regulations. As a company that serves enterprise customers, we have a really active customer base that communicates to us what their expectations are for their uses of AI. So there's actually a pretty strong alignment of where we are setting our internal goals on our architecture and our acceptable use policy, with many of the most important topics that are being discussed in a regulatory setting.

So whether it is around accuracy, bias, toxicity, privacy, security, these are all things that you saw in the Dreamforce keynote that are actually built into our architecture. That isn't only a reflection of what we believe is a responsible thing to do, but also it is a reflection of where we expect public policy is going to be going and what those baseline expectations are. So it's really important for us to stay up to date as the technology evolves and the conversations continue.

That said, working with government and legislators is vital, he noted:

At the same time, of course, you need to have public policy and you need to have regulation. Where we are focused here is to ensure that that approach does have nuance to it. It's not one size fits all. People can sometimes be binary and say regulation is good or bad, but you need to think about different contexts and use cases throughout the whole AI ecosystem, and differentiate things that are harmless from others.

One of the reasons for that is while trying to protect against the harms, you also want to protect innovation and competition. So you have different types of frameworks. That's the dialog we’re hearing. I'm very encouraged to see so many countries that are engaged in this, looking at things, whether it is regulatory, legislative, or policy frameworks. Take, for example, the NIST AI Risk Management Framework, which is voluntary policy, not regulatory, but it is part of this glide path.


But the bigger picture remains the challenge of getting some form of international - let’s admit, global is going to be impossible! - consensus on regulations and standards. There are positive initiatives - the UK is hosting the world’s first global AI safety summit in November and the European Union’s [EU] AI Act is shaping up to be a solid piece of legislation. But as always, international accords often don’t last longer than the political photo op before everyone involved is back to squabbling like a bag of cats.

Loeb conceded:

There are different philosophies of regulation. There are different government models in many places, so it is a tall order to think of a global singular approach. But I think you'll see different concentric circles and approaches evolve.

But he singled out the efforts of the EU for praise:

To the credit of the EU, several years ago, the process began on the AI Act and the European Commission has within its staff some very experienced people on policy and technology, who have been thinking through and addressing the issues. The status is that the Commission starts [the process], then it moves to the Parliament...When the Parliament was debating this, generative AI came and took the world by storm, so there were some adaptations at that time. As well as a very good point about interoperability and a common lexicon, the definition of AI in that Parliamentary process picked up on the OECD [Organization for Economic Co-operation and Development] definitions. So there's going to be a conversation between jurisdictions as people work together and learn.

I think as well with the AI Act - which will most likely be finalized at the end of this year and then it's a two year period before it's enforced - there's going to be ongoing discussions and policy development, not only for what we know today about generative AI, but what is rapidly evolving. It's not a static process, it will continue to evolve and that's a good thing. I commend the leadership of the EU on this. It is a risk-based framework that differentiates the high risks from the lower risks. That's a really important contribution to the discussion.

My take

This is an incredibly important topic and it’s encouraging to see the likes of Salesforce taking an active role in the regulatory debate. The AI sector cannot be allowed to go the way of the social media companies, where Mark Zuckerberg can be summoned before grandstanding politicos for a ticking-off before sloping off back to Meta headquarters with not even a slapped wrist to show for it.

It was interesting to hear OpenAI founder Sam Altman’s take on regulation at Dreamforce earlier this week. He said:

I think our [political] leaders are taking this issue extremely seriously. They understand the need to balance the potential upsides and potential downsides, but you know, nuance is required here. I did not have particularly high hopes about the balance of that nuance being held appropriately and it really, really is.

I don't know exactly how this is all going to play out, but I know that people seem very genuine and caring and wanting to do something and to get it right. I think our role [as tech companies] is just to try to explain as best as we can, realizing that we don't have all the answers and we might be wrong, what we think is going to happen, where the control points can be, where the dangers might lie, what it takes to make sure the benefits are distributed, and then let the leaders decide what to do.

I think getting a framework in place that can help us deal with both the short term challenges and the longer term ones, even if it's imperfect, starting with something now is great. It's going to take people a while to learn. Solving this legislatively is actually quite difficult given the rate of change, but getting something going, even if it's just focused on insight and oversight, so that the government can build up the muscle here, I think that would be great.

There's a lot I don't agree with Altman about, but on this at least, we're on the same page.


For all the highlights from Dreamforce 2023, check out our dedicated events hub here.
