OK, here’s the rub with generative AI - and indeed AI in general - as articulated by Salesforce CEO Marc Benioff today. On the one hand, he says:
There is no question that this AI opportunity is going to change everything - and probably anything.
On the other hand:
We all know, and we all have in our mind, what could happen if this AI thing goes really wrong.
That’s a yin/yang worldview that reflects the prevailing mindset among the Salesforce customer base. While everything at this week’s Dreamforce conference is viewed through a generative AI prism, there’s an ongoing consciousness that while organizations are excited by and curious about the potential of this new technology, there’s also a lot of valid nervousness and concern. (See IBM’s State of Salesforce study for more on this.)
In other words, everyone’s watching this latest tech revolution with a conflicted combination of fascination and hesitation.
We’ve seen revolutions before, from cloud to mobile to social, but this is different, argues Benioff:
[This is] nothing like any of us have ever seen before. It's got everyone's attention…We also recognize something very important - that we're in this AI revolution. It's going to impact who we are. It's going to impact how we operate. But it's going to bring us back to our core values…it's our values that are going to guide us...And that's where we have to start thinking about what are we doing with this technology?
And here’s the big takeaway - this is not just a technology revolution, says Benioff, this is a trust revolution:
Like a lot of new technologies in our industry, there appears to be this thing called a trust gap. Somehow there might not have been as much trust built into these technologies as [was] widely expected. [That's] not unusual in our industry.
Now, the Salesforce CEO has never been a man known for mincing his words, but on this topic Benioff had particularly harsh words for some less-than-honest brokers out there in enterprise tech land:
We have a very different narrative on Artificial Intelligence. We look at things very differently. We are not afraid to say things that others are afraid to say.
He’s not kidding, as he warned the thousands of Salesforce users in the audience for his Dreamforce keynote:
A lot of companies that you’re dealing with, a lot of companies that you’re partnering with, a lot of the companies that you’re looking at or that you work with, especially on the B2C side, you know how they operate! I don’t have to tell you. I do not have to explain [this] to anybody here. It’s a well-known secret at this point - they’re using your data to make money!
Large Language Model [LLM] providers came under fire as he went on:
Where is the data going when I'm using my LLM? Where did that data just go? What is happening? Well, it's ingested. It's gone. [The LLM] is using it to get smarter and you'll never see your data again. It's not exactly how we operate in the enterprise world, but these LLMs are hungry for our data. It's how they get smarter. We know these companies are just algorithm companies. They only work when they have data and the data that they get, mostly they've stolen, or they just go out onto the net and get whatever data they can get. And the next set of data they want is your data, the corporate data that will make them even smarter.
Warming to his theme, he said:
The other thing that's really interesting is that these things are good, but they're not great. You get a lot of answers that aren't exactly true. We call them hallucinations; I call them lies. These LLMs are very convincing liars. They really are. It's amazing. And of course, they can turn very toxic very quickly. That's not necessarily great in a customer situation, not necessarily great in an enterprise employee situation.
Eeek! Maybe let’s not bother with the revolution then? Not at all, said Benioff, there is a phenomenal opportunity here:
We can see we can have higher levels of customer success and productivity and growth and transformation and strategy with this new technology. There's no question. You read the reports from all the great management consulting companies, you can look at all the technology things, it is going to radically change our landscape. Everything is going to shift at the same time.
But such a shift needs to be approached in a particular way:
[My engineering teams] have been saying for the last nine months, especially as we go through this next wave of generative and as we start to head towards autonomy, they’ve said, ‘We're gonna do something very, very different’. Because before we turn all this technology over to you, and say that we are going to stand behind it with our values, we’re going to add a little something.
This isn’t a new way of thinking for Salesforce, he argued:
Several years ago, we said, ‘Technology is about to really change. We need new standards for ethical and humane use'. We created an Ethical and Humane Use Officer in our company. We created a Chief Trust Officer in our company, way before anyone else, because we recognize at Salesforce that your data is not our product. That is not what we do here… Your data isn't our product. You control access to your data. We prioritize accurate, verifiable results, our product policies protect human rights, and we advance responsible AI globally, just as we always have, and transparency builds trust.
And trust has to be the top priority, he concluded:
We want to turn [AI] over to you, but only with trust…We want to build on Salesforce, the trusted AI platform for customer companies. That is our mission around AI. We want to build a trusted AI platform for customer companies so that everyone is Einstein and more productive.
We have learned in this room over 25 years that what we do matters. The decisions that we make, in terms of how we shape how our companies are using these technologies, are very important.
In my preview of this week’s Dreamforce, I raised the question of the need to strike a balance between the potential of generative AI in the enterprise and addressing the very real - and very justifiable - concerns associated with it among enterprise organizations.
For Salesforce, the approach to tackling those concerns and closing that trust gap lies with the Einstein 1 platform and the Einstein Trust Layer - see Phil Wainewright’s analysis here - as well as playing to its longstanding trust credentials.
This was a barnstorming keynote address that was in equal measure evangelical and pragmatic. It won’t have answered all the concerns in users’ minds, but it hit the right notes and framed the right questions. (And it came in under time!)