Maybe it's the most important technology of any lifetime?
Salesforce’s AI Day event in New York was never going to be knowingly under-sold, and that comment from CEO Marc Benioff didn’t disappoint. But given the way that generative AI has dominated the tech agenda in 2023, perhaps that’s hardly surprising, although, as Benioff was quick to point out, Salesforce has AI form that pre-dates the current GPT hype cycle:
We're already the number one AI CRM. We already know that all of our customers are doing a trillion transactions a week using Einstein. There's no company that's even close to what we're doing in the Customer Relationship Management area with Artificial Intelligence.
Nonetheless, it was generative AI that dominated yesterday’s gathering, with the official unveiling of AI Cloud, a suite of GPT-enabled offerings that spans the Salesforce product portfolio - Sales GPT, Service GPT, Marketing GPT, Commerce GPT, Flow GPT, Slack GPT, and Tableau GPT.
One problem with hype cycles is that people can become over-confident about their understanding of what’s at stake, familiarity leading to assumptions that don’t necessarily stand up to scrutiny.
That’s certainly the case with generative AI, this year’s all-powerful ‘silver bullet’. As Benioff observed, every customer meeting or interaction today starts or ends with someone's idea of what generative AI is all about for any particular organization:
The story usually goes like this. 'I know about these new Large Language Models.' Many of these models are amazing. A lot of us have used ChatGPT and this kind of GPT-4 model. These CEOs or CIOs that we’re talking to are so inspired and they're like, 'You know, what we're going to do is we're going to start working with these models and we're going to take all of our corporate data and we're going to put it into the LLM and then all of a sudden we're going to have an instantly intelligent company!'.
And it's not quite like that…
People may well know how the public LLM models are working, he suggested, but if they do, then there are further implications that are not always being taken into account:
[The LLMs] take these huge 'vacuum cleaners' and they're just vacuuming up all of the data off the internet that they can get. If there's publicly-available data - or just data that's out there off the internet that can be scraped - they're taking it and training their models with as much data as they can bring down. They're taking that data and then they're turning on their LLM, and whatever comes out of it is great. And if there is this concept of hallucination, or if the LLM basically starts to lie, well, that's not really their responsibility. Their responsibility is to give you the best case they can with their generative AI. That's not exactly the world of trust that we live in.
As a case in point, Benioff repeated an anecdotal example that he aired on the recent Salesforce earnings call regarding an unnamed large bank in New York:
The CEO made it very clear they want to use LLMs to become much more productive in mortgages and account service and all the capabilities that a bank will do. So, can they do that? Can they just take all of their account information, and all of the history of all the accounts, and just put it into an LLM? Well, I don't think that's going to work out very well when it comes to the regulated industries and the way that data works in large companies.
The consumer generative AI model may be very powerful, he argued, but does that translate to the needs of the enterprise? Trust needs to be taken into account:
Does that [consumer model] give us that trusted experience in our enterprise? Is it going to give us trusted productivity? We want the augmentation, we want that next generation capability for the enterprise, but what about the trust? How are we going to store data when what the LLMs desire is to take as much of that data and then put it into its weights as needed? But is that what we're going to allow it to do with all of our enterprise data? Will we be able to preserve our sharing model? Will we be able to preserve our security model, our privacy model, where we'll be able to lock down for each and every customer and each and every employee what they need?
Benioff cited the relational model of data management, based on rows and columns, where security can be enforced right down to the cell level and users can lock other individuals out of access to the data contained within it:
That is not the kind of generative AI that we're all experiencing, because the way these LLMs work is that they're taking down all that data that they grab, and then they're amalgamating and tokenizing that data, and then using their algorithms to generate their intelligence. So, that idea of cell based security is not in the current generative model. That's the breakthrough that we're really going to try to show you today.
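That contrast is easier to picture with a toy example. The sketch below is illustrative only — it is not Salesforce or Einstein code, and every name in it is invented — but it shows the kind of per-user, per-column locking Benioff is describing, the kind that disappears once data has been amalgamated and tokenized into a model's weights:

```python
# Illustrative sketch of cell-level security on relational-style data.
# All names here are invented; this is not Salesforce/Einstein code.

RECORDS = [
    {"id": 1, "account": "Acme", "balance": 120_000, "owner": "alice"},
    {"id": 2, "account": "Globex", "balance": 87_500, "owner": "bob"},
]

# Per-field access rules: which roles may read which columns.
FIELD_ACL = {
    "account": {"agent", "manager"},
    "balance": {"manager"},          # only managers may see balances
    "owner":   {"agent", "manager"},
}

def visible_cells(record, role):
    """Return only the cells this role is allowed to read."""
    return {
        field: value
        for field, value in record.items()
        if field == "id" or role in FIELD_ACL.get(field, set())
    }

agent_view = visible_cells(RECORDS[0], "agent")    # no 'balance' cell
manager_view = visible_cells(RECORDS[0], "manager")  # all cells visible
```

Once the same records have been flattened into training data for an LLM, there is no equivalent of `FIELD_ACL` to consult at query time — which is precisely the gap Benioff says Salesforce is addressing.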
Here is where the Einstein Trust Layer comes in, sitting between an app or service and a text-generating model. This can detect when a prompt might contain sensitive information and automatically remove it on the back end before it reaches the LLM. It’s ‘anonymous prediction’ in action, Benioff explained:
The idea is that when our systems, when our applications, when our platform looks at all of your data, and then uses machine intelligence, or machine learning or deep learning, (which are kind of the three primary AI techniques that really existed before the current generative AI techniques), we don't look at your data. We're able to provide you those predictions and that AI capability without actually looking inside the data, by just keeping it anonymous.
Now with generative AI, what we're able to do is we're able to take the same technology, and the same idea, to create what we call a GPT trust layer. We're about to roll this out to all of our customers, so they have the ability to use generative AI without sacrificing their data privacy and data security. This is critical for each and every one of our customers all over the world. Every transaction, and every conversation in Salesforce begins and ends with the word ‘trust’.
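The masking step Benioff describes can be sketched in a few lines. The version below is a deliberately simple, regex-based stand-in — not Salesforce's actual Trust Layer implementation, whose detection will be far more sophisticated — but it illustrates the principle of scrubbing sensitive values from a prompt before it ever reaches an external LLM:

```python
import re

# Toy sketch of a "trust layer" masking step: replace likely sensitive
# values with placeholders before the prompt leaves the platform.
# Regexes and labels are invented for illustration.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ACCOUNT_NO": re.compile(r"\b\d{8,16}\b"),
}

def mask_prompt(prompt: str) -> str:
    """Substitute anonymous placeholders for detected sensitive values."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

masked = mask_prompt(
    "Draft a reply to jane.doe@example.com about account 12345678."
)
# The email address and account number never leave the masking layer;
# only the placeholder-bearing prompt would be sent to the LLM.
```

A production system would also need to reverse the substitution when the generated text comes back, so the agent sees the real values — but the core idea is the same: the model gets the task, not the data.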
As ever, the proof of the pudding is with the customers themselves. Present in New York was high end retailer, Gucci, one of the first Salesforce customers to be trialing generative AI. Benioff said there were tangible benefits to be seen here:
What happened was, we’re working with a call center and customer service group and we saw that the service agents could do so much more. They become augmented. They began to have more capabilities. Where they were into service before, now they can have marketing, now they can have sales, now they can have commerce. They have more capabilities. I think maybe this is one of the great promises of generative AI - to augment our capabilities.
All this was new 18 months ago, said Vasilis Dimtropoulos, VP Global Gucci 9, a client service within the retailer, designed to offer customers around the world a direct link with the Gucci community:
We started this journey in uncharted waters one year and a half ago, trying to see how with AI we can augment the capabilities of our advisors. In Gucci, our mantra is 'the human touch, powered by technology’, so we started testing how we can 'Gucci-fy' the tone of voice and, very importantly, how we can test it. Our advisors create unique relationships, craft relations with our clients. We tested how AI can augment this. We measured it. We saw positive results and the journey just starts. We are the business case zero in AI, at least in the luxury arena. I hope!
Meanwhile at the American Automobile Association (AAA), the high profile of the generative AI tidal wave led to an awkward encounter for Shohreh Abedi, Chief Operations and Technology Officer, around those security and trust issues raised by Benioff:
When all of this hype started, the first thing that happened is my CEO rushing into my office, with the look of spook in his eyes, saying, 'We have to shut everything down. You have to shut everything down!'. And I said, 'Whoa, wait a minute, slow down a bit'. I get it - this thing is pretty scary in terms of a tool like this [being] in the hands of just about anybody. But at the same time, we've got to be able to look at this and see how can we leverage it for our benefit, [while] sticking to what's core to AAA, with safety and trust in mind.
The CEO wasn’t the only person with generative AI front of mind, she recalled:
I didn't want to stifle all of the creativity of our people, because I had people from underwriting, from all kinds of departments, coming up with 150 different great ideas that they could do with ChatGPT… You’ve got just about anybody knocking down my door right now, saying, 'We've got the best model, we've got this AI capability or another.'.
The approach that AAA has taken though is heavily-centered around partnering with ‘known quantities’, such as Salesforce. Abedi explained:
We don't believe it's the right thing to do to take [on] another layer and component, but rather fold it into the fabric of our platforms. That's really how we're moving forward. You want to be able to partner with folks that have the wherewithal, that can go shoulder-to-shoulder with you. That's really what I'm looking for, not the quickest spin-off or someone out there that has ‘the greatest thing’, but then leaves my backdoor open, because that is hugely important for a company like us.
For now, AAA’s generative AI focus is in three main areas, she added:
One is customer service, just to be able to give our agents more time to actually listen and hear the customer versus searching for things. Another is in the middle for support processing. I can reduce all of that processing time, bring down the costs, make it more effective and [improve] quality. The third layer, really for my developers, was in DevOps, having to identify, troubleshoot, and also increase our testing capability.
Salesforce’s pitch here is pragmatic around generative AI and yesterday’s presentation certainly wasn’t as hype-led as others that I’ve seen this year. As diginomica has noted over the past few months, the use cases for generative AI as an enabling - augmenting? - technology to boost productivity are starting to emerge. Benioff’s point about false assumptions and over-confident understanding on the part of execs is well made, and bears repeating as many times as necessary to get the idea across.
Customer exemplars, as ever, are the best weapons in any marketer’s arsenal. Stories such as those of Gucci 9 and AAA still need more meat on the bone, but that will come over time. Perhaps by Dreamforce, we’ll get a closer look?
But for now, this was a confident, slick laying out of Salesforce’s AI stall.
My colleague Rebecca Wetteman was on the ground in New York and we’ll get her perspective in the coming days.