Salesforce EVP of Product Patrick Stokes on intentional enterprise friction, 'prompt engineering' and the rise of the Chief AI Officer

Stuart Lauchlan, July 3, 2023
As a technologist, Patrick Stokes would love generative AI to be something that organizations can just pick up and go with. But it's not that simple...

Patrick Stokes

It's kind of the only thing that we're thinking about right now.

As noted passim, Salesforce’s re-invention as an AI company is something that no-one can have missed in recent months, not least at the firm’s AI Day in New York last month and last week’s avatar event in London.

Taking part in both events was Patrick Stokes, Salesforce EVP of Product & Industries Marketing, who sat down after the London keynote for an exploration of what lies at the heart of this shift.

What made this discussion particularly of interest was that while a lot of the debate around generative AI has been pitched at a business executive or job function level, Stokes comes at it as a technology person, a fact he affirms when he says of the wider AI revolution:

This is such a big pivot. This isn't necessarily something where you can just hand somebody the technology and say, 'Go!', as much as I'd like to think that as a technologist.

That said, he argues, the potential of what’s on offer with generative AI isn’t something that anyone can ignore, least of all Salesforce:

I think what we've recognized is that all of a sudden there's this incredible new capability, new technology, that if we can safely blend it with all of the data and kind of context about our business, we have an opportunity to just dramatically change the way people work. We now have an AI - or more specifically at Salesforce, we have a CRM - that can work for you, that can do a lot of the things that we do in our daily lives that make us incredibly, incredibly productive. We're thinking about that through the lens of our workflows - sales, service, commerce, marketing, those workflows that Salesforce has traditionally invested heavily into.

We're thinking about all the things that users do across those workflows in their day-to-day lives and how can we improve them by adding a little bit of automation and a little bit of AI into that workflow. We're seeing that even little minor things in the course of a day can add up quite a bit. AI has the potential to dramatically improve that and open up opportunities for your workforce to focus on things that require a little more creativity, a little more ingenuity, for your business specifically.


But there is a need for a degree of caution here. While the hype cycle around generative AI in recent months might give the impression that the tech sector has finally found the most silvery of silver bullets of all time, there’s also an inherent caution that Salesforce UKI CEO Zahra Bahrololoumi referred to last week when she said of buyers:

They're cautious. Very cautious. And why is that? Well, because there is an AI trust gap. Every company wants to embrace [AI]. In fact, for many it is the number one priority. But your customers are not so eager, and that's because less than half of them trust companies with their data.

That puts a burden on vendors to tackle this caution head on and that’s what Salesforce is doing, insists Stokes:

We're being very careful and selective about the [Large Language Model] partners that we choose to work with. So, number one, we're building our own models, but we're also partnering with a number of model providers, like OpenAI. We're being very intentional with them and those relationships, and defining our mutual roles and responsibilities in terms of content moderation, toxicity detection, bias detection and so on.

In many cases, what we're doing is duplicating a lot. We're doing it on both sides. We need to understand what data is going into the models to train them, and then we need to understand how those models are being tested, before we feel comfortable connecting them to pieces of our applications.
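The "duplicating on both sides" idea can be sketched in code: the application runs its own moderation checks on what goes into and comes out of the model, regardless of what the provider does. This is a hypothetical illustration only; `check_toxicity` and `call_llm` are stand-in names, not Salesforce or OpenAI APIs, and a real classifier would replace the keyword list.

```python
# Illustrative sketch of application-side moderation duplicated around a
# provider's own checks. All names here are hypothetical.

BLOCKED_TERMS = {"credit card number", "social security number"}

def check_toxicity(text: str) -> bool:
    """Placeholder for a real toxicity/PII classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def call_llm(prompt: str) -> str:
    """Stand-in for a provider call; a real system would invoke the model here."""
    return f"Draft reply based on: {prompt}"

def moderated_completion(prompt: str) -> str:
    # Check the outbound prompt before it leaves the application...
    if check_toxicity(prompt):
        raise ValueError("Prompt failed application-side moderation")
    response = call_llm(prompt)
    # ...and check the inbound response, even if the provider moderates too.
    if check_toxicity(response):
        raise ValueError("Response failed application-side moderation")
    return response
```

The point of the duplication is defense in depth: neither side has to trust the other's filtering to be sufficient.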

Once an LLM is connected into Salesforce applications, there’s a deliberate move towards intentional friction. Stokes explains:

What I mean by that is that nothing that's coming from a generative AI right now is being automatically applied into a workflow. Everything for the time being will require a human in the loop. Everything that we generate and put out, we're going to ask a human being on the other side of an interface to look at it and to say, 'Yes, that looks right to me' or 'No, that looks wrong to me' and move forward from there.

Now, someday, perhaps, we trust these things and we can take away some of that friction, but for now, we want to be very intentional about adding that friction in, to really learn and understand how these models are being used by the users. I think that's a very important thing to do. I won't belabor the Trust Layer [introduced by Salesforce], but this really is where we think we can excel and really differentiate. This has always been how we revel in these problems, in these kinds of trust and security and data governance problems. It's what's made Salesforce special over the years and we'll continue to do that here.
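The intentional-friction pattern Stokes describes amounts to a hard gate between generation and application: nothing flows into the workflow without an explicit human approval flag. A minimal sketch, with all names (`Draft`, `apply_to_workflow`) invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False  # nothing is auto-applied; a human must flip this

def generate_draft(prompt: str) -> Draft:
    # Stand-in for a model call; output lands as an unapproved draft.
    return Draft(text=f"Suggested reply for: {prompt}")

def apply_to_workflow(draft: Draft) -> str:
    # Intentional friction: refuse to proceed without explicit human sign-off.
    if not draft.approved:
        raise PermissionError("Human review required before applying AI output")
    return draft.text

draft = generate_draft("Customer asks about a refund")
draft.approved = True   # a person clicked 'Yes, that looks right to me'
final = apply_to_workflow(draft)
```

Keeping the gate in the code path, rather than in the UI alone, means the friction can later be relaxed deliberately rather than bypassed accidentally.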


Another way Salesforce intends to help organizations get themselves fit for purpose to embrace generative AI safely and productively is by partnering with system integrators. In New York, Accenture was announced as one such partner, while in London Deloitte Digital was named as the latest. Both are, of course, long-standing Salesforce partners anyway, but Stokes outlines some very specific objectives around AI:

What we're seeing from our customers right now is a tremendous desire to go experiment, but not really knowing where to get started or how to get started or even how to frame the experiment that they're doing. Do they want to try to automate a process with AI? How do they figure out that process? What does the change management look like? How can they trust the data? How can they trust the way the Large Language Models have been trained? There's just many, many of these conversations and so a lot of these partnerships are good ways for us to engage in a really hands-on way, in a one-on-one way.

There's more change management involved here within the business and that's where we're really relying on our partners to go and drive that kind of personal touch with our customers, to help them set up some of those early experiments, do it in a safe way, and figure out how to scale it through the rest of their business. So it's really all about that experimentation.

As noted in recent Salesforce research, one inhibitor to successful generative AI productivity gains is the all-too-familiar one of skills shortages. Salesforce has introduced new AI-focused modules in its online free training initiative Trailhead, with more to come before the annual Dreamforce conference in September. For his part, Stokes can see that there will be a need for organizations to take on new skills to meet new roles in an AI-enabled future:

There's a very obvious skill which is going to come up which is this concept of ‘prompt engineering’. This idea of being able to ask and provide the appropriate instruction for a Large Language Model to go do or complete a series of tasks is going to become very, very, very important. It's almost like search engine optimization, but ramped up 10 or 15 levels.

And it's more than just asking the question - it's sourcing the data that you need to put into the question or put into the instruction. That's why the 'engineering' word is there. It's more than just the question. It's the connectivity to everything, making sure that you're able to go out to this other system, bring all of that together into a set of instructions, and then figure out how to attach that into the workflow of your users.
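Stokes' distinction between asking a question and engineering a prompt is essentially about assembling context: fetch records from another system, fold them into the instruction, then hand the result to the model. A minimal sketch of that assembly step, with the CRM record shape and field names invented for illustration:

```python
def build_prompt(instruction: str, records: list) -> str:
    # Grounding: pull structured data from another system into the prompt
    # so the model works from real business context, not just the question.
    context = "\n".join(f"- {r['field']}: {r['value']}" for r in records)
    return (
        "Use only the context below to complete the task.\n"
        f"Context:\n{context}\n\n"
        f"Task: {instruction}"
    )

# Hypothetical records fetched from a CRM or other system of record.
crm_records = [
    {"field": "Account", "value": "Acme Corp"},
    {"field": "Open case", "value": "Delayed shipment #4521"},
]
prompt = build_prompt("Draft an apology email about the open case.", crm_records)
```

In a production setting the 'connectivity to everything' Stokes mentions would replace the hard-coded list with live queries, but the shape of the final instruction is the same.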

This should be good for certified Salesforce professionals, he adds: 

That kind of end-to-end experience is huge. I think that is the way that we will rapidly roll this out into a lot of businesses. I'm excited about that, because we have this incredible community of Trailblazers and Salesforce admins everywhere who are absolutely perfectly suited for that type of work. They understand their business. They understand the workflow of their users. It's different in every organization, but they understand their data and their architecture and their IT systems, so they're just perfectly suited for up-levelling their careers into that form of skill to help roll this AI out across their organization.

That raises the question: will the corporate ranks of most organizations expand to accommodate a Chief AI Officer? Stokes thinks that’s entirely possible:

We certainly have Chief Data Officers today. We have Chief Ethics Officers in many cases. Maybe [we'll have] one you're not thinking of - Chief UI Officers, because this is really changing the user interface and how users interact with the underlying system. So I do think that some day we will have Chief AI Officers. Probably 12 months from now we'll be sitting here talking to some of them.

My take

The rise of the CAIO is surely inevitable. The only question to my mind is how quickly it happens. In the meantime, Stokes made it clear that Salesforce is taking a pragmatic approach to generative AI, however quickly the concept has become an integral part of the product messaging and functionality. The expansion of Trailhead to take in the new skills demands of this shift in the tech sector is welcome. As diginomica has repeatedly said, the potential of generative AI is enormous and should be explored, but proceed with caution. It’s not baked yet.