
AI optimism on show in latest Salesforce research, but is it more than a vibe? Chief Ethical and Humane Use Officer Paula Goldman breaks down the findings

Stuart Lauchlan, June 26, 2024
Summary:
Knowledge workers are excited about handing more and more over to autonomous AI according to a global study. But over what time scale and why?


Trust in AI is crucial. Amid all the hype cycle excesses of the past 18 months, I think we can safely agree on the importance of human beings being able to have faith in the ability of a new wave of automation not to over-step the mark or undermine personal freedoms. How that trust is to be built is a more complicated matter and one that’s, thankfully, earning more air time among responsible thought leaders in the field.

I say ‘responsible’ because I remain hugely concerned that in the AI arms race certain individuals and companies are more focused on how clever their technology can be made as a goal in itself, rather than on longer-term human-centric issues or even the context of present-day realities. I still find it staggering that OpenAI CEO Sam Altman cheerily pitched hallucinations as a feature, not a bug, at last year’s Dreamforce, only hours after Salesforce CEO Marc Benioff evangelized about the vital nature of trust.

Today Salesforce releases the results of a global study of 6,000 knowledge workers from companies of varying sizes that makes for interesting, if at times rather challenging, reading. The top line conclusion is that respondents today trust AI to manage 43% of their work tasks, but still prefer to have human intervention in areas such as training, onboarding and data handling.

There is a boss/employee divide on show - leaders say they trust AI to do over half (51%) of their work, while rank-and-file workers settle on 40%. And while 63% of respondents today say that human involvement is key to building their trust in AI, there is a sub-set that’s already happy to offload tasks to autonomous AI, tasks such as:

  • 15% trust AI to write code autonomously.

  • 13% trust AI to uncover data insights on its own.

  • 12% trust AI to develop internal and external communications without a human.

  • 12% trust autonomous AI to act as their personal assistant.

The boldest prediction to come from the study is a claim that 41% of global workers will trust AI to operate autonomously within three or more years, up from 10% who are comfortable making such a statement today.

Ethics

Paula Goldman is Salesforce’s Chief Ethical and Humane Use Officer, charged with laying down the company’s guidelines and best practices for the adoption of technology. Her reading of the study findings is:

What I take away from the research [is that] workers are excited about a future that involves autonomous AI and many are starting to transition to that. They're already beginning to offload their work to AI. But we're not there yet. As workers adopt and embrace these kinds of tools, like digital agents, we have to bridge trust gaps.

In terms of those tasks that people are currently comfortable with handing over to autonomous AI, Goldman says:

The top three are actually not surprising to me - write code, uncover data insights, and build internal and external comms. We've already seen great strengths for generative AI in all three of these domains and I think they really speak to where AI is quite strong right now. Then there's tasks that people just don't yet feel comfortable delegating, including inclusivity, onboarding and training employees, and ultimately being responsible for keeping data safe.

What’s needed right now is what Salesforce is pitching as ‘human at the helm’. Goldman explains:

We know that a human touch builds trust in AI, and that the way that we apply that human touch, the way that we design human interaction, has to evolve to keep pace with how quickly AI itself is evolving. You've probably heard of this concept of 'human in the loop', which is where people review every single AI-generated output. But that model doesn't work, even for today's sophisticated AI systems.

It's really important, in particular areas, that we have much more sophisticated ways of thinking about how people remain in control of AI. For this next generation of AI, including agents, we need controls that allow people to focus on the highest risk, highest judgement, or highest touch relationship type decisions, and to be able to delegate the rest of it or to delegate significant parts of the work. These controls give people a macro sense of how AI is performing, basically a bird's eye view, and the ability to inspect, which is incredibly important.

And those helm-steering humans need to be educated, she adds:

Obviously it's about products and how we design products, but it's as much about the people as it is about the products. With every technology transformation in history, if we want people to trust and adopt technology, we need to enable them to use it successfully. This research shows us that knowledge and training can go a really long way, and as businesses I believe we need to really lean into that.

Optimism

In contrast to the generic ‘Armageddon peddling’ around AI that is all too visible today, overall Goldman finds a good deal of optimism emerging from the study results:

I found these results striking. I think there is a lot of optimism and a lot of curiosity and a lot of openness to creativity, understanding how we might be able to use AI in creative ways to make our jobs not just more productive, but more enjoyable. Now that is not without risks and obviously the public conversation has taken up those questions as well, but it is striking to me that in this survey, there's a lot of openness and a lot of recognition that these tools are getting better and better, very quickly.

What the study doesn’t address - or at least, not in the public findings - is what that optimism is actually based on. Is it more than just being caught up in the out-of-control AI hype cycle that has produced both wildly pessimistic and wildly optimistic predictions from vendors and analysts alike? Goldman suggests:

Obviously, there's been an incredible amount of attention [given] to AI. Over the years we have been in many different cycles of enthusiasm about technology. I talk with customers every day, I talk with workers within our customers every day. I think there's a recognition of just enormous promise. There's a recognition of transformative current reality and potential, and sort of the expectation that this is getting better and better. I think that's what's reflected here.

But that’s still a qualitative ‘vibe’ rather than quantitative evidence, as it were. Goldman goes on:

I'd have to dive into each of the sort of qualitative inputs and stuff like that to really get to the definitive answer, but I believe that's what's behind it. And maybe I would add a recognition [that] this research was framed [around] the core question [which] is really about AI improving. It's this transformation from reviewing every output, to being able to trust it to do what it's being asked to do. Obviously, that's going to differ by circumstance and by task, but really the core set of questions that you see in this research is about: is that transformation happening, and how quickly is it happening? And people are saying, 'Yes, it's happening, and we need to be prepared for it'.

My take

Salesforce has taken a strong position around issues of trust in relation to AI, and generative AI in particular, of late. There’s been a welcome note of pragmatism from the top down about the ‘non silver bullet’ nature of this technology, reflective of what so many execs have noted is a heady combination of enthusiasm and curiosity running into an equally powerful pragmatism and nervousness.

That’s why I find myself somewhat uncomfortable with the ‘three years or so and we’ll be happier with autonomous AI’ optimism reflected here. I think that’s a prognostication that needs challenging and substantiating at the coal face. Is this aspirational or are there solid factual grounds for holding such a belief? There are organizations that are blazing a trail with generative AI adoption as we saw at this year’s London World Tour event. Equally there are those - many more - who talk of their interest in tapping into gen AI’s potential in the future. That may be three years away; it may not.

But overall this is another useful contribution to the ongoing AI futures debate. Human at the helm is a strong image that has legs. I also found it interesting that Goldman has been picked out as the voice of this research - check out her blog on the subject here - given her role within Salesforce. It sends a very positive message about the importance of ethical considerations in this AI-enabled future.
