Salesforce minds the AI Trust Gap

Katy Ring, November 7, 2023
Summary:
Trust matters, and never more so than today with the rise of generative AI...


Nights are lengthening and festivals to honour the dead and ward off evil spirits are being celebrated in many cultures. In the UK, thanks to these celebrations commingling with the UK’s global AI Safety Summit last week (hosted by HMG at Bletchley Park), the spectre of AI is spooking commentators and influencers.

Coinciding with the broad discussion of the possible tricks and treats that AI may bestow on us outlined at the Summit, Salesforce has revealed more information about the generative AI capabilities it is bringing to the market. In the business context, there is much in the Salesforce approach to reassure organizations wanting to take their first steps to deploy generative AI.

The AI Trust Gap

According to Sift.com’s Q2 2023 Digital Trust and Safety Index, nearly half of consumers admit it has become more difficult to identify scams since ChatGPT (the most widely used large language model chatbot) became available. Beyond increasing the sophistication and scale of low-level fraud, generative AI is also in the societal dock for enabling synthetic porn, destroying jobs and (as hysteria builds) potentially destroying the human race. At the enterprise level, concerns are more prosaic but nevertheless serious, and typically follow these lines of enquiry:

  • How can this technology be used to improve my business and impress shareholders?
  • How can my organization take advantage of the technology while managing compliance around data privacy?
  • Is my organization’s data capable of feeding a Large Language Model to generate business improvement?
  • How do we access the skills to apply this technology?

As soon as an internal team assembles to play with generative AI in an enterprise setting, other concerns surface. There is a tendency for models to generate factual errors known as “hallucinations” (AI output that deviates from the training data), and there is evidence of hallucination rates running as high as 20-25%.

Another issue is the purity, or accuracy, of the data fed to the model, which can embed harmful stereotypes and biases that skew the output, as demonstrated by Tay, Microsoft’s AI chatbot experiment in 2016. Furthermore, data may be collected from human guesstimates in surveys and/or lazy form-filling, both of which can produce very weak models. Build, say, a proactive maintenance model on such hit-and-miss data and an organization can soon run into expensive problems.

This is all contributing to what Salesforce refers to as the AI Trust Gap.

Einstein Prompt Builder and Trust Layer

To address the AI Trust Gap, Salesforce is introducing the Einstein Trust Layer within its Einstein AI platform. The Trust Layer provides guardrails for creating prompts and anchors queries in the data each organization already holds within its Salesforce applications.

Ryan Schellack, Director, Data & AI Product Marketing at Salesforce, explains:

Data Cloud grounds every prompt (to a Large Language Model) in enterprise content and trusted customer data, which is important because otherwise you are in garbage in/garbage out territory.

The company’s Vector Search (billed as 'coming soon') means that organizations will be able to query PDFs, product manuals and visuals, so that prompts for Einstein can draw on both structured and unstructured Salesforce CRM opportunity and case data. Salesforce is taking a Bring Your Own Lake approach to data: beyond its own Data Cloud it is currently piloting integration with both Google BigQuery and Snowflake, with support for Databricks and Amazon Redshift/Glue planned to follow next year.
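To make the mechanics concrete, here is a minimal sketch of how vector-search grounding works in principle: document chunks and the query are embedded as vectors, and the closest chunk is pulled into the prompt. The toy embeddings, documents and function names below are illustrative assumptions, not Salesforce's actual API.

```python
# Illustrative sketch of vector-search grounding (hypothetical, not Salesforce's API).
# Chunks of unstructured content (PDF pages, manual sections, case notes) are embedded
# as vectors; the user's question is embedded the same way, and the nearest chunk is
# prepended to the prompt so the model answers from trusted data rather than guesswork.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional embeddings; a real system would use an embedding model.
doc_chunks = {
    "Case 00123: customer reported login failures after the v2.4 update.": np.array([0.9, 0.1, 0.0]),
    "Opportunity ACME-77: renewal due Q1, expansion interest in analytics.": np.array([0.1, 0.8, 0.2]),
}
query = "Why can't the customer log in?"
query_embedding = np.array([0.85, 0.15, 0.05])  # embedding of the query above

# Retrieve the most relevant chunk and ground the prompt with it.
best_chunk = max(doc_chunks, key=lambda text: cosine_similarity(doc_chunks[text], query_embedding))
prompt = f"Answer using only this CRM context:\n{best_chunk}\n\nQuestion: {query}"
print(prompt)
```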

The Einstein Prompt Builder is a low-code tool that enables the creation of prompt templates with text and references sourced back to data in the CRM system. It has been designed to “seamlessly retrieve data while safeguarding access policies and controls,” according to Avanthika Ramesh, Director of Product, Salesforce AI.
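As a rough illustration of what such a template might look like, the sketch below fills a reusable prompt with CRM merge fields; the field names (account_name, open_cases, last_order_date) are invented for the example, and in Prompt Builder itself retrieval would be subject to the access controls Ramesh describes.

```python
# Hypothetical sketch of a reusable prompt template with CRM merge fields.
# Field names are invented for illustration; real templates would reference
# actual CRM objects and honour the platform's access policies.
TEMPLATE = (
    "You are a service assistant. Using only the CRM data below, "
    "draft a status summary for the account.\n"
    "Account: {account_name}\n"
    "Open cases: {open_cases}\n"
    "Last order date: {last_order_date}\n"
)

crm_record = {  # stand-in for a record retrieved via governed CRM queries
    "account_name": "ACME Corp",
    "open_cases": 3,
    "last_order_date": "2023-10-12",
}

prompt = TEMPLATE.format(**crm_record)
print(prompt)
```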

The Einstein Trust Layer provides dynamic data grounding, with real-time information going into prompts before they are sent to the Large Language Models. Data masking detects personally identifiable information (PII) and masks it while still providing contextual data for the response. Prompt Defense uses rule-based heuristics to stop end users from injecting free text into the prompt and misusing the system.

The query is then sent to the Large Language Model, which can be hosted within Salesforce’s trust boundary, run as the customer’s own model on its own infrastructure, or accessed as an external model within a shared trust boundary. The response passes back through toxicity detection filters to assess whether the model has returned anything dubious, after which the data is demasked and an audit trail is created so that everything that fed into the response is captured.
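To show the shape of that round trip, here is a minimal, hypothetical sketch of the mask, send, filter, demask and audit sequence; the regex, word blocklist and stubbed model call below stand in for the real PII detection, toxicity scoring and LLM services, whose internals the article does not detail.

```python
# Hypothetical sketch of the Trust Layer round trip: mask PII, send the prompt,
# screen the response for toxicity, demask, and record an audit trail.
import re

PII_PATTERNS = {"EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")}
BLOCKLIST = {"idiot"}  # crude stand-in for a real toxicity classifier
audit_log = []

def mask(text: str) -> tuple[str, dict]:
    """Replace detected PII with placeholder tokens, remembering the originals."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def call_llm(prompt: str) -> str:
    # Stubbed model call; a real deployment would route to a model inside or
    # outside the trust boundary, as described above.
    return "Reply to <EMAIL_0> confirming the case update."

def is_toxic(text: str) -> bool:
    return any(word in text.lower() for word in BLOCKLIST)

masked_prompt, mapping = mask("Email jane.doe@example.com about case 00123.")
response = call_llm(masked_prompt)
if not is_toxic(response):
    for token, original in mapping.items():  # demask before returning to the user
        response = response.replace(token, original)
audit_log.append({"prompt": masked_prompt, "response": response})  # capture what fed the response
print(response)
```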

Einstein Copilot operates with the Einstein Trust Layer to provide guardrails for conversational AI, making it easier to glean insights about the data in every CRM app. It will be followed into the market by Einstein Copilot Studio, which will enable no-code customization of Copilot. Prompt Builder will be the first Copilot Studio capability to launch, providing a no-code interface through which power users can create reusable prompt templates, with responses fielded in a playground to review sample GPT output and score its toxicity.

My take

As Microsoft launches its own Microsoft 365 Copilot add-on for Office to its very largest customers, widespread use of generative AI in the mundane world of work is now a reality. Salesforce Einstein Copilot offers sales, marketing and service professionals an AI environment with decent guardrails for getting started while minimising risk. With apologies to Susan Jeffers, products such as Salesforce Einstein GPT make it easier to feel the corporate fear and deploy generative AI anyway.
