Roll up, roll up! The AI circus is open for business with trust center stage in the ring

By Rebecca Wettemann, April 2, 2024
The AI circus is here - but who are the ringmasters and who are the clowns? Spoiler - trust is the key factor.


Welcome to the AI circus, where confusion meets curiosity and trust hangs in the balance.

With all the talk around AI, trust, and benefits versus risks, one thing is clear: when it comes to the willingness of humans to adopt and benefit from AI, how you explain, market, and train on AI matters. In Valoir's recent report, Language Matters – AI User Perceptions, we found that the AI trust battle is just getting started.

In our recent study, we found that 84% of workers say they have experimented with generative AI, either on their own or at work. However, although there has been plenty of press about the current and expected benefits of AI for users, potential users are skeptical. In fact, 17% of workers believe AI at work is about as useful as a screen door on a submarine, and only 15% think AI could help them jumpstart a writing task.

I expect that a lot of this skepticism comes from experience: workers who tried early versions of ChatGPT, Bard, or other tools found them rife with inaccuracies and hallucinations and ruled them out – at that point – as effective business tools. Although the models have obviously evolved, vendors will now need to lure users back for a "redo," which means delivering results free of errors and hallucinations will be critical.

In Valoir's 2024 Predictions, we predicted that this year would deliver some great AI failures, and concerns about AI risks are keeping folks up at night. Privacy violations top the list of worries, with 51% of workers expressing fears about potential privacy violations by AI systems. Apprehensions about AI acting autonomously without human intervention (45%) and the perceived threat of AI replacing human roles (38%) add further layers of complexity to the trust equation.


So, apart from not making mistakes or producing hallucinations, what do vendors of AI solutions need to do to gain buyers’ – and users’ – trust?

Today, there is no consensus among workers regarding the most trusted AI vendor, highlighting the ongoing battle for trust and credibility in the market. And some of the factors you might think matter don’t. 

The key factors on workers' AI "trust meter" are focused on actions (transparency) and outputs (correctness), not longevity or brand name. In our study, 59% of workers said AI needed to come from a large technology vendor to be trusted – and a significant minority said they would be more likely to trust AI if it came from an emerging innovator, not a tech giant.

From the vendor perspective, workers believe they can most trust AI when it comes from a vendor with clear data ethics and privacy policies. At the application level, verifiable data and sources were the top trust factors, followed by no-error outputs. 

Interestingly, longevity of a brand name wasn't necessarily synonymous with trusted AI. In fact, IBM, which has arguably been delivering AI solutions longer than anyone else, was on the long tail of the recognition list for both AI and trusted AI (users were asked, unprompted, to name the vendors they most associated with AI and with trusted AI).

When it came to the brand names most closely associated with “trust in AI,” workers mentioned mostly the same brands they associated with AI, with Google and Microsoft leading the list. However, 1 in 10 workers were unable to name a company they associate with trusted AI.

On the other end of the spectrum, when we asked which company workers were least likely to trust with AI, Meta led the pack, cited almost twice as often as the next most common answer, Google. Microsoft, Apple, and the government were the next most common answers, followed by companies owned by Elon Musk (Tesla and Twitter/X).

It's important to note that in people's association of brand names with trust and AI, no single vendor accounted for more than 20% of responses, and Microsoft and Google topped the lists for both most trusted and least trusted – meaning perceptions of trust are all over the map.

My take

So, in the immortal words of Pee-wee Herman, what is all of this supposed to mean? It means that trust will be key for both buyers and potential users of AI, the battle for trust is just beginning, and it's anyone's to win – or lose. Although the technology itself will be important (as in, no hallucinations), equally important will be communicating not just the benefits of AI for the end user, but how solutions can deliver those benefits with respect for privacy, personal agency, and ethics – in a way that non-data-scientists can understand.
