Building a culture of trust in the new world of AI

By Rick Rider, December 20, 2017
In the unfamiliar new world of AI, it's important to build trust in artificial intelligence through a focus on culture and outcomes, says Infor's Rick Rider

The race to develop software for B2B AI applications is intense. Development teams are making huge strides, pushing AI further than at any time in the recent past. Yet one big concern remains: trust.

Relying on artificial intelligence is an act of trust in the system. When AI takes actions, or makes recommendations that were previously decided by people, it means letting go of a process in which many have been actively involved. Here’s an easy example: relying on a system to ‘know’ when to reorder stock automatically, where previously a user managed this as a hands-on daily activity.
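The reorder scenario above can be sketched with a classic reorder-point rule. This is purely an illustrative assumption of how such a system might decide; the function name, parameters, and formula are hypothetical, not any vendor's actual logic:

```python
# Illustrative sketch of an automatic reorder decision (assumed logic,
# not an actual product implementation).

def should_reorder(on_hand: int,
                   daily_demand: float,
                   lead_time_days: float,
                   safety_stock: int) -> bool:
    """Reorder when stock on hand falls to or below the reorder point:
    expected demand during the supplier's lead time plus a safety buffer."""
    reorder_point = daily_demand * lead_time_days + safety_stock
    return on_hand <= reorder_point

# Example: 40 units on hand, 5 sold per day, 7-day lead time, 10 safety units.
# Reorder point = 5 * 7 + 10 = 45, so the system would trigger a reorder.
print(should_reorder(on_hand=40, daily_demand=5, lead_time_days=7, safety_stock=10))
```

A user who once made this call by eye each morning now has to trust that the threshold, and the data feeding it, are right, which is exactly the shift in trust the article describes.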

How can we persuade people to trust AI when there’s no visibility into how its decisions have been arrived at? We hear in our daily conversations and in media channels ‘I don’t trust him’ or ‘They can’t be trusted,’ and that’s talking about people. Comments like these dominate our culture today. There’s a risk that AI will become yet another barrier to understanding, leading to even less trust.

Building trust in AI

Where does that leave the buyer of B2B software with AI technologies? Can the algorithms be trusted? Is the output viable? Are even the IT departments that evaluate these tools to be trusted?

Here are three components for AI buyers to consider when assessing a provider’s credentials:

  • Is the organization analytically evolved? Are the underpinnings of big data, science, and analytics firmly in place?
  • Can the technology under evaluation take the organization where it needs to be? Does it make broader sense in the context of the ecosystems of the organization’s market?
  • Are the right cognitive skills and critical-thinking ability available to manage user perceptions and properly evaluate the output?

The day-to-day management of AI is less about the algorithm than about the output; what matters in the end is the result. With forward-thinking management, engaged internal teams, and a solid foundation of data science and analytics in place, buyers of AI must evaluate the technology based on its effectiveness in getting them where they need to be.

Culture and technology

At Infor, our customers often ask us about trust. We take the issue very seriously and have developed a unique group — Data Science Labs located adjacent to the MIT campus in Cambridge, Massachusetts — to bring big data, analytics, and science to all our software. Additionally, our Hook&Loop Digital team is dedicated to optimizing digital transformation. These two groups work closely together with customers to understand how best to deliver the outcomes they seek.

An important aspect of building trust in AI is in making it responsive and transparent to people who interact with these systems. Therefore, we ambitiously seek to add team members with skills in the cognitive sciences — such as anthropology — to help us understand perceptions, and improve techniques to build trust in the output. Critical thinking — the ability to question assumptions or see a bigger picture — is similarly key to developing a trust relationship.

These investments reflect our belief that understanding and analyzing the relationship of culture and technology will support building a trust relationship between buyer and vendor, within organizations, and within markets. This will be the path forward.