Don’t get too sweet on AI, warns SugarCRM CEO

SUMMARY:

As Dreamforce gets underway with AI top of the agenda, another CRM vendor issues a note of caution about the limitations of the technology.


As the buzz builds around AI, automation, and robotics, business leaders should be wary of rushing to join the crowd and making AI a tactical, rather than a strategic, decision. That’s the view of Larry Augustin, CEO of privately held SugarCRM, who warns:

People need to be cautious about what they automate versus what they don’t. In all of this, business leaders shouldn’t get so caught up in the vision of the future of automation that they forget that there’s a lot of value in human interaction as well.

The risk could be that AI becomes this decade’s offshoring. In the Noughties, the outsourcing sector benefited from the rush to send customer contact centres offshore, with banks, telcos, and utilities among the organisations using technology to keep customers at arm’s length while forcing costs down.

But in the years since, a consumer backlash forced many to repatriate those functions – at considerable cost to reputations and bottom lines. Augustin agrees:

We saw some of that backlash when interactive voice response [IVR] came out – ‘Can I speak to a real person?’ – and I think we’ll see some of that if companies rush too fast into automating too much of their business. People need to be careful about it, and not lose track of the value of human interaction.

But getting the two of them to work together is important. The automated interaction can handle many of the mundane tasks, but they can’t be two completely different, disconnected channels. When you’re talking to a human, that person needs to understand what you’ve done with the machine on the automated side, and vice versa. At times, the machine will have to be directed or influenced by other interactions that you’ve had with the customer.

Augustin’s call for caution might surprise many, as SugarCRM itself joined the AI crowd in 2017 with Hint, the first in its new, separate line of relationship intelligence products. But his point is that CRM and AI should remain focused on the human, rather than the machine, and on augmenting and complementing human skills, not replacing them:

There’s definitely a class of technology in which the purpose is to replace people. If you look at how a lot of IVR and call centre technologies are sold, many of them are on the basis of, ‘Now you can have 50 people do the job of 100’, but that’s not how we do business.

We charge based on the number of users, so it works against us to decrease the number of people! That doesn’t mean we’re not going to make people more efficient, but fundamentally we don’t see our job as being people replacement, we see it as being about increased productivity and helping people to do their jobs better. We focus heavily on the human interaction part of the relationship.

As the technology space bends around the massive objects of the big West Coast conferences, such as this week’s Dreamforce jamboree, other stars in the IT firmament sometimes struggle to pull media attention into their orbits. But gravity has nothing to do with size, as Sugar understands very well: the CRM vendor attracts custom through depth and common sense rather than being a gas giant on the scale of Jupiter:

Our thesis is that as people face the prospect of talking to computers more, they want higher quality out of the opportunity to talk to a human. Many people want to get past the machine as fast as possible to talk to a person. That’s where we play. We’re not in the business of making the automated agent that replaces the human, or the automated website.

People want an automated or self-service model for simple things, but when they have a real problem they want a human to listen to them. And connecting those two things is important, so we’re in the business of helping the human agent to do their job better.

Hint is, initially, focused on giving you information about the person at the other end of the connection – information that is both publicly available and drawn from your own personal, direct interactions – giving you some intelligence about that person. That’s about assisting people, and making human-to-human interactions more valuable.

Caution

There’s evidence that many organisations are heeding the call for caution from Augustin and others. SugarCRM recently conducted its own research into AI attitudes among 400 C-level executives in the US and UK. Augustin says:

A high percentage of people – 63% – said that they are going to use AI to some extent in the next two years, but almost a quarter said that they were unsure about it, and 15% said definitely not. So there are two poles, if you will, one of optimism and one of scepticism, and the sceptics see AI as a science experiment.

So why are some organisations cautious – and why should others be too? Augustin argues:

The top concerns around AI are to do with trusting the technology. More than half of our respondents said they are worried about data security and what the system is going to do with that data; 30% said data security is their top concern.

Others are worried that AI will make errors. One of the challenges with AI is that it is very difficult for the AI to tell you why it’s reached a decision. People are looking for cause and effect. If a person makes a mistake you can have a conversation with them about it. But you can’t do that with an AI today.

My take

It’s refreshing to hear a technology leader urge caution about a technology at the peak of its hype cycle. But over the next few years the big challenge with AI will be less about trust or data security in simple terms, and more about transparency and consent within systems that are already black boxes.

So what can the IT industry do to overcome people’s scepticism and mistrust? Transparency will be the key, agrees Augustin:

Looking to create AI systems that have transparency so you can begin to understand the source of why an algorithm has been trained in the way that it has, so you can go back and see that it’s because of a bias in the training data. That is definitely going to be an important area of research before we get to the point where people are comfortable with AI. Lack of transparency is a big issue around the acceptance of AI, and it’s going to be a barrier until we solve that problem technically.

