Customer service is 'completely transformed' by AI, says Intercom's product chief

Phil Wainewright, April 12, 2024
Summary:
Intercom launches Fin AI Copilot, the second step in its strategy to reinvent itself and transform customer service and support with an 'AI-first' approach.

Fin AI Copilot screenshot (Intercom)

Customer service and support platform Intercom this week unveiled the latest phase of its mission to infuse AI into every step of customer service, introducing Fin AI Copilot, an intelligent assistant for customer service agents.

A year after the launch of Fin AI Agent, an automated intelligent agent that is already answering as many as 70% of incoming queries for some customers, the copilot will now help live agents resolve the remaining queries that need human attention, by surfacing answers that the human agent would previously have had to search for. Paul Adams, Intercom's Chief Product Officer, explains:

When the human rep takes the question, there's already suggestions from copilot: ‘Hey, we think the questions were x or y.’ And that is conversational. You can talk to the copilot, ask it questions. It'll show you the sources of information, so you can check the source — where is it pulling these answers from? And then in one click can reply to the customer with the answer.

Early results show that the copilot is helping customer service teams produce significant productivity gains, on top of what they’ve already seen from deploying the automated agent. He tells me:

They already have an agent answering 40-50% of questions. And then on the remaining half to come in, the human reps are getting 20-30% efficiency gains, because Copilot’s helping them, stopping them from doing all that laborious, horrible searching.

It's also making it easier for new team members to get up to speed, because instead of having to turn to a colleague or mentor, the copilot presents the information they need, he adds.

The AI flywheel

Later this year, Intercom will add the final element of its AI-powered product set, which will provide analytics to team leaders to help them fine-tune performance. He explains:

It's ingesting all of the real-time conversations, ingesting all of the teammate performance, everything imaginable, customer satisfaction scores, whatever you feed into it, it'll analyze. We're in the early stages of building this. It's the one that's most nascent. I'd say we'll launch it maybe later this year.

Once in place, the analytics will complete what he describes as "a flywheel," in which the entire system will continue to improve over time. He goes on:

The analyst is providing all of these AI-generated insights, which then helps the — if you imagine, a 1-2-3, the customer question comes in, agent tries to answer, increasingly it can. If not, it goes to the copilot. Copilot helps the human rep answer. And then all the data and reporting goes to the AI insights engine, the AI analyst. Then the manager can use all of this to improve both. So it's this kind of flywheel effect.

In theory, a human rep should only ever answer the same question once. The AI agent couldn't answer, so it went to the human agent. They answered via Copilot. Hopefully it's correct, got a good satisfaction score. If so, the manager can then say, 'Okay, this answer is a fantastic answer to this question. So next time' — that's the loop — 'next time we see that type of question, the AI agent should try and answer in a similar way.'
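
To make that loop concrete, here is a minimal sketch in Python of the three steps as I understand them from Adams' description: the AI agent answers what it can, unanswered questions escalate to a copilot-assisted human, and well-rated human answers are fed back so the agent can handle similar questions next time. All class and function names are hypothetical, and the exact-match lookup stands in for the LLM retrieval a real system would use; this is not Intercom's implementation.

```python
# Illustrative sketch of the "flywheel" Adams describes: AI agent first,
# copilot-assisted human second, with well-rated answers fed back into the
# content the agent draws on. Names and thresholds are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Resolution:
    answer: str
    resolved_by: str                  # "ai_agent" or "human_rep"
    csat: Optional[float] = None      # satisfaction score, if the customer gave one

@dataclass
class SupportLoop:
    # The content the AI agent can draw on (help center articles, past answers).
    knowledge_base: dict = field(default_factory=dict)

    def handle(self, question: str) -> Resolution:
        # Step 1: the AI agent tries to answer from existing content.
        if question in self.knowledge_base:
            return Resolution(self.knowledge_base[question], "ai_agent")
        # Step 2: escalate. The copilot drafts an answer; the human rep reviews it.
        draft = self.copilot_suggest(question)
        return Resolution(self.human_review(draft), "human_rep")

    def record_feedback(self, question: str, result: Resolution) -> None:
        # Step 3: the analytics side of the loop. A human answer that scored well
        # is promoted so the AI agent can answer similar questions next time.
        if result.resolved_by == "human_rep" and (result.csat or 0) >= 4.5:
            self.knowledge_base[question] = result.answer

    def copilot_suggest(self, question: str) -> str:
        return f"Suggested answer for: {question}"    # stand-in for an LLM call

    def human_review(self, draft: str) -> str:
        return draft                                   # stand-in for the rep's edits
```

In a real system, step 1 would be an LLM answering from the whole knowledge base rather than an exact-match lookup, but the loop structure of answer, escalate and feed back is the one Adams outlines.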

This will lead to a massive change in how customer service and support operates, believes Intercom — and one that the company's founding CEO, Eoghan McCabe, who returned to lead the company a year ago after an absence due to ill health, is betting on to help it challenge larger rival Zendesk's midmarket dominance. Adams says:

The world changed when ChatGPT launched. We believe customer service is going to be completely turned upside down, completely transformed, unrecognizably transformed. We concluded that pretty early from playing with ChatGPT.

AI success rate

This conclusion has led to the company investing an extra $10 million in AI-first product development, starting with the launch of the AI agent last year and now progressing to the copilot and the future analytics offering. The platform uses OpenAI's GPT-4 (both OpenAI and Anthropic use Intercom for their own customer service), along with fine-tuning based on Intercom's proprietary customer service conversation history. The performance is constantly improving, says Adams:

We've [got] charts of average resolution rate, which is the number of queries successfully answered by AI, as measured typically by customer satisfaction score. When we launched our AI agent last year, it went out the door at a 25-26% resolution rate. It's now at 45-46%. And so over the course of less than 12 months, it's gotten better [by] 20 [percentage points], [from] mid-20s to mid-40s ...

The AI, the things that it can do blow our mind. Every week, every other week, it's getting way, way better.

But the success rate also depends on factors such as how well the customer's own knowledge base is organized. Adams says:

Equally important, equally as big, is us educating and teaching our customers about content, improving their content, investing in it as a strategic priority. In the early days of our AI agent, we got a lot of feedback that Fin was giving wrong answers. And then we do all these investigations in the team, and it was the content that was wrong, [or] the content was out of date.

The AI will get better at figuring out which content to use over time, he believes, but even now the effort needed to get started with an automated agent is much less than it was before the arrival of ChatGPT and generative AI. Intercom is retiring its earlier machine learning-powered chatbot, Resolution Bot, which relied on an entirely manual setup to prime it with all the answers, meaning it took a lot of effort to deliver worthwhile results. He elaborates:

[There were] loads of customers who did it really well. They'd get 60-70% resolution rate, really excellent. The problem is, most people did not do that. They set it up poorly, didn't invest in the content, etc. So a lot of the crappy bots experience was because they weren't set up well. The underlying technology was pretty good if you really tried hard. Very few people did that.

But then ChatGPT changed everything. This Large Language Model-based AI just totally changed it. And suddenly just out of the gate, they're just really good at a huge majority of questions. So many customer service questions are pretty basic. It's all changed overnight. So I do think that, at least in our business, what we've seen is a complete step change, dramatic change in just their performance, and people's appetite for turning them on.

Meeting customer expectations

While people may say they prefer to deal with a human rather than a bot — research shows that people give higher satisfaction scores when they believe they're dealing with a human — for the most part, they're happiest when they get a quick resolution, however it's delivered. "What we've found is, people don't care. They just want the answer," says Adams.

What's also interesting, he adds, is that if you run the AI engine over past answers that customers have been given via traditional customer support, you'll find that in some cases those answers were wrong, even though they achieved a high customer satisfaction score. He notes:

The uncomfortable truth is that the quality of human answers isn't as good as we thought it was. So there's kind of a reputation battle between bots and humans, and I think it'll shift over time.

Once the analytics piece is in place, the data on customer satisfaction will start to shift, too. Currently only 10-20% of callers bother to answer the CSAT survey question at the end of a call, leaving at least 80% of interactions unrated. Adams goes on:

AI can do that for 100%. That's a big game-changing thing for leaders. Because AI can look at sentiment. It can look at things like, what was the thing they were supposed to do? The actions piece is huge, like, 'We told them how to change their plan.' Then the AI can look at, well did they change their plan afterwards?
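
As a rough illustration of what AI-rated satisfaction across 100% of conversations could look like, the hedged sketch below combines a sentiment read of the transcript with a check on whether the promised action actually happened. Every function name, keyword list and weight in it is invented for illustration; Intercom has not published how its analyst will compute these scores.

```python
# Hypothetical sketch of scoring every conversation, rather than only the
# 10-20% of customers who answer a survey. Not Intercom's method.

def sentiment_score(transcript: str) -> float:
    """Stand-in for an LLM or classifier returning sentiment in [0, 1]."""
    positive = ("thanks", "great", "perfect", "solved")
    negative = ("frustrated", "still broken", "unacceptable")
    text = transcript.lower()
    score = 0.5
    score += 0.1 * sum(word in text for word in positive)
    score -= 0.15 * sum(phrase in text for phrase in negative)
    return max(0.0, min(1.0, score))

def inferred_csat(transcript: str, action_completed: bool) -> float:
    """Blend conversation sentiment with whether the customer actually did
    the thing they were told to do (e.g. changed their plan afterwards)."""
    follow_through = 1.0 if action_completed else 0.3
    return round(0.6 * sentiment_score(transcript) + 0.4 * follow_through, 2)

# Example: the rep explained how to change plan, and billing data later
# confirms the plan did change.
print(inferred_csat("Thanks, that solved it!", action_completed=True))
```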

Setting expectations is also important, so Intercom is building more transparency into its support processes, showing what stage a ticket is at, who's dealing with it and when to expect an answer.

For now, the most significant impact of the automated agent and the copilot is to free up the human agents' time so that they can give more attention to the complex issues. He explains:

A lot of human support teams are under pressure. They're trying to just get through the queries as fast as possible. What AI has done is removed a ton of the questions. What that's meant is they have way more time. And because they have way more time, they can answer questions more quickly, and they can spend more time on them, because there isn't this red dot going up and up and up all the time as they're trying to furiously figure out the complex question that has been asked.

We've found that actually, human CSAT has gone up, just by the fact that people are using an AI agent, because the [human] agents have more time ...

What happens is, the [AI] agent's answering all the easy questions. What that means is that the human reps get all the hard ones, which is actually good. It's actually a much better job. It's a more interesting job. It's a more challenging job. It's a more fulfilling job.

My take

I was worried at first when I saw that Intercom is calling its strategy "AI-first customer service." While there are certainly times when I prefer dealing with a well-informed AI agent that's able to quickly resolve my issue, I want to get passed to a human as soon as I know that the AI won't have the answer. And when I get to the human, I don't want them to regurgitate the same AI-generated advice that has already failed to resolve my issue.

I'm reassured after my conversation with Adams. Adopting an AI-first strategy doesn't seem to have diverted Intercom away from keeping the whole process human-centric. And as Adams points out, deploying AI in the right places can end up delivering better outcomes for the humans involved. The findings about CSAT scores are particularly eye-opening, but hardly surprising when I think about my own interactions with customer service desks. Often the CSAT I give is based on whether I liked the agent or not, and is frequently given long before I've found out whether the suggested resolution actually fixed my problem. Intercom's 3-step process looks like it will avoid such gaps in the data for those companies that choose to pay attention to what it's telling them.
