Two takes on conversational computing and the limits of AI

Phil Wainewright, October 10, 2017
Summary:
Visits to conversational computing vendors Tact and Apttus highlighted five lessons on the limits of AI and how to overcome them in B2B enterprise applications

Bob and Bobette - model bots in conversation © Gary Blakeley - Fotolia.com
There are two contrasting schools of thought currently doing the rounds in the tech industry on the state of artificial intelligence (AI). One side is eloquently represented by the recent essay on AI opportunity by Google digital marketing evangelist Avinash Kaushik, which essentially concludes that "Humans simply can’t compete." The opposite view was set out last week by AI academic Rodney Brooks in an MIT Technology Review article, in which he cautions against over-optimistic AI predictions:

Today’s robots and AI systems are incredibly narrow in what they can do. Human-style generalizations do not apply.

Personally I side with the latter view. Let's not overestimate what can be achieved with AI today, or underestimate the decades it could take before AI escapes its current limitations. As another recent MIT Technology Review article about the work of deep learning pioneer Geoffrey Hinton reminds us, today's AI is effectively a one-trick pony, riding on a technique first described by Hinton 30 years ago:

These 'deep learning' systems are still pretty dumb, in spite of how smart they sometimes seem. A computer that sees a picture of a pile of doughnuts piled up on a table and captions it, automatically, as 'a pile of doughnuts piled on a table' seems to understand the world; but when that same program sees a picture of a girl brushing her teeth and says 'The boy is holding a baseball bat,' you realize how thin that understanding really is, if ever it was there at all.

Neural nets are just thoughtless fuzzy pattern recognizers, and as useful as fuzzy pattern recognizers can be — hence the rush to integrate them into just about every kind of software — they represent, at best, a limited brand of intelligence, one that is easily fooled.

AI and enterprise applications

The upshot of all this is that buyers should be wary of vendors claiming to add AI to their products, unless those vendors can show that what they're adding produces meaningful results. Fortunately, the highly specialized B2B world of enterprise applications has one important advantage when it comes to applying AI with all its current limitations. It is possible to achieve meaningful results by applying AI to a carefully constrained set of data and rules. For once, it turns out that the much-maligned silos of enterprise applications have their uses.

Last Friday in Silicon Valley I met with two vendors that are successfully working within those constraints to bring AI-enhanced automation to the B2B selling process. We've written about both before — Tact, whose digital assistant automates workflow across multiple applications used by salespeople, and Apttus, whose virtual assistant automates tasks and makes AI-powered recommendations designed to improve sales outcomes.

Their early experience with chatbots and intelligent agents provides useful insights into some of the barriers that have to be overcome when using an AI-powered conversational UI in the workplace. Here are five lessons I took away with me.

1. An ontology goes a long way

One of the reasons agents like Siri and Alexa fail so often at natural language recognition is that the range of possibilities they have to cater for is virtually limitless. Unlike humans in conversation, they have no context for recognizing what they hear or receive. But in a B2B application, the context is much more limited, and it becomes possible to build up an ontology for that particular realm, increasing the accuracy of recognition.

This is not just a matter of limiting the conversation to a particular context — in this case, the sales process. The Tact system, for example, also connects into an organization's corporate directory and CRM system, so that the names of colleagues, products, customers and contacts can all become part of the ontology. Tact's natural language understanding (NLU) engine can then take Siri's phonetic rendering of a name and find the most likely correct spelling from the directory. This is an example of how a more specialist domain makes the task much easier, Tact's CEO Chuck Ganapathi tells me:

The constraint is our friend, as is the data we have access to about the user and their context. So the ontology is a big advantage.
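
To make that concrete, here's a rough sketch of the kind of lookup an ontology makes possible. To be clear, this is my own illustration in Python, not Tact's code; the directory names are invented, and a real NLU engine would use far more than simple string similarity.

```python
import difflib

# Invented sample of names drawn from a corporate directory / CRM ontology.
KNOWN_NAMES = ["Priya Narayanan", "Pierre Moreau", "Petra Nordin"]

def resolve_name(phonetic_guess, candidates=KNOWN_NAMES):
    """Map a speech engine's best-guess spelling onto a known directory name."""
    matches = difflib.get_close_matches(phonetic_guess, candidates, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(resolve_name("Pria Narayanin"))  # -> "Priya Narayanan"
```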

2. Matching patterns to rules

As the MIT Technology Review article on Hinton points out, AI excels at fuzzy pattern recognition. In conversational computing, that pattern recognition operates at three layers. Working out what we are saying is the first layer. The next layer is composing the appropriate conversational response. Both of these layers remain consistent over time. The third layer is the predictive analytics that produces the content of that response, such as a suggested answer or choice. At this third layer, the interplay between patterns and rules becomes important. Here's why.

A common scenario for Apttus might be suggesting a discount, based on looking at the most successful outcomes when dealing with similar customer engagements in the past. The algorithm can look at more data in aggregate than any individual salesperson will have encountered in their own experience, and therefore can make a better assessment of the most appropriate discount to close the deal without giving away more margin than necessary.

The problem with this in the real world is that sometimes the historic pattern suddenly ceases to be a predictor of future behavior — for example, a new competitor comes on the scene, or there is a sudden downturn in the economy. Over time, the system will learn from that change in behavior and rewrite its model. But the process can be sped up by adding rules that override the historic pattern analysis when external conditions change. There are many other examples — compliance is an important case in point — where the system needs to conform to externally defined rules as well as the patterns it discovers.
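
To illustrate that interplay, here's a minimal sketch of a rules layer sitting on top of a learned model. The model call, the thresholds and the market-downturn flag are all invented for the purpose of the example; this is not Apttus' implementation.

```python
def predict_discount(deal):
    """Stand-in for a model trained on the outcomes of similar past deals."""
    return 0.12  # e.g. the learned pattern suggests a 12% discount

def suggest_discount(deal, market_downturn=False, compliance_cap=0.20):
    suggestion = predict_discount(deal)
    if market_downturn:
        # External conditions have changed faster than the model can relearn,
        # so a hand-written rule temporarily widens the suggested discount.
        suggestion = suggestion * 1.5
    # Compliance rules always override the learned pattern.
    return min(suggestion, compliance_cap)

print(suggest_discount({"customer": "Acme", "size": 250_000}, market_downturn=True))  # 0.18
```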

3. Building a graph

For Tact, the customer graph is the cornerstone of its platform, says Ganapathi. Instead of simply recording data, a graph database maps relationships. Whereas a conventional database can only produce query results, a knowledge graph has enough context to produce answers to questions.

This is a necessary foundation for a conversational UI, since the intelligent agent is expected to produce relevant answers, rather than just churning out data that people then have to interpret.
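
As a simple illustration of the difference, here's a toy graph in Python. The node names and edge labels are invented, and Tact's customer graph is obviously far richer than an in-memory dictionary; the point is that answers come from walking relationships rather than scanning rows.

```python
# A toy customer graph: nodes connected by labelled relationships.
graph = {
    "Acme Corp":  [("employs", "Dana Lee"), ("has_opportunity", "Q4 renewal")],
    "Dana Lee":   [("works_at", "Acme Corp"), ("attended", "Demo on 3 Oct")],
    "Q4 renewal": [("owned_by", "you"), ("involves", "Dana Lee")],
}

def who_is_involved(opportunity):
    """Answer a question by following relationships from the opportunity node."""
    return [node for relation, node in graph.get(opportunity, []) if relation == "involves"]

print(who_is_involved("Q4 renewal"))  # -> ['Dana Lee']
```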

4. Sustaining a conversation

Existing chatbot frameworks are not sophisticated enough to provide the workflow automation that Tact wanted to build into its platform, says Ganapathi. They lack the flexibility to take part in the kind of conversation that comes naturally to a human being, in two specific ways.

The first is the ability to maintain context through a threaded conversation. For example, a salesperson might ask for the address of a contact, and then say, "Book me a taxi there." Tact's agent understands that the word 'there' refers back to the address previously mentioned. Most chatbots would not retain that context, and would ask for the address again.
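
One way to picture that context retention is a dialog state that remembers the last entity mentioned in the thread. The sketch below is my own simplification, with invented intent names; it is not how Tact's NLU stack is actually built.

```python
class ConversationContext:
    """Keeps track of the last address mentioned so 'there' can be resolved."""

    def __init__(self):
        self.last_address = None

    def handle(self, intent, entities):
        if intent == "get_contact_address":
            self.last_address = entities["address"]
            return f"The address is {self.last_address}."
        if intent == "book_taxi":
            destination = entities.get("destination")
            if destination in (None, "there"):
                destination = self.last_address  # coreference resolved from context
            if destination is None:
                return "Where should the taxi go?"
            return f"Booking a taxi to {destination}."

ctx = ConversationContext()
print(ctx.handle("get_contact_address", {"address": "1 Market St, San Francisco"}))
print(ctx.handle("book_taxi", {"destination": "there"}))
```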

The second element is the ability to manage a dialog. For example, after finding someone on LinkedIn, Tact might then ask, 'Do you want to add them as a lead?' And afterwards, 'Do you want to set up an appointment?' This ability to follow common patterns of sales behavior, while maintaining context through a threaded conversation, is a crucial part of Tact's automation of workflow across multiple applications, such as LinkedIn, CRM and calendar. Ganapathi says:

The dialog management is immensely important to make this stuff happen ... A lot of our secret sauce is in that component.
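
Dialog management of this kind can be thought of as a scripted flow of follow-up prompts layered on top of the context handling above. The sketch below is again my own illustration; the prompts echo the LinkedIn example, but the flow itself is invented.

```python
FOLLOW_UPS = [
    ("add_lead",     "Do you want to add them as a lead?"),
    ("book_meeting", "Do you want to set up an appointment?"),
]

def run_follow_ups(contact, ask):
    """Walk the follow-up prompts in order, acting on each 'yes'."""
    for action, prompt in FOLLOW_UPS:
        if ask(prompt):
            print(f"[{action}] completed for {contact}")

# Example: a user who answers 'yes' to every prompt.
run_follow_ups("Dana Lee", lambda prompt: True)
```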

5. Explaining why

One of the early obstacles Apttus encountered was trust. Salespeople didn't like being given a suggestion when they couldn't see why it was being made. But one of the limitations of deep learning is that, although it can recognize patterns, it can't explain how it arrived at them.

The solution Apttus found was to send in another algorithm to find other examples that were similar to the suggested answer. The way I think about this is that it's effectively reverse-engineering the rationale for the suggestion. The salesperson can then see that these other examples provide a justification for the suggestion. Once they're able to validate it against those examples, they're happy to follow the recommendation.
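
One way to picture that reverse-engineering is a nearest-neighbor lookup over past deals. The deal fields and the similarity measure below are invented for illustration; Apttus has not published how its algorithm actually works.

```python
# Invented history of past deals used to justify a suggested discount.
PAST_DEALS = [
    {"industry": "retail",  "size": 200_000,   "discount": 0.10, "won": True},
    {"industry": "retail",  "size": 260_000,   "discount": 0.12, "won": True},
    {"industry": "telecom", "size": 1_000_000, "discount": 0.20, "won": False},
]

def similar_deals(deal, k=2):
    """Return the k past deals in the same industry closest in size."""
    same_industry = [d for d in PAST_DEALS if d["industry"] == deal["industry"]]
    return sorted(same_industry, key=lambda d: abs(d["size"] - deal["size"]))[:k]

new_deal = {"industry": "retail", "size": 250_000}
for d in similar_deals(new_deal):
    print(d)  # evidence the salesperson can inspect before accepting the suggestion
```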

My take

It's still early days for AI, but because enterprise applications can apply it within highly constrained B2B environments, they may be the first to show real benefits from deploying the technology. Nevertheless, it's important not to underestimate the complexity of what we're asking the technology to achieve, and to adjust expectations accordingly.
