I liken the AI engines a bit to a brain in a jar, which is very smart. You can put loads of stuff into it, to get insights. But then when you actually have something you want to go and act on, there is no nervous system to go reach into the organization and automate a process, or a command, or an update, or just even basic things. It's actually quite difficult.
This is quite a neat encapsulation of how we use AI in the world today. Each AI service is a separate brain in a jar, into which we feed data and from which answers come back. One brain recognizes images, another interprets language, a third we can train to look for anomalous patterns in business data. They're each very good at what they do, but they have no knowledge of the world outside their own individual jar. That's why, as the data analysts at FinancialForce discovered when they started feeding project data into Salesforce Einstein, some of the suggestions it comes back with simply don't make sense in the real world — such as moving a project start date into the past.
In an analytics context, that's not a major drawback, because there will always be a human looking at the results and figuring out which suggestions make sense and which are just plain stupid. The advantage of using AI is that it can process the raw data a lot faster and on a much broader scale than a human is capable of, says Mason:
AI for reporting, or discovering insights in well-defined data sets, is I think one area that the BI community is going to go after very heavily. In that environment, it really is about taking lots of data sets and then finding patterns and things that will be pretty hard for humans to do.
Brain in a jar = no nervous system
But when AI is used to drive automation, it has to apply its findings in a real-world context, and that requires a lot more knowledge of its environment. Enterprises use MuleSoft's technology to create what the company calls an 'application network' made up of many different API resources. An isolated AI resource won't have the necessary knowledge of that environment, explains Mason.
It can be hundreds, if not thousands of connections, depending on how broad the vocabulary of that particular engine is. So I think that's going to be a big blocker.
We think about APIs and the application network as a nervous system. If the brain is in a jar, if it doesn't have that API network nervous system, then it's going to be really difficult for AI to really take hold and see a ton of value in the automation space.
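Mason's metaphor can be made concrete in code. The sketch below is a minimal, hypothetical illustration — the class and method names (`ApplicationNetwork`, `register`, `act_on`) are invented for this example and are not MuleSoft's actual API — but it shows why an insight is inert without a 'nervous system' of registered endpoints to route it to:

```python
class ApplicationNetwork:
    """Maps named capabilities to callable API endpoints."""
    def __init__(self):
        self._endpoints = {}

    def register(self, capability, handler):
        self._endpoints[capability] = handler

    def act_on(self, decision):
        """Route an AI decision to the endpoint that can carry it out."""
        handler = self._endpoints.get(decision["action"])
        if handler is None:
            # The 'brain in a jar' case: an insight with no nerve to fire.
            raise LookupError(f"No endpoint for action {decision['action']!r}")
        return handler(decision["payload"])

# Wire up one capability, then let a 'decision' flow through it.
network = ApplicationNetwork()
network.register("update_address", lambda p: f"address set to {p['address']}")

result = network.act_on({"action": "update_address",
                         "payload": {"address": "10 Downing St"}})
print(result)  # address set to 10 Downing St
```

An AI engine that produces an action the network has never registered hits the `LookupError` branch — which is precisely the "big blocker" Mason describes when hundreds of connections would need to be wired up first.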
When AI performs an automation role in an application network, it's almost as though it's standing in for a human being. Gartner has the concept of a 'digital consumer' in an integration landscape that represents an automated process, says Mason.
Very often, you actually want AI to action something because it's not just insights, but it's also decision making, or it's updating something because something's changed. That's a 'digital consumer'. There's no user there, but in order to automate processes to the level that people want to automate, you need some way of triggering even basic things, like a chatbot that you say, 'I want to update my home address because I just moved.'
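The 'digital consumer' Mason describes might look something like the following sketch: a chatbot turns a user's sentence into an API call with no human operator in the loop. The intent matching and the `update_profile` endpoint are invented for illustration — a real implementation would use an NLU service and an authenticated HTTP call:

```python
def parse_intent(utterance):
    """Crude keyword matching standing in for a real NLU service."""
    if "update my home address" in utterance.lower():
        return {"intent": "change_address"}
    return {"intent": "unknown"}

def update_profile(field, value):
    """Stand-in for a profile API; a real one would be an HTTP call."""
    return {"status": "ok", field: value}

def digital_consumer(utterance, new_value):
    """The automated process acting on the user's behalf — no user session."""
    intent = parse_intent(utterance)
    if intent["intent"] == "change_address":
        return update_profile("home_address", new_value)
    return {"status": "no_action"}

response = digital_consumer(
    "I want to update my home address because I just moved",
    "42 Elm Street")
print(response)  # {'status': 'ok', 'home_address': '42 Elm Street'}
```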
AI lacks the knowledge
But that digital consumer is not as knowledgeable as even the most naive of humans. Even in the narrow use case of API management, where the AI is simply determining which action to take in response to events and patterns it sees in API usage, MuleSoft has had to take extra care to ensure that the automated actions achieve the intended outcome. It's a useful capability that's necessary as API usage expands, but it has to be applied thoughtfully, says Mason.
The logical thing for us to do is monitor each connection and see where there's degradation over time, where there's spurious behavior from external connections into internal. If you've got 15-20 APIs, you can manage that as a human, you can see something go red and click it. As soon as you've got 150 to 500 ... you need a machine to go in and do that.
So it has to spot it, decide the right course of action, and then call a set of APIs to make that action happen. To think about how to solve a problem, it's fairly simple, but to actually make sure the APIs are there and that you know that what you said you're going to do did actually happen, is a little bit newer. We're an API-driven product, but even we had to make some adjustments to our APIs to make sure we could make those calls the right way.
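The monitor-decide-act-verify loop Mason outlines can be sketched as follows. The thresholds, the `restart_connector` remediation, and the health model are all assumptions made up for this example; the point is the final verification step — not trusting that the API call worked, but re-checking state afterwards:

```python
def find_degraded(connections, error_threshold=0.05):
    """Spot connections whose error rate has degraded past a threshold."""
    return [name for name, stats in connections.items()
            if stats["error_rate"] > error_threshold]

def restart_connector(connections, name):
    """Stand-in remediation API; in practice, an authenticated HTTP call."""
    connections[name]["error_rate"] = 0.0

def remediate(connections):
    """Decide, act, then verify the action actually took effect."""
    fixed, failed = [], []
    for name in find_degraded(connections):
        restart_connector(connections, name)
        # Verification: did what we said we'd do actually happen?
        if connections[name]["error_rate"] <= 0.05:
            fixed.append(name)
        else:
            failed.append(name)
    return fixed, failed

fleet = {"salesforce": {"error_rate": 0.12},
         "workday":    {"error_rate": 0.01},
         "netsuite":   {"error_rate": 0.30}}
fixed, failed = remediate(fleet)
print(fixed)   # ['salesforce', 'netsuite']
print(failed)  # []
```

With 15-20 connections a human can eyeball this; at 150 to 500, as Mason says, the loop has to run as a machine.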
There's a lot more work that needs to be done before AI can really take over and automate a broad range of operations, Mason believes. There are simply too many permutations, exceptions and variables that today's AI services have no knowledge of.
Encouraging API reuse
But in the meantime, AI is invaluable where it can cut through very complex or high-volume analytics tasks to make suggestions where humans make the final decision. Enlisting AI will make a big difference to MuleSoft's mission of encouraging reuse of APIs, says Mason.
A lot of what we train our customers to do is measure not how many APIs you've built, but how much re-usage you're deriving from each API. And then we set up this organization within IT that owns evangelizing and making sure people are aware of what's there and driving reuse.
That's a very human activity, but once you've built it into the tools to make suggestions that 'Hey, you're connecting to Workday, you're doing these three things, there's actually an API that does an employee update. Do you wanna just use the employee API?' That allows us to inject that reuse into the tools and start driving good behavior, even if developers aren't looking to try and reuse things. So that's one way we feel we can start to solve that part of the puzzle.
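The Workday suggestion Mason describes boils down to matching a developer's planned integration against a catalog of existing APIs. The sketch below assumes a simple catalog keyed by the endpoints each API covers — the catalog contents and function names are invented for illustration:

```python
# Hypothetical catalog: each existing API and the raw endpoints it covers.
CATALOG = {
    "employee-update-api": {"workday/worker", "workday/contact",
                            "workday/compensation"},
    "invoice-sync-api": {"netsuite/invoice", "netsuite/customer"},
}

def suggest_reuse(planned_calls):
    """Return existing APIs whose coverage includes all the planned calls."""
    planned = set(planned_calls)
    return [name for name, covers in CATALOG.items()
            if planned <= covers]  # subset test: the API already does all of it

# A developer about to hand-roll two raw Workday calls gets a nudge:
suggestions = suggest_reuse(["workday/worker", "workday/contact"])
print(suggestions)  # ['employee-update-api']
```

Surfacing the suggestion inside the tooling, rather than relying on an evangelism team, is what lets reuse happen even when developers aren't looking for it.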
Mason and I talked more about the rise of APIs, serverless and conversational computing in our meeting yesterday, and I'll cover those points in a second post later on.