
Appian World - eating your own AI dog food

Martin Banks, May 2, 2024
Summary:
Opening the kimono on how Appian plans to utilize AI.


Chief Technology Officer and co-founder, Mike Beckley, used the second-day keynote of the Appian World conference in Washington DC to give delegates advance warning of what to expect coming down the line. Indeed, some of Appian's plans for AI utilization show a strong hint of moving beyond mere monitoring and insight generation, and on to more 'sleeves-rolled-up' direct management and control of functions as well.

However, he side-stepped the opportunity to make the whole presentation himself and instead opened up the field for some of the staff actually doing the development work to talk through some early prototypes that users can expect to be announced later in the year.

First up was Chief Product Architect, Matt Hilliard, who set the scene for the presentations: a set of management processes used to run a hypothetical satellite operation. The idea was that the technologists would show a high-level view of possibilities where AI, and the Appian AI Co-Pilot, might be used to enhance and extend the range of management actions that could be possible in the near future.

Managing a fleet of satellites would be, by its very nature, a web of highly complex processes, so finding the need for a new process to either ease or enrich project management would be a highly probable use case. The basis of the overall management process shown was a diagram of the various processes and the process flow. For the demo, the first process change was to provide a way to handle reports from customers of anomalous behavior.

The co-pilot allows users to communicate with the AI resources in their own conversational way, describing what they seek to achieve and where it fits in the diagram. The result is an addition to the diagram with the new process included. At this point it is possible to visualise missing elements in the process, such as a review step. Given the complexity of the overall process, any review will require multiple inputs. Hilliard said:

You need a lot of sets of eyes on things at this stage to get things right, and it's hard to get all those eyes in the same room, even a virtual room.

The review process must be able to work asynchronously across multiple time zones and multiple languages, providing near real-time translation so colleagues can work in their own language. All suggestions for the new review process are captured in the co-pilot, with every party to the review receiving the comments and additions in their own language, and the development lead deciding whether or not to include each one. Accepting a suggestion adds it to the updated process diagram.
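That loop is easier to picture as a sketch. The snippet below is a minimal, hypothetical illustration of the idea rather than anything Appian has shown: the ReviewComment and ProcessModel structures are invented for this article, and translate() is a stand-in for whichever translation service sits behind the co-pilot.

    from dataclasses import dataclass, field

    def translate(text: str, target_lang: str) -> str:
        """Placeholder for a near real-time translation call."""
        return f"[{target_lang}] {text}"  # in practice this would call a translation service

    @dataclass
    class ReviewComment:
        author: str
        lang: str            # language the comment was written in (text shown in English for brevity)
        text: str
        accepted: bool = False

    @dataclass
    class ProcessModel:
        steps: list[str] = field(default_factory=list)

    # Reviewers in different time zones, each working in their own language.
    participants = {"akiko": "ja", "lucia": "es", "dev_lead": "en"}
    comments = [
        ReviewComment("akiko", "ja", "Add a triage step before engineering review"),
        ReviewComment("lucia", "es", "Notify the account manager when a report arrives"),
    ]

    # Every party sees every suggestion in their own language.
    for user, lang in participants.items():
        for c in comments:
            print(user, "sees:", c.text if c.lang == lang else translate(c.text, lang))

    # The development lead decides which suggestions make it into the process diagram.
    process = ProcessModel(steps=["Receive anomaly report", "Assign to engineering"])
    for c in comments:
        c.accepted = True                 # in the demo, this decision is made in the co-pilot
        process.steps.append(c.text)

    print(process.steps)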

In addition, the co-pilot can also make suggestions based on what it knows about the way the business builds its enterprise applications and what has already been built. For example, it is common business practice to ensure that the account manager for a customer with a problem is kept in the loop. This capability can readily extend out to cover wider areas that might have implications for how the process is developed, such as industry sector compliance and governance regulations. The final decision on their inclusion or not remains with the individual – the development lead.

Another thing the development lead can do to help move this process forward and prepare things for development is ask the co-pilot for suggestions on additional steps that enrich the proposed process. A typical response would likely be to collect relevant data from the customer as to the nature of the anomaly and what evidence has so far been collected, so that the relevant team knows where to start looking for appropriate remedial action. Hilliard commented:

I can just answer in my rambling sort of informal way, and the co-pilot can turn that into bullets and format them, so that it's there for the dev team when they get started. I can even drop into the interface designer, where it will provide the notes I gave to the dev team so I can get a sense for what the form looks like and how it will work in practice.

This might also be an opportunity for it to suggest code reuse opportunities where relevant, in order to resolve the problem faster with a known solution. This might well form part of the co-pilot helping development teams organize their work, especially in terms of setting priorities based on, say, the business value being provided.

Getting more functional

Julian Grunauer, Technology Strategy Engineer, took over to talk through how AI can help users use these applications once they've been built. In particular, he looked at the way AI offers the ability to overcome one of the great hindrances that comes from the traditional structure of applications:

Applications are comprised of three things: data, interfaces, and actions. Data is the information you care about. Interfaces present that information to you in a compelling way, and actions can modify that data. This has worked for the past 30 years, but it's a design pattern not without its flaws, because the onus is on you, the user, to remember to navigate through a nested web of links to find where your data and actions are actually stored.

It can take many clicks and searches to get to the page needed, and saving your place once there means bookmarking an individual interface or opening up a new tab to continue navigating elsewhere. The answer is to have the computer present interfaces and semantically searched data on request, and let the user simply ask the system to perform any required action. Appian, he said, is currently exploring how this new technology can be applied to reimagine how users interact with their business applications.
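The underlying pattern can be sketched in a few lines. Everything in the snippet below is hypothetical and simplified – the record set, the toy word-overlap scoring (a stand-in for real embedding-based semantic search) and the action registry are invented for illustration, not drawn from Appian's platform.

    from dataclasses import dataclass

    @dataclass
    class Record:
        id: str
        text: str

    DATA = [
        Record("rpt-101", "Power supply voltage drop on satellite SAT-7"),
        Record("rpt-102", "Solar panel degradation observed on SAT-7"),
        Record("inv-203", "Invoice for ground station maintenance"),
    ]

    def semantic_search(query: str, records: list[Record]) -> list[Record]:
        """Stand-in for embedding similarity: rank records by words shared with the query."""
        q = set(query.lower().split())
        scored = [(len(q & set(r.text.lower().split())), r) for r in records]
        return [r for score, r in sorted(scored, key=lambda s: -s[0]) if score > 0]

    # The 'actions' leg of the data / interfaces / actions triad.
    ACTIONS = {
        "escalate": lambda rec: print(f"Escalating {rec.id} to the on-call team"),
        "close": lambda rec: print(f"Closing {rec.id}"),
    }

    # One request replaces clicking through nested pages: find the data, then act on it.
    hits = semantic_search("failure reports for satellite SAT-7", DATA)
    for rec in hits:
        print("found:", rec.id, "-", rec.text)
    ACTIONS["escalate"](hits[0])

The point is not the scoring, but the shape of the interaction: one request, one result set, one action, with no navigation in between.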

Building off the same hypothetical satellite management company, the scenario shifted to the task of finding out why one of the satellites is not working. There are already many reports concerning the failure, but they are spread across an almost equal number of different applications and interfaces. So the need now is to instruct the co-pilot to search through all available data across the system for all failure reports on that satellite, regardless of where they are stored, and present them in an ordered fashion that can often point to an actual cause, say power supply failures.

From this, an experience engineer can use the co-pilot to search for all relevant data, including historical records such as maintenance reports concerning that area. This has historically been a part of the job that is very time consuming, as well as being prone to not locating all relevant data that might prove useful. Grunauer said:

Co-pilot can pull up any task that you want, filter your data, semantically create these searches, and translate natural language into these queries for you. It can fill out your form information and it can identify, generate and create all of this document information directly from within Appian, down to a Word document being edited directly with an app. And now our responsibility as the user is to actually edit and review this information that AI has generated for us directly within Appian itself. And this is a single display. I don't have to open up endless tabs or bookmark each individual page that I want to see, I have one single workflow that I can save for later.
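The 'translate natural language into these queries' step Grunauer mentions can be pictured roughly as follows. This is a generic sketch, not Appian's co-pilot: ask_model() is a placeholder for whatever LLM call is used, and the filter schema and field names are invented for illustration.

    import json

    def ask_model(prompt: str) -> str:
        """Placeholder for an LLM call that returns a structured filter as JSON."""
        return json.dumps({"entity": "failure_report",
                           "filters": {"satellite": "SAT-7", "subsystem": "power"},
                           "sort": "reported_at desc"})

    def run_query(spec: dict, rows: list[dict]) -> list[dict]:
        """Apply the structured filter the model produced to the available records."""
        out = [r for r in rows if all(r.get(k) == v for k, v in spec["filters"].items())]
        field, direction = spec["sort"].split()
        return sorted(out, key=lambda r: r[field], reverse=(direction == "desc"))

    reports = [
        {"satellite": "SAT-7", "subsystem": "power", "reported_at": "2024-04-28"},
        {"satellite": "SAT-7", "subsystem": "comms", "reported_at": "2024-04-27"},
        {"satellite": "SAT-3", "subsystem": "power", "reported_at": "2024-04-29"},
    ]

    spec = json.loads(ask_model("show me recent power failures on SAT-7"))
    for row in run_query(spec, reports):
        print(row)  # the user then reviews and edits what was generated, as in the quote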

3D Plus…more functional 

Finally, Brooks Watson, Product Strategy Software Engineer, got to make the one real announcement amongst the kimono openings on future probabilities - the formal launch of Appian 3D Plus. This has the goal of squaring what Appian sees as an increasingly aggravating circle. Watson explained:

Traditionally, 3D models have been locked away in fat client desktop-based applications. More recently, web-based services have hit the market, but they require you to give up your files and store them on unsecure cloud environments. Meanwhile, related data is siloed away on disparate systems. What if you could not only view your models, but fully integrate them with the rest of your enterprise data? What if you could build workflows around them?

This piggy-backs off the enhanced Data Fabric Appian also announced at the conference, allowing 3D models to be brought directly into applications, where they can be viewed and utilised. Viewing also includes drilling in to view individual parts and making them available as additional sources of data. This can all then be made available as part of a visualisation of a product, with which the co-pilot can interact directly. This makes it easier to investigate problems and identify causes, all within a secure, compliant platform.
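One simple way to picture 'parts as additional sources of data' is a part hierarchy whose nodes join onto enterprise records. The structures and field names below are hypothetical illustrations, not the 3D Plus or Data Fabric API.

    from dataclasses import dataclass, field

    @dataclass
    class Part:
        part_no: str
        name: str
        children: list["Part"] = field(default_factory=list)

    # Enterprise records keyed by part number - the join a data fabric would provide.
    MAINTENANCE = {
        "SP-01": [{"date": "2023-11-02", "note": "Replaced solar panel hinge"}],
        "PSU-04": [{"date": "2024-01-15", "note": "Power supply firmware update"}],
    }

    satellite = Part("SAT-7", "Satellite", children=[
        Part("SP-01", "Solar panel"),
        Part("PSU-04", "Power supply unit"),
    ])

    def drill_down(part: Part, depth: int = 0) -> None:
        """Walk the part tree, surfacing the records joined to each part."""
        print("  " * depth + f"{part.name} ({part.part_no})")
        for rec in MAINTENANCE.get(part.part_no, []):
            print("  " * (depth + 1) + f"maintenance {rec['date']}: {rec['note']}")
        for child in part.children:
            drill_down(child, depth + 1)

    drill_down(satellite)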

Using that same hypothetical satellite power failure, he demoed 3D Plus indicating the solar panel as the most likely cause. He then showed how it can be used to operationalise those findings, automatically attaching the documents discovered by the co-pilot and then clicking Submit. From there, actions can be guided via the Data Fabric, whose event streaming capabilities enable real-time monitoring of those actions, ensuring that teams are always working with the most current data.
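The event streaming idea amounts to actions publishing events that monitoring views consume as they happen. The snippet below is a generic publish/subscribe sketch in plain Python, offered only to illustrate the pattern; it is not the Data Fabric's actual streaming interface, and the topic and field names are invented.

    from collections import defaultdict
    from datetime import datetime, timezone

    class EventStream:
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, payload):
            event = {"topic": topic, "at": datetime.now(timezone.utc).isoformat(), **payload}
            for handler in self.subscribers[topic]:
                handler(event)

    stream = EventStream()

    # The monitor keeps every team looking at the latest state of the case.
    latest_state = {}
    def on_event(event):
        latest_state[event["case_id"]] = event
        print("dashboard refresh:", event["case_id"], event["status"])

    stream.subscribe("repair-actions", on_event)

    # Submitting the repair case (with the documents the co-pilot attached) emits events.
    stream.publish("repair-actions", {"case_id": "SAT-7-PWR-001", "status": "submitted",
                                      "attachments": ["failure_summary.docx"]})
    stream.publish("repair-actions", {"case_id": "SAT-7-PWR-001", "status": "in_review",
                                      "attachments": []})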

It was down to CTO Beckley to round things off by stating that the company had now moved well beyond retrieval-augmented generation (RAG), and is now moving into functional action, calling AI agents and taking advantage of all the external tools now available.
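'Functional action' of that sort generally comes down to an agent loop in which a model proposes a tool call, the runtime executes it, and the result is fed back. The sketch below is a generic, hypothetical illustration of that loop: propose_step() stands in for the model, and the single tool and its name are invented; none of this is drawn from Appian's agent tooling.

    def propose_step(goal: str, history: list) -> dict:
        """Placeholder for the model deciding the next tool call (or deciding to finish)."""
        if not history:
            return {"tool": "search_reports", "args": {"satellite": "SAT-7"}}
        return {"tool": "finish", "args": {"summary": "Probable power supply failure"}}

    # The external tools the agent is allowed to call.
    TOOLS = {
        "search_reports": lambda satellite: [f"{satellite}: voltage drop", f"{satellite}: panel fault"],
    }

    def run_agent(goal: str) -> str:
        history = []
        while True:
            step = propose_step(goal, history)
            if step["tool"] == "finish":
                return step["args"]["summary"]
            result = TOOLS[step["tool"]](**step["args"])  # the agent acts, rather than just answering
            history.append((step, result))

    print(run_agent("Diagnose the SAT-7 power outage"))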

My take

Appian sees a need to play a role in bringing AI applications to heel when they are at work in day-to-day business operations, but that does not mean the company has such a downer on the technology that it sees no role for it in its own operations, as part of an ever-broadening view of what constitutes process management.
