
Software AG enlists generative AI to ease process analysis and IT portfolio management

Phil Wainewright, April 16, 2024
Summary:
We sit down with Software AG product chief Stefan Sigg at IUG 2024 to discuss the potential impact of generative AI on business process intelligence and modeling as it rolls out new AI capabilities for its ARIS and Alfabet products.

Software AG process intelligence slide (© Software AG)

Software AG last week announced AI-powered capabilities designed to broaden access to its ARIS process intelligence and Alfabet IT asset management products. Attendees at the company's annual International User Group conference, held this year in Dublin, Ireland, also heard about its other products across IoT, mainframe tooling, and integration. But with the webMethods and StreamSets integration products set to become part of IBM as soon as the agreed acquisition closes, it was time for ARIS and Alfabet to take the limelight.

A new ARIS AI Companion is designed to enable business users to query process data and transform processes using natural language prompts, rather than having to be process experts. AI-assisted process analysis is available now, empowering users to ask questions such as 'Find the anomalies in our purchase-to-pay processes' or 'What are the biggest bottlenecks in our distribution networks?' and get detailed answers from the process mining tool. Later this year, the AI Companion will also allow users to generate process models from natural language prompts.
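
Software AG has not published how the AI Companion maps such prompts onto its analysis engine, but it helps to see what a bottleneck question boils down to once processes have been mined into an event log. The sketch below is a generic illustration in Python with pandas, not ARIS code; the event log, its column names and the purchase-order activities are all invented. It simply ranks the average waiting time between consecutive activities in each case, which is the shape of answer a 'biggest bottlenecks' prompt has to surface.

import pandas as pd

# Illustrative only: a toy event log in the shape process mining tools work on.
# Column names and activities are invented, not ARIS's data model.
events = pd.DataFrame({
    "case_id":  ["PO-1", "PO-1", "PO-1", "PO-2", "PO-2", "PO-2"],
    "activity": ["Create PO", "Approve PO", "Pay invoice",
                 "Create PO", "Approve PO", "Pay invoice"],
    "timestamp": pd.to_datetime([
        "2024-03-01 09:00", "2024-03-01 17:00", "2024-03-06 10:00",
        "2024-03-02 08:30", "2024-03-04 11:00", "2024-03-05 09:15",
    ]),
})

# Order each case by time, then measure the hand-off time between
# consecutive activities within the same case.
events = events.sort_values(["case_id", "timestamp"])
events["next_activity"] = events.groupby("case_id")["activity"].shift(-1)
events["wait"] = events.groupby("case_id")["timestamp"].shift(-1) - events["timestamp"]

# The transitions with the longest average wait are the bottleneck candidates.
bottlenecks = (
    events.dropna(subset=["next_activity"])
          .groupby(["activity", "next_activity"])["wait"]
          .mean()
          .sort_values(ascending=False)
)
print(bottlenecks)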

A major release of Alfabet introduces a redesigned user experience, intended to make it far easier for non-IT users to engage with its mapping of the enterprise IT landscape, in collaboration with enterprise architects and other IT experts. New features include a configurable Smart Data Workbench that can be customized to each person's needs, a data quality rules engine, and an AI-assisted chatbot to help speed product configuration. The aim is to help IT teams build business engagement with their custody of the digital enterprise landscape, in what is often called Strategic Portfolio Management.

The operational reality of business process

I sat down with Stefan Sigg, Chief Product Officer and the company's headline speaker at the event, to discuss the announcements. He feels that applying Large Language Models will have a particular impact on process analytics and modeling, where engaging business users has always been a challenge. In the past, business process design has often been viewed, sometimes unfairly, as something of an isolated activity carried out by business analysts with little real connection to business operations. What ARIS brings to the table is a view of the actual processes that are taking place, based on mining and analysis of the data sitting in enterprise systems and documents. This typically leads to an 'aha' moment when ARIS is demonstrated. Instead of the ideal processes mapped out by analysts, it surfaces the everyday reality of a myriad of exceptions, variants and workarounds. Sigg says:

People become nervous when you not only show the happy path, but the deviation... This is not what you see in BI tools, because everything is aggregated. Here, the aggregation can be undone. 'Give me all the variants of that.' And all of a sudden, you see what's going on.

That is something new, and I totally agree with you, it's, to a large degree, still something that people find surprising. So that's why one of the challenges of the business is you cannot just wait for RFPs. But you have to go to the customers and show it and explain to them what they are missing.

Although process mining is the traditional term, Software AG prefers to talk about process intelligence, because it's the analysis once the data has been mined that leads to accurate mapping of the real-world processes. This in turn can then inform better modeling to improve how the processes run. He continues:

You can say here, 'This is where you lose your money. This is where you lose your time. This is where you lose your energy...' Your model there, as the golden standard of the process, is really not what it's like, when I look at the data. You'd better look at that. And either you adjust what you have thought about it, or you do something so that there is a better match between the line process and, let's say, the idealistic process.

That's very important for those modelling people, because they were more or less also suffering from being the high flyers, the process gurus, with not enough binding to what's really going on operationally.

Having evidence from the data about what's really happening brings the process analysis out of what might have been seen as a remote, 'ivory tower' exercise and puts it on a more practical basis. Sigg sees this as bringing data binding to the blue-sky model:

We take it down from the sky and put it on the ground, and we keep it there. We put some chains on the ivory tower, that it cannot go again into the skies, because we're binding it with the data.

Of course, that does depend on having the data in the first place, which is where the comprehensive reach of the data mining exercise is so important. He says:

Of course, it all stands and falls with the quality and completeness of the data that you give the system to mine out. There are two things that are really crucial. One thing is the data provisioning, the other thing is the visualization of process. These are two interesting things, and you can debate what is more important, but if you don't have the data, you're screwed. You can have the best UI in the world, [but] you cannot mine something on the empty [data] set, it's impossible. What is really crucial is to have as good as possible data extraction.

AI-assisted discovery

The new AI assistant enables users to query the analysis with natural language questions, so that they don't need expertise in the application to be able to discover process issues revealed by the data. It can also generate descriptions of the processes and anomalies, the most frequent activities or variants, and the apparent causes of issues, saving time in documenting these details. It can even reveal issues that a process expert might not have spotted. Sigg explains:

It answers all the questions that you would have as a human being. It answers questions that you will not have, which is also very interesting... What comes back when you say, 'Here is the mining data, all the variants of the processes, all the values and all the KPIs, now you figure out what is special,' and it comes back with something that you wonder at. It's not like you say, 'Oh, I already knew that...'

The switch to a natural language interface is a big change from the increasing use of visualization in BI tools over the years, he adds:

It's also for the future, I think maybe even a complete disruption, because over all these years, especially in the BI world, the technology was focusing on making it easy for the user to find interesting things, supporting [them] with great visual components, or enabling users to do things on their own, the self-service...

What does it mean, if there is an intelligent mechanism that can do the same? And then, of course, you don't need any charts, because this algorithm doesn't have eyes, but it does it differently. Maybe we can augment it — I don't think it's going away, the whole visual thing, but I think there will be, not only in process mining, also in traditional BI, there will be this UI-less mode.

Later on, the AI assistant will also be able to generate models. Software AG is working with OpenAI's GPT-3.5 and GPT-4, and also has a relationship with Anthropic. The LLM generates output in BPMN, the notation language used to describe business processes. The capability is still in development, and there are issues to iron out. One is the cost of using this type of cloud LLM service. Another is that, at the moment, the results are variable. He says:

Currently, what we are seeing is, if you issue the same question multiple times, what it does can be very different. Not totally wrong, but different from the degree of sophistication you can get... [For example], I asked the system, please generate a process model for CI/CD process in a software company. And I tested that eventually maybe about 50 times. And it varied from giving me a model of maybe five nodes and a model with 50 nodes. Very different.
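
One way to get a feel for that drift is to issue an identical prompt repeatedly and compare the size of the models that come back. The sketch below is a hypothetical reconstruction of that experiment rather than Software AG's tooling: it assumes the OpenAI Python SDK, a GPT-4 model name, an invented prompt, and a crude node count over whatever XML the model returns, which is not guaranteed to be valid BPMN.

from openai import OpenAI
import re

# Hypothetical reconstruction of the repeat-the-prompt experiment described above;
# prompt wording, model choice and the node-counting heuristic are all assumptions.
client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = ("Generate a BPMN 2.0 XML process model for the CI/CD process "
          "in a software company. Return only the XML.")

def node_count(bpmn_xml: str) -> int:
    # Crude size measure: count task, event and gateway elements in the output.
    pattern = (r"<\w*:?(task|userTask|serviceTask|startEvent|endEvent|"
               r"exclusiveGateway|parallelGateway)\b")
    return len(re.findall(pattern, bpmn_xml, flags=re.IGNORECASE))

sizes = []
for _ in range(10):  # Sigg ran his test around 50 times; 10 keeps the sketch cheap
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT}],
    )
    content = response.choices[0].message.content or ""
    sizes.append(node_count(content))

print("nodes per run:", sizes)
print("smallest model:", min(sizes), "largest model:", max(sizes))

Turning the sampling temperature down would narrow the run-to-run spread, but it does not by itself guarantee a more sophisticated model on any given run.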

Prompt engineering will be very important in improving the output. "I think that is really where some secret sauce is to get great results," says Sigg. But he is skeptical of the value of customizing an LLM by augmenting and fine-tuning it with local data. He explains:

I'm not really a super-scientific expert of large language models, but I have a hard time to understand why that would be really such an interesting thing. Because if you understand how those large language models work, it's all based on quantity of data.

Now, if you look at the ChatGPT model, with all its trillions of nodes or what it has in the neural network. You come with some data that you have. Compared to that, it's a drop in the ocean. Why would that be impacting what the model would give to you, unless you force it to be that? But if you force it, then you're basically saying, I know already the answer. It's in that data. Then why would I then ask the big model?

So I am not sure, but I think it's a little bit of a reflex that you think you need to teach that gigantic model more with your own data and think the result will be better. I don't think so.

My take

One of the intriguing aspects of seeing how vendors are applying generative AI to their products is the huge variation in approaches. Looking at vendors that are focused on heavily transactional applications, all the talk is about constraining the model's output so that it doesn't introduce inaccurate results. At the other extreme of highly creative applications, generating images or video works better if the output has an unpredictable element because that can help the creative process — so long as the user is able to then edit the result to tidy up any rough edges. I suspect process modeling is more like the latter case, where having fresh ideas can be a boon, provided a qualified human user has the final say.

And as Sigg points out, the bigger barrier for a vendor in this market to overcome is simply demonstrating the value that process intelligence can bring. There's rising awareness of the need for more process efficiency, and the easier these tools become to deploy and use, the more attractive they will become. Adding a more natural language UI in the form of a generative AI-powered smart assistant may be just the boost they need.
