Can generative AI displace the enterprise UI? An unprompted debate with provocative implications

By Jon Reed, July 20, 2023
Summary:
The biggest enterprise debate of the spring lurked right beneath the surface. So let's bring it out into the open - can generative AI displace enterprise UIs? If so, then software vendors better buckle up too, as conventional licensing and product categories are right in AI's sights.


It happened early this spring, during a generative AI deep dive in California. A pesky analyst asked a vendor something along these lines:

Look, you've been trying to improve your UI for years - most vendors have. But enterprise software UIs mostly still stink. So why don't you just put a prompt in front of your software?

No surprise: this topic resurfaced throughout the spring, at many different shows. The context varied, but it boiled down to this: "Is AI the new (enterprise) UI?"

I'm not so sure, but this is one heck of a loaded question - more loaded than vendors (and analyst firms) realize. As I wrote:

Wouldn't that turn enterprise software vendors on their heads as well? Do you want your AI assistant to tell you, 'I can't complete that task, because you lack the necessary product licenses for this cloud or that cloud'? If AI is that disruptive to UIs, then won't enterprise software product categories (and their accompanying licenses) be part of the casualties?

I wasn't the only one asking.

AI's buzzword boomerang effect in action

When you chase AI down disruptive rabbit holes, be careful what you wish for. Are vendors prepared to rethink software licensing? Are analysts prepared to step back from product-centric waves, quadrants, and trapezoids? Perhaps celebrating AI's potential dominance has profound business model consequences? That's the buzzword boomerang effect on full display.

Start with the UI aspect. Let's assume users are skilled enough to work via prompt. Let's assume they prefer prompts to their own screen navigations, and don't mind a mandatory UI change (yeah, right).

An LLM isn't technically capable of automating processes by itself. For all the vendor PR B.S. I am receiving these days, LLMs cannot be entirely cured of hallucinations. When a machine can't understand what it is saying, there are consequences. But soon we should have software offerings that tie the strengths of LLMs to other components. Example: an LLM prompt activates a workflow in a different enterprise system.
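
To make that example concrete, here is a minimal sketch of the pattern in Python. Everything in it is hypothetical - the intent names, endpoint URLs, and the dispatch function are illustrative, not any vendor's API - but it shows the division of labor: the LLM only emits a structured request, and deterministic code decides whether a workflow actually runs.

    import json

    # Hypothetical allow-list: maps intents the model may emit to back-end
    # workflows in other enterprise systems. None of this is a real API.
    WORKFLOW_ENDPOINTS = {
        "demand_forecast": "https://erp.example.com/workflows/forecast",
        "onboarding": "https://hcm.example.com/workflows/onboarding",
    }

    def dispatch(llm_output: str) -> str:
        """Validate the model's structured output, then trigger the workflow."""
        try:
            call = json.loads(llm_output)
        except json.JSONDecodeError:
            return "Model output was not a valid tool call - nothing executed."
        if not isinstance(call, dict):
            return "Model output was not a structured request - nothing executed."
        endpoint = WORKFLOW_ENDPOINTS.get(call.get("intent"))
        if endpoint is None:
            return f"Intent {call.get('intent')!r} is not allow-listed - nothing executed."
        # A real system would make an authenticated API call here;
        # this sketch just reports what would run.
        return f"Would trigger {endpoint} with parameters {call}"

    print(dispatch('{"intent": "demand_forecast", "region": "Asia"}'))

Note that the model never touches the workflow directly - it can only nominate an action from a vetted list, which is what keeps hallucinations from becoming transactions.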

The explainability issue will get solved (at least to a point). Enterprise LLMs will be able to show users where their information came from, or take them into a particular screen for data validation. That raises the first question: should we impose LLMs on interfaces where humans may want to control the outcome?

Some possibilities: 

  • "Show me the demand forecast for Asia." Very doable.
  • "Show me the demand forecast for Asia if we ramp up production of ____" Doable and interesting.
  • "Show me the top five performing employees/stores/products in this region." Very doable, though must tread carefully on the employee one.
  • "Hire/fire this employee" - Doable, if the back end HR automations are in place, but ill-advised without human supervision, or at least a "you're about to hire ___, are you sure you want me to provision the onboarding now?" cross-check.
  • "Who should I hire/fire?" Doable, but dangerous.

These prompt interfaces are flexible enough to ask such questions, so we'd best be damn careful. Hmm... Is that why some vendors have backed off of equating AI with UI? As my colleague Stuart Lauchlan wrote to me:

If you're rightly saying that humans need to be the final stage of AI, then the idea that AI is your UI is not compatible.

I raised a similar concern about placing an AI prompt over an HCM system:

Prompt-based software access raises a huge batch of fresh questions about bias. This applies to consumer search engines like Google and Bing as well. As soon as an AI system shows me just one result, rather than letting me scroll results, there are concerning implications for exclusion, bias, and the entrenchment of incumbents that already dominate the top search positions. In other words, if AI designers inside and outside of software vendors get too headstrong about disrupting legacy user interfaces, they could fast-forward past serious ethical (and legal) considerations.

AI as UI raises the bias stakes

As in: "Show me the five most qualified candidates for our Sales Director opening in Chicago." I'd much prefer the question "Show me the five most talented applicants we overlooked," but the prompts will serve up what we ask. Such a narrowing question raises the bias stakes considerably - even if we have gone to great lengths to de-bias our data and our model. If that's the AI path we're headed down, discrimination lawyers are going to be on speed dial.

In fairness, I'm excited about AI's potential to improve HR hiring by surfacing overlooked candidates using skills-based search, not brute-force screening (though that requires a well-constructed and up-to-date skills ontology - Accenture did it, but that's no small effort for the typical enterprise). I also think AI is underemployed (and underrated) for surfacing human bias in hiring and promotions. It comes down to how the system is designed, and how much we (should) trust the results it surfaces.

As this plays out, I expect we'll see "hybrid" systems that experiment with prompt-based UIs, without excluding other forms of information retrieval or screen navigation. A prompt-based AI with sufficient explainability, guardrails, and human validation where needed could be a welcome option. But that won't put a lid on the licensing and product category upheaval.

The limitations of cloud application licensing aren't a new concern. About seven years ago, when cloud apps had even more sex appeal than they do today, a different pesky analyst asked a different software vendor:

Why are you selling product specific clouds? Aren't customers focused on end-to-end processes?

Now we have a potent new way to get there. But it won't happen overnight; the changes involved are far-reaching. As Boulton, who has direct experience with ERP engineering and LLMs, wrote to me:

Are we 3, 5, 10 years away from AI ERP? I think it depends more on how licensing is abstracted from modules to workflows.

Prompts as a full-on replacement for enterprise UIs is probably a premature conversation. But here's what I like about this debate:

  • Enterprise UIs still aren't very good (or very automated). AI is pressuring UIs for a reason; that pressure is welcome.
  • Seat and product-based user licenses are outdated, and in dire need of disruption. At this moment, I only know of two enterprise vendors really pushing change in enterprise software licensing (Acumatica and Zoho), and they are not in the large enterprise market.
  • Product-based categories (with rankings) are an analyst firm's monetization dream, but their usefulness is limited, and their objectivity is often questionable. This too is worthy of a big shakeup.

My take - AI overreach versus the art of the possible

AI is not the only way to compel these changes. Often, the best UI is no UI at all. Workflow automation vendors are certainly determined to make that so. Along those lines, sometimes the best prompt is no prompt.

But that's the great/scary thing about the best questions: they have a way of turning on you. It goes without saying that diginomica itself is potentially included in such disruptions. You can bet we are thinking about how to use this tech to serve enterprise readers better, and not be upended. In my view, that's healthy. Generative AI is notorious for giving overconfident answers, even when it is wrong; humans should not mimic that drawback.

Most of this current AI brouhaha is about the hypermonetized area of generative AI, which is itself just a subset of deep learning, which is just a subset of machine learning, and so on. Up to this point, making generative AI suitable for the enterprise has centered on two main topics:

  • Human-in-the-loop design to curb generative AI's shortcomings, combined with:
  • Advances in explainability, data cleansing/de-biasing, training data enhancements, countering hallucinations with more controlled data sets and reinforcement learning, reducing biased/offensive results with better guardrails, ensuring customer data privacy within the LLM, and all the controls enterprise software vendors (and startups) are working on (one such grounding check is sketched after this list).
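
As one illustration of those correctives, here is a minimal grounding check in Python. The record IDs and the grounded function are hypothetical, and real explainability tooling is far more involved, but the principle holds: an answer only surfaces if it cites records the retrieval layer actually returned, and everything else is withheld for human review.

    # Hypothetical: IDs the retrieval step actually surfaced for this query.
    RETRIEVED_RECORDS = {"PO-1042", "PO-1043"}

    def grounded(answer: str, citations: list[str]) -> str:
        """Only show an answer whose every citation maps to a retrieved record."""
        if not citations:
            return "Withheld: the model gave no sources, so a human should review it."
        uncited = [c for c in citations if c not in RETRIEVED_RECORDS]
        if uncited:
            return f"Withheld: cites records that were never retrieved: {uncited}"
        return answer + f"  [sources: {', '.join(citations)}]"

    print(grounded("Open purchase orders total 2.", ["PO-1042", "PO-1043"]))
    print(grounded("Open purchase orders total 9.", ["PO-9999"]))  # hallucinated source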

Those two ingredients/correctives lay the groundwork for many effective generative AI use cases for the enterprise. Downplaying these two ingredients with AI overreach will create project failures, something the enterprise is all too familiar with. Ill-conceived headcount reduction exercises will be part of this AI mix also.

In the future, another corrective/enhancement to generative AI will be other forms of AI. When one form of AI can compensate for another's weaknesses, then we are at a different point: an AI with far fewer drawbacks and, perhaps, less need for AI as a UI, because humans won't necessarily be needed to nearly the same extent as adult supervision in every process or piece of content. Whether not being needed is dystopian or utopian remains to be seen; today is not that day. Nor is the tech that will make this happen close to being a practical enterprise reality, despite the PRgasms, and the excitable talk of ever-bigger LLM training data sets and parameter counts.

Enterprise AI is all about proper design for proper use cases, rather than indulging in AI overreach, e.g. "Can I fire my content marketing team?" The much-slower-than-predicted results for fully autonomous vehicles are a great example of a use case where 99+ percent accurate responses to novel situations are non-negotiable. And yet, even in that field, when AI has not been over-extended, it has positively impacted safety for human drivers (e.g. lane and parking guidance). Though it's rare to achieve 100% accuracy with AI in any endeavor, there are plenty of enterprise cases where 95 or even 90% is very workable - especially when humans are brought in for escalations/validation (in some predictive scenarios, even 70-80% accuracy could be a notable improvement).
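
Here is a minimal sketch of that escalation pattern in Python. The 0.90 threshold and the route function are assumptions for illustration - in practice the threshold would be tuned per use case and accuracy target - but it captures the design: accept high-confidence output automatically, and queue the rest for human validation.

    # Hypothetical threshold - would be tuned per use case, not fixed at 0.90.
    CONFIDENCE_THRESHOLD = 0.90

    def route(prediction: str, confidence: float) -> str:
        """Auto-accept confident output; escalate the rest to a human queue."""
        if confidence >= CONFIDENCE_THRESHOLD:
            return f"Auto-accepted: {prediction} (confidence {confidence:.0%})"
        # Below threshold, the human is the final stage - the point of this piece.
        return f"Escalated to human review: {prediction} (confidence {confidence:.0%})"

    print(route("Invoice matches PO-1042", 0.97))  # auto-accepted
    print(route("Invoice matches PO-1042", 0.74))  # a human validates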

I expect reinforcement learning will close those gaps further, but the very reason we are talking about the need for iterative model improvement via user input shows why "AI is our UI" is - at best - a longer-term consideration. But if AI provokes this type of debate, count me in.

Thanks to Stuart Lauchlan for input on this article's themes, and for lending me his pun for the headline, unprompted.

Updated, 11:30am PT, July 20, with a few tweaks for reader clarity and some reworking of the paragraph on AI correcting AI, to make sure I nailed that down.
