Dystopian nightmare or silver bullet? Pega’s AI team urge buyers to avoid clichés and to follow the value

Derek du Preez, June 11, 2024
Summary:
It’s been an AI-heavy week at Pega’s annual user event, but the general theme has been pragmatism rather than hype.

An image of a compass showing good and evil at opposite ends of the dial
(Image by AvocetGEO from Pixabay)

Pega is hosting its annual user event in Las Vegas this week, where the company has, unsurprisingly, made a number of AI-related announcements. However, as diginomica noted yesterday, the vendor is taking a somewhat different approach to others in the market - whilst most vendors are showcasing the art of the possible and positioning AI as the tool to revolutionize the way we work, Pega is focusing on AI use cases that can help buyers get to what it calls a ‘center out business architecture’. Pega CEO Alan Trefler went as far as to say that a lot of the ‘AI agents’ that have been released by competitors amount to nothing more than ‘magic tricks’ and are ‘mostly garbage’. 

Trefler was keen to promote Pega as a vendor focused on practical AI use cases, where buyers can find value in the technology today. That pragmatism carried through into a discussion with Pega’s top AI team, which was dominated by questions of ethics, responsible AI use, and how to get started finding value. For instance, Peter van der Putten, Lead Scientist and Director of Pega’s AI Lab, said: 

I think it is very good to just do it, to just do stuff. Because then you will find out what the value is, right? Is it actually delivering something? 

There's a lot of talk, high level talk, about AI - either it’s dystopian or it's fantastic and a silver bullet. But it’s only when you start to get real and apply it that you really find out what the value is. We highly encourage all our clients to actually start doing stuff. 

A restrained approach

Rob Walker, VP of Decisioning and Analytics at Pega, was keen to highlight that the vendor’s platform enables a controlled, manageable way for enterprises to test out different types of AI. He pointed to how there has long been ‘left brain AI’ (which uses statistical analysis to help make decisions) and then the newer ‘right brain AI’ (which is largely generative AI, a more creative tool). Walker argues that the Pega platform enables a pragmatic approach to using both sides of the AI brain, whilst maintaining human oversight and ethical principles: 

Fear mongering is a very ‘general’ thing with AI - that it will maybe take people's jobs or it will start a nuclear war, or something like that. That's not the business we're in! Not to oversell the point of the platform, but that makes it much easier for us to implement these [ethical] tests. 

And there's always a human in the loop. Always. So, you will always check…is this what the generative AI created, or is this what our statistical AI found - and is that okay? We simulate it and ask: Is it fine? Are there no weird things happening?

It’s almost like if you would ask a colleague to check before you put something in production, that is what we currently do with AI. 
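
As a rough illustration of what such a gate amounts to in practice, here is a minimal human-in-the-loop sketch in Python. It is purely hypothetical - the Draft, review and deploy names are invented for this example and are not Pega’s API - but it captures the idea that nothing AI-generated ships without a named reviewer signing it off:

```python
# Hypothetical human-in-the-loop gate: nothing AI-generated reaches
# production until a named reviewer has signed it off.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    content: str              # what the generative or statistical AI produced
    source: str               # e.g. "generative" or "statistical"
    approved: bool = False
    reviewer: Optional[str] = None

def review(draft: Draft, reviewer: str, ok: bool) -> Draft:
    """Record a human decision on an AI-produced draft."""
    draft.approved = ok
    draft.reviewer = reviewer
    return draft

def deploy(draft: Draft) -> None:
    # The gate itself: unreviewed or rejected drafts never ship.
    if not draft.approved:
        raise PermissionError("human sign-off required before production")
    print(f"Deploying ({draft.source}, approved by {draft.reviewer}): {draft.content}")

draft = Draft(content="20% off your next renewal", source="generative")
deploy(review(draft, reviewer="j.smith", ok=True))
```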

Walker admitted that some clients are asking if they should ‘take off the straitjacket’ (meaning, remove the human in the loop), but he noted that doing so would likely run counter to AI regulations and questioned whether it is even possible at this moment in time. 

And whilst Pega more broadly talks about the desired end state being an ‘autonomous enterprise’ achieved through AI adoption, van der Putten was keen to reiterate that this doesn’t mean no people are involved at all. Rather, an autonomous enterprise is one whose functions use AI to continuously optimize and to help the business make better decisions:

Take the scenario of marketing offers…you could come up with new recommendations using generative AI and get very creative, but ultimately you sign it off and say ‘these are recommendations that we can release "into the wild"’. 

But you’re not releasing it into the wild, we're releasing it to the ‘left brain’. And there it's working autonomously. It's really self optimizing towards which next best action is most relevant to a particular customer in a particular context. 

We have customers doing billions, in excess of 10 billion, of those decisions per year. That process then is running pretty autonomously, but you are in control of what type of recommendation you want to release in the first place. 
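
That ‘release it to the left brain’ idea maps loosely onto a contextual bandit: humans approve the pool of actions up front, and a statistical model then self-optimizes which approved action to serve. Below is a toy epsilon-greedy sketch of that pattern - the action names, acceptance rates and algorithm choice are all illustrative assumptions, not a description of Pega’s decisioning engine:

```python
import random

# Toy self-optimizing "next best action" loop. The action pool is
# human-approved up front; the statistical "left brain" only learns
# which approved action performs best.
APPROVED_ACTIONS = ["retention_offer", "upsell_offer", "service_reminder"]

shown = {a: 0 for a in APPROVED_ACTIONS}      # times each action was served
accepted = {a: 0 for a in APPROVED_ACTIONS}   # times it was taken up

def next_best_action(epsilon: float = 0.1) -> str:
    # Explore occasionally; otherwise exploit the best-performing action.
    if random.random() < epsilon:
        return random.choice(APPROVED_ACTIONS)
    return max(APPROVED_ACTIONS,
               key=lambda a: accepted[a] / shown[a] if shown[a] else 0.0)

def record_outcome(action: str, took_offer: bool) -> None:
    shown[action] += 1
    accepted[action] += int(took_offer)

# Simulated customers with fixed (made-up) acceptance rates: the loop
# converges on whichever approved action customers respond to most.
TRUE_RATES = {"retention_offer": 0.08, "upsell_offer": 0.03, "service_reminder": 0.05}
for _ in range(10_000):
    action = next_best_action()
    record_outcome(action, random.random() < TRUE_RATES[action])

print({a: round(accepted[a] / max(shown[a], 1), 3) for a in APPROVED_ACTIONS})
```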

Walker also noted that whilst Pega is an AI company, it isn’t building large language models itself and isn’t throwing customer data over the wall to another provider. It’s worth remembering that when we talk about large language models, whilst they may be showcasing incredible abilities, scientists and researchers aren’t entirely clear on how they actually work. For enterprises that rely on traceability and explainability to show how and why they’ve processed customer data, this will likely be unsatisfactory. As Walker notes: 

Our focus is very much on very large organizations, complex organizations, because we solve complex problems. As a function, I think they're a little bit more cautious. They're also almost always fully regulated. I think in that sense there is relatively good discipline around AI. 

I think honestly, the use cases that we have, we don't do everything - Pega is an AI company, but we are not creating these generative models ourselves. And even the statistical AI that we do build, when it learns, it learns behind the firewalls of these organizations on their customer data. We don’t do any of that. So I think we have a restrained version. 

Ethics 

During the discussion the team brought up Pega’s AI Manifesto, a document that sets out nine guiding principles for the responsible and trustworthy application of AI. But ethical AI use is often overlooked by vendors, and customers are desperate for practical guidance on how to be cautious when embarking on these AI projects. A lot of the technology is new and people’s understanding of it is immature - but there is pressure from boards to start investing straight away. As such, buyers need vendors to not only sell them on the virtues of AI, but to assure them that they can use it responsibly and show them clear routes to safe success. 

It was a point that van der Putten picked up on, where he again urged customers to think about the types of use cases: 

I have a bit of a two-fold answer. I don't think there's reason for concern, but I think it's very good to apply ethical principles, responsible AI principles, when you start to use AI. 

That could be: what is the right use case? Is it something that's benefiting, let's say, only a bank, or also the customer? Or depending on how material the decision is; for example, are we showing the marketing offer versus approving someone for a mortgage? Those are two different types of decisions. 

It could be that you have higher levels of scrutiny around how transparent that automated decision is. Or can we make sure that we test these decisions for unwanted bias, for example, before we release new logical models into the system? 

Or you continuously monitor these systems to make sure that there's no element of unwanted bias. I do think it's good to have ethical principles around the application of AI. 

We're also very pro sensible regulation around this. We think the EU AI Act is a great idea. I do think those risks can be managed through a mix of ground rules that are set for everyone and best practices in how you apply AI in an organization. 
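
One common way to operationalize the pre-release bias testing van der Putten describes is a demographic-parity check on simulated decisions. The sketch below is a rough illustration only - the passes_bias_check function, the five-point threshold and the group labels are invented for this example, and real fairness testing involves far more than a single metric:

```python
# Illustrative pre-release bias check: compare approval rates across
# groups on simulated decisions and hold the release if the gap is wide.
def approval_rate(decisions, group):
    subset = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in subset) / len(subset)

def passes_bias_check(decisions, groups, max_gap=0.05):
    # Demographic-parity style test: approval rates should not diverge
    # by more than max_gap between any two groups.
    rates = [approval_rate(decisions, g) for g in groups]
    return max(rates) - min(rates) <= max_gap

simulated = [
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": True},
]
if passes_bias_check(simulated, groups=["A", "B"]):
    print("No parity gap detected - release to production")
else:
    print("Unwanted bias detected - hold the release and investigate")
```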

My take

Another very diplomatic and reasonable conversation with Pega today. It feels necessary that vendors take less of an ‘AI will solve everything and you better get using it now - or else’ approach with buyers, given the very real concerns and fears around getting things wrong. Pega’s approach appears controlled and aims to give customers sensible guidance on how to proceed. The test will come in the years ahead, when customers start to scale up these technologies - on both the left and right sides of the AI brain - but overall it was refreshing to sit down with what felt like a room full of grown-ups, rather than excitable salespeople working on commission.
