
Behind SAP's 2024 AI strategy - Chief AI Officer Philipp Herzig on why cloud is tied to AI results, and the rise of the prompt engineer

Jon Reed, February 7, 2024
Summary:
After SAP's AI-centric restructuring, we went behind the news with Chief AI Officer Dr. Philipp Herzig. In this diginomica exclusive, Herzig updates on AI roadmaps, and shares strong views on why cloud deployments are superior for AI results. If out-of-the-box AI matters, then how do prompt engineers fit in?

(Dr. Philipp Herzig, SAP Chief AI Officer)

It's only been a few months since I updated on SAP's AI strategy. But things are changing fast, inside and outside of SAP - including a major, AI-driven SAP reorg that evidently delighted Wall Street.

What should we make of these developments? Time for a catchup with Dr. Philipp Herzig, who has a spiffy new job title, SAP Chief AI Officer (Herzig now reports directly to CEO Christian Klein).

There is a flurry of AI news at SAP to make sense of (as well as a notable RISE with SAP update). Not to mention the initial passing of the EU AI Act, legislation of obvious importance to SAP. SAP currently has 155 AI scenarios in use by 24,000 customers, with a roadmap of 150 more use cases in 2024 (mostly gen AI, some "traditional" AI as well). But what struck me most with Herzig was not the news itself, but the context he provided, including a candid take on AI and cloud:

This is also why we made the decision that AI and cloud only come together, because otherwise you are dying in complexity. And you will not reap the benefits, or the cost to produce outweighs the benefits, and there's no return on investment.

Herzig isn't drawing that quote from SAP marketing verbiage, but from AI project lessons. Herzig brought up two things I should have emphasized more:

  • Why cloud SAP deployments are core to SAP's AI plans, and
  • Why prompt engineering factors heavily into how SAP views AI

These two items have big implications for SAP customers' AI plans.

Context for the SAP AI reorg - "we have formed an end to end growth area"

In my pieces with Thomas Saueressig and my SAP TechEd AI update, I detailed SAP's AI architecture - including how SAP will customize LLM output with customer-specific data, and insights from SAP's own foundational model.

But there is a catch: this approach will take some time to progress. Herzig says that the SAP HANA Vector Engine, which will play a key role integrating real-time information into LLM output, is on the roadmap for GA in Q1 - but this is only part of SAP's overall AI plan.

However, SAP customers shouldn't wait - there is plenty to implement/evaluate even now. So how would Herzig characterize SAP's latest AI developments?

Organizationally, you saw this in the news. One of the big announcements we did in January is that we are also reorganizing how we think about AI overall in SAP - which is, in my opinion, a reconfirmation of the importance of AI.

But how do we make sense of this major reorg? Herzig says it starts with this: SAP Business AI is more than a product.

We were very much focused on the product engineering side of the house, to make sure we deliver great first cases, and then we build our platform services consistently. But in this new setup, we really have formed an end-to-end growth area which entails all the lines of business. So including sales, including marketing, including our services functions.

Herzig believes this new org structure will result in quicker delivery:

We really want to be very, very quick. When a new case comes, we immediately have customers. We get them to adopt, get referenceable customers in order to [move] very quickly across all lines of business.

But Herzig says this is about execution - the strategy hasn't changed (he pointed to SAP's September 2022 AI white paper as a core document for SAP's AI strategy). "The good news is the strategy remains exactly the same," says Herzig. "Business AI is all about embedding AI into all of our products."

SAP Business AI needs to support partner solutions also. This is where SAP BTP comes into play as the core extensibility platform:

And then on the common AI foundation on BTP, with those capabilities our lines of business are using being made available, in a second step, to our customers and partners right on BTP, as you mentioned already - to provide this to partners and customers, to build their own things on top of that. [Herzig says that there are now about 380 apps in the SAP Store that involve AI built by SAP partners, though not all are built on BTP].

The EU AI Act and SAP

Though the EU AI Act won't be fully enforceable until 2026, it's still a landmark piece of AI legislation. Did SAP need to alter any AI plans based on these new regulations?

First of all, we are working with the European Commission and the government bodies also together. But when the first draft [of the EU AI Act] came out, the first thing I asked the team was, 'Okay, please provide me an assessment relative to our AI ethics policy.'

The interesting thing was that they came back and said, 'We don't have to change anything. We would actually propose some further improvements here and there, because we believe our policy goes maybe a little bit further.'

Such as?

Take, for example, the statement that HR was considered a high risk area that requires extra caution. That is what we do anyway, because it's very clear. In HR, we know large language models have their biases, right? We know that people are affected in language, and so forth.

I mentioned it in one of our earlier calls - we pulled back certain scenarios already. Because we said, 'We have no technical means to ensure certain qualities. So we pulled them back.' [Author's note: for detail on how SAP changed an AI feature for HR based on ethics concerns, see my last article.]

SAP AI in action - what it means for customers

Herzig and I discussed SAP's "mid-term" AI architecture, including the timeframes for the HANA Vector Engine and SAP's Foundational Model. But Herzig was quick to emphasize: he would like to see SAP customers achieve significant AI value, even without all those pieces. He sees AI near-term value in three categories:

1. Out-of-the-box - AI that just works. Using Joule as a workplace assistant might be one example, and/or SAP's job description generator.
2. Prompt engineering - Adjusting the AI output, but in a way that just about any SAP customer can do, without needing a data science team (there are different levels of prompt engineering; more on this shortly)
3. Retrieval Augmented Generation (RAG) type use cases, infused with customer or real-time data. This can sometimes require a data scientist to set up the proper data architecture (the first one of these is now running in SuccessFactors via Joule, pulling in data from help.sap.com)

Herzig explains:

There's a short term and a mid-term answer to what we're doing...  First of all, in many of the use cases, you can actually come a long way with prompt engineering. You take the master data off the system. You pick some example data in a secure way out of the database, give it some examples, and then all of a sudden, it can write these things. It can write a job description, write goals, performance reviews.
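
To make that concrete, here is a rough Python sketch of the pattern Herzig describes: master data plus a few approved examples assembled into a single prompt so the model can draft a job description. The function name, fields, and example records are my own illustration, not SAP's implementation; the assembled prompt would then go to whatever LLM service the platform exposes.

```python
# Minimal sketch of few-shot prompt assembly: master data plus vetted examples
# are placed into the prompt so the model can draft a job description.
# All field names and example data are illustrative only.

def build_job_description_prompt(master_data: dict, examples: list) -> str:
    """Combine master data and example records into a few-shot prompt."""
    example_text = "\n\n".join(
        f"Role: {ex['title']}\nDescription:\n{ex['description']}" for ex in examples
    )
    return (
        "You are drafting a job description for an internal HR system.\n"
        f"Here are approved examples:\n\n{example_text}\n\n"
        "Now draft a description for the following role, "
        "using the same tone and structure:\n"
        f"Role: {master_data['title']}\n"
        f"Department: {master_data['department']}\n"
        f"Location: {master_data['location']}\n"
        f"Required skills: {', '.join(master_data['skills'])}\n"
    )

prompt = build_job_description_prompt(
    master_data={
        "title": "Logistics Analyst",
        "department": "Supply Chain",
        "location": "Walldorf",
        "skills": ["SAP EWM", "SQL", "process analysis"],
    },
    examples=[
        {"title": "Warehouse Planner",
         "description": "Plans inbound and outbound flows across the warehouse..."},
    ],
)
# The assembled prompt would then be sent to the LLM of choice.
print(prompt)
```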

Ideally, says Herzig, you don't even need prompt engineering. Herzig's team has learned from past AI projects: adoption is about embedding useful features, not elaborate ramp-ups.

First and foremost, our strategy is embedded AI. What I believe - and what we also learned over the years - is that AI is most adopted if it comes out of the box.

It's basically turning on the button. You want this capability because it has value, whether it's scanning documents in the warehouse or transportation management, whether it's writing all the various texts in HR that you can imagine. The user doesn't even really know it's AI.

Beyond that, says Herzig, what you have is a barrier to adoption. And yes, he includes advanced AI techniques as barriers:

It's a barrier to adoption if they still need to train on their own data, or need to curate data, or need to do any sort of heavy lifting that prevents them from reaping the value quickly.

Herzig's team wants to minimize the need for this type of heavy-lifting-AI:

Of course, you also dilute the value prop, because you have to do all this last mile production, so to speak, instead of getting up and running quickly.

Customer readiness for generative AI - add prompt engineering to the list

But sometimes, Herzig says, you do need prompt engineering.

Note - we should define prompt engineering. At the user level, the phrase refers to the way most gen AI chatbots respond better to skilled queries - experimenting with different query language can yield a better result from the bot. This user-level prompt dexterity is sometimes called "prompt engineering."

But as I understand it, what Herzig means by "prompt engineering" goes further than this, including what some call "prompt tuning." Simple example: a prompt engineer embeds a command into the underlying prompt that the user doesn't actually see (e.g. specifying that the output should only be in French, or should only pertain to a certain geographical region. Of course, these underlying prompt parameters can be more nuanced than my examples).
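
Here is a deliberately simple sketch of that idea - the user types a short request, while the underlying template silently injects constraints they never see. The template text, function name, and parameters are hypothetical, not SAP Generative AI Hub's actual API.

```python
# Sketch of a "hidden" prompt parameter: the visible user request is wrapped
# in a template that adds constraints (output language, region) the end user
# never sees. Everything here is illustrative.

HIDDEN_TEMPLATE = (
    "System instructions (not shown to the user):\n"
    "- Respond only in {language}.\n"
    "- Only discuss data relevant to the {region} region.\n\n"
    "User request:\n{user_input}\n"
)

def render_prompt(user_input: str, language: str = "French", region: str = "EMEA") -> str:
    """Fill the hidden constraints and the visible user request into one prompt."""
    return HIDDEN_TEMPLATE.format(language=language, region=region, user_input=user_input)

print(render_prompt("Summarize last quarter's open purchase orders."))
```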

Prompt tuning can go deeper. Example: the most effective queries or front-end prompts are fed into your AI model, in order to give it a task-specific context. The prompts can be extra human words, or perhaps an AI generated number introduced into the model's embedding layer. Ideally, a prompt engineer/tuner would collaborate with a business team or department to customize the prompt for a better result.
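
For readers who want to picture that deeper variant, here is a compact, conceptual sketch of soft prompt tuning in PyTorch: trainable vectors are prepended at the embedding layer while the base model stays frozen. Dimensions and names are arbitrary - this illustrates the general technique, not anything SAP has shared.

```python
import torch
import torch.nn as nn

# Conceptual sketch of soft prompt tuning: a small set of trainable vectors is
# prepended to the token embeddings, and only those vectors are optimized for
# the task while the base model stays frozen.

class SoftPrompt(nn.Module):
    def __init__(self, prompt_length: int, embed_dim: int):
        super().__init__()
        # These learned vectors are the "AI generated numbers" injected into
        # the embedding layer, rather than extra human-written words.
        self.prompt = nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim)
        batch = token_embeddings.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeddings], dim=1)

# Usage: freeze the base model, train only the soft prompt on task examples.
soft_prompt = SoftPrompt(prompt_length=20, embed_dim=768)
dummy_embeddings = torch.randn(2, 10, 768)   # stand-in for the model's token embeddings
extended = soft_prompt(dummy_embeddings)     # shape: (2, 30, 768)
print(extended.shape)
```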

By Herzig's definition, this isn't AI "out of the box," but it doesn't require a data scientist either. For the rest of this article series, assume that the skills referred to in "prompt engineering" encompass more than users experimenting to find the best queries.  

This leads to an obvious conclusion: when it comes to AI readiness, there is plenty for customers to do - and we should clearly add cultivating prompt engineering skills to the list, alongside data quality/platform improvements and use case evaluation.

I'll detail Herzig's view on prompt engineering skills in the next installment, but for now - he sees this person coming from IT, likely with some SQL skills and the ability to collaborate with business users on optimizing prompt output (SAP Generative AI Hub tools like the Prompt Template Builder can help). A two-pronged approach is emerging for SAP customers:

1. Consume AI as it is being delivered, via existing SAP applications and partner apps.
2. Prepare for longer-term AI pursuits via skills planning and data governance initiatives, while SAP advances its "mid-term" AI architecture and use cases.

Of course, customers with data science teams in place have additional options to consider, including open source AI. That's a topic beyond the scope of this piece, but I've put a stake in the ground here: Attention enterprises - your AI project success in 2024 is not a given. What will separate wins from failures?

Meanwhile, SAP will continue to add Business AI features, including the HANA Vector Engine integration announced for SuccessFactors:

We released the first Retrieval Augmented Generation case in Q4, as part of Joule, and also on the first internal version of the HANA Vector Engine, which is all our help.sap.com documentation.

If you access SuccessFactors, and you have a question about your screen, you would usually go to the web right now - to Google or Bing. You would do a search, and then go to the help pages. You have no context, right? But you can now ask this question to get the summary from all our help pages - basically end user documentation and admin documentation, in the context of the application. That's the first way we brought in unstructured content [via AI].
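
For the technically curious, the pattern Herzig describes looks roughly like this generic RAG sketch: documentation pages are embedded as vectors, the user's in-app question retrieves the closest pages, and those pages become context in the prompt. The embed() placeholder stands in for a real embedding model; none of this is the HANA Vector Engine's actual API.

```python
import hashlib
import numpy as np

# Generic RAG sketch: embed help pages, retrieve the closest ones for a
# question, and build a prompt with that context. embed() is a deterministic
# placeholder; a real system would call an embedding model here.

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: real systems call an embedding model instead."""
    seed = int(hashlib.sha256(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).standard_normal(384)
    return v / np.linalg.norm(v)

help_pages = {
    "Managing goal plans": "Administrators can configure goal plan templates...",
    "Performance review forms": "Review forms route between employee and manager...",
}
index = {title: embed(body) for title, body in help_pages.items()}

def retrieve(question: str, top_k: int = 1) -> list:
    """Return the titles of the top_k most similar help pages."""
    q = embed(question)
    scored = sorted(index.items(), key=lambda kv: float(q @ kv[1]), reverse=True)
    return [title for title, _ in scored[:top_k]]

def build_rag_prompt(question: str) -> str:
    context = "\n\n".join(f"{t}:\n{help_pages[t]}" for t in retrieve(question))
    return (
        "Answer the user's question using only the documentation below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_rag_prompt("How do I set up a goal plan template?"))
```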

My take

It's no accident that SAP's first use of embedding RAG data was in a cloud application. Herzig doesn't hold back on this point:

I think it always again ties back to out-of-the-box consumption. We have been doing AI for many years, and we are reporting 24,000 customers who are using our AI scenarios in production. The reality is: most of them are in the cloud.

Why? AI tends to rely on cloud services and, obviously, ease of data access. Standardized data structures are likely important for the out-of-the-box use cases Herzig is championing here.

However, this opens up so many cans of worms I wouldn't be able to close them all. Here is my summary position:

  • Cloud applications are surely easier for leveraging AI, but SAP shouldn't overlook the desire for customers on older releases to move ahead with AI now. In fact, SAP AI adoption would only spur further application upgrades.
  • Not all cloud applications are created equal, and that goes for AI as well. SAP sometimes talks about limiting AI innovations to RISE/GROW customers, but a highly-customized private cloud instance of S/4HANA wouldn't be ideal for AI anyhow, even if it runs on RISE - whereas SuccessFactors, which does not require RISE, is an important AI application area for SAP. So, the firm connection between AI and RISE is much more questionable than SAP has sometimes implied.
  • SAP BTP, not RISE, is the ideal way to provide AI-based cloud services to customers across SAP applications and releases. RISE is a hyperscaler management program that would limit AI access to RISE customers. In my view, SAP will find that requiring RISE for AI unnecessarily limits AI adoption.

User groups are lining up with their views on this. UKISUG just issued a statement welcoming SAP's RISE Migration and Modernization Program, but also asserting that BTP should be the AI services delivery platform, not RISE. DSAG, the German speaking SAP User Group, put forth a similar statement, timed with DSAG Technology Days.

In my heated podcast debate with ASUG CEO Geoff Scott and analyst Josh Greenbaum, Scott articulated a different position, urging customers to assume responsibility for their own modernization imperative, and not to cling to legacy software/mindsets in such a fast moving world (though I suspect that Scott would be amenable to AI services delivery through BTP, given ASUG's big emphasis on BTP education).

These debates will continue. But I will say, unlike early this fall, I see more middle ground now in how SAP is positioning Business AI (you can see this in the shifting tone of the user group releases also). However, even if the RISE vs BTP question is eventually settled, whether cloud deployments are needed to realize AI's potential will remain an important question.

I believe Herzig is correct to cite AI project success as the primary criterion. If most of the SAP customer success Herzig has seen with AI is based on cloud deployments, we should listen to that quite seriously - and dig further into the reasons why.

Based on this talk with Herzig, it now seems to me that SAP's Foundational Model will not be in full release mode anytime soon. That's largely because, for the Foundational Model to be effective, Herzig says it needs to pull data from a contextual Knowledge Graph, which is also still under development (I'll share more details on this in part two).

There is a longer conversation to be had on customer AI readiness - SAP itself is working on this from a number of angles, including university partnerships such as the recently announced "SAP Collaborates with UC Berkeley to Advance AI Research." I also want to learn more about how "reskilling" fits into SAP's own "AI-centric restructuring."

I'll pick this skills conversation up in part two of this article. I also expect my upcoming AI and customer readiness podcast taping with Geoff Scott and Josh Greenbaum will be a big challenge, as we attempt to make sense of all the worthwhile issues raised here.
