Last week I met with Vijay Vijayasankar, an SAP global VP running what I call 'skunkworks,' and one of his team leaders. They talked about a new business intelligence solution that's in stealth mode. I won't go into all the details here as it may never see the light of day. But I do like the concept.
As background, I first met Vijay when we were both members of the SAP Mentor community. At the time, he was a team leader and associate partner with IBM. Unlike many at IBM, Vijay was and remains one of those guys who 'gets shit done.' He's a higher order do-er whose responsibilities included projects with some of the best known names in the technology sector.
I've always found Vijay's approach to problem solving attractive at multiple levels. He sees a common issue, finds ways to solve it and then tests those solutions on tough audiences. In this case, me. He then tries to find ways to ensure that the solution is self-evident. In other words, when you see it, you just 'know' what it does. Here is an example of a working capital dashboard that his team created. Anyhoo...enough of the plaudits.
In a recent post entitled The cost of precision in BI, Vijay speaks to the general tenor of his latest development:
Look at an average BI project in enterprise world – 90% of time is spent on plumbing data – designing schemas, defining exception workflows, writing transformations and so on. Remaining 10% is used to make reports useful to its audience. This is unavoidable because BI is very static in nature – even what is called ad-hoc analysis is limited by schemas in back end. In short – even the best BI solutions cannot mimic how human beings make decisions.
The quest for extreme upfront precision is what works against BI being useful – ironic as it might sound. And BI has no chance of being seriously disrupted till it stops expecting tightly defined schemas on back end, and high precision right upfront in all cases.
Instead, he proposes an approach that approximates to the way we think. He argues that:
Context is way more valuable than precision. That is how we make decisions eventually in real life. And context changes with time – which means BI has no chance to keep up given its hard dependency on static things. BI world needs to think in terms of real world entities – not in some arbitrarily defined data models.
So here's the problem. Business intelligence - BI - has never been about intelligence. It's largely been about reporting through the rear-view mirror and/or forecasting with as much precision as possible. The irony of this approach is that it is largely useless because information is always open to interpretation. The old joke goes: 'put a set of accounts in front of six CFOs and you'll get seven different interpretations.' That's as much a contextual problem as a commentary on the imprecision of financial reports.
Take the constraints inherent in the world of BI to the next level where real people want real insights and you quickly discover that it is nigh on impossible for end users to gain the kinds of insight they both need and want. What's more, they almost never have the means to construct those queries for themselves. An example:
Industry-specific bad debt is notoriously difficult to reduce. Most CFOs benchmark their businesses, and if they are more or less in line with data from Bloomberg or Hackett then they're good to go. But they don't have good ways to understand the root causes of bad or overdue debt, which could be anything from price irregularities to rejected product or impending insolvency. That's context.
Vijay showed me a way to discover these root causes through the marriage of external and internal data sources. More importantly, he is looking at how to infer the right information from a 'cluster' of data. This is similar to the way Google provides search results. Even here, Google is limited because, at best, it can only take four search terms in a single query. However, this should be good enough in a business context to get going. An example here might be a lost invoice. This is a very common problem inside large enterprises. Here's how it plays out.
Two execs are chatting and one discovers that his cost of product acquisition for a repeat capital asset is much higher than his colleague's. How can this be when there are procurement standards and processes in place? It happens. So where's the evidence? The answer of course is in the detailed invoices covering each person's spend. But what happens when a search fails to turn up invoices? This is where Vijay's solution might come to the rescue. By inferring different variations on 'purchase X from Y', it is possible to conceive of a tool that is able to grasp meaning from the semantics involved in the overall transaction. That could be date, name, supplier, item number, description, price...the list goes on.
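To make the idea concrete: one way to 'infer variations' across transaction fields is simple fuzzy matching, scoring a half-remembered query against each field of stored invoice records and returning the most plausible hit. This is only an illustrative sketch of the general technique, not SAP's implementation — the field names and sample records are invented, and string-similarity scoring is a stand-in for whatever semantic inference Vijay's team actually uses.

```python
from difflib import SequenceMatcher

# Hypothetical invoice records; the schema is invented for illustration.
invoices = [
    {"supplier": "Acme Industrial Ltd", "item": "CNC lathe model X200", "price": 84000.0},
    {"supplier": "Borealis Tooling", "item": "hydraulic press HP-9", "price": 125000.0},
]

def similarity(a: str, b: str) -> float:
    """Crude case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def score(query: dict, invoice: dict) -> float:
    """Average field-level similarity between a fuzzy query and an invoice."""
    fields = [f for f in ("supplier", "item") if f in query]
    return sum(similarity(query[f], invoice[f]) for f in fields) / len(fields)

def best_match(query: dict, records: list, threshold: float = 0.5):
    """Return the closest invoice, or None if nothing scores above the threshold."""
    top = max(records, key=lambda inv: score(query, inv))
    return top if score(query, top) >= threshold else None

# A 'lost invoice' query: the exec half-remembers the supplier and the item.
match = best_match({"supplier": "Acme Industries", "item": "X200 lathe"}, invoices)
```

Here an exact search on "Acme Industries" would find nothing, but the aggregate similarity across fields surfaces the Acme Industrial invoice anyway — which is the essence of trading upfront precision for context.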
The trick comes in getting the system to approximate what the person wants with sufficient accuracy that the right result emerges in reasonable time. In SAP's case, they have the benefit of the high-speed HANA system as a replacement for the long-running batch method of assembling data. That's a good starting point.
During our conversation, we didn't touch on the obvious 'artificial intelligence' angle as that has been done to death and has yet to prove terribly reliable. And there is a sense in which, whatever Vijay's team comes up with, there will be a dependency upon the system being able to extract the right inferences based upon the human sense of what, say, 'customer' means. All the same, and based upon what I saw at this early stage, there is much promise in this approach to search/retrieval.
Disclosure: at the time of writing, SAP is a premier partner.
Image credit: MSc Machine Learning, featured image: Your Story