Ten rules for business models

By Neil Raden, May 8, 2019
Summary:
BI and business models take to the catwalk.


BI has historically been pigeonholed into the role of reporting what-was and what-is, but today it is the perfect place for more expansive roles. This may frighten the IT organization, which has always cast a jaundiced eye on BI anyway. But the Zen master Suzuki Roshi had an answer for that concern:

To control your cow, give it a bigger pasture. 

Given that most organizations have some level of investment in BI software, applications and skill, how do you adapt to a new set of processes and tools such as those described in previous chapters?

Relieving business analysts of the tedious chore of wrangling data and reconciling semantic discord in the data makes BI easier to use and more productive.

In our own research, we found that people did not find BI software, OLAP in particular, difficult to understand; it was the data that created the problems. Neo-ROLAP with a single semantic model serving a broad collection of BI tools also eases the learning phase. BYOBI (bring your own BI) is the answer.

No BICC anymore

What is not an answer is a concept that has been promoted by the BI industry: the BICC, or BI Competency Center. The BICC serves not only as a help desk; it usually takes on the role of BI application development as well, creating a tiered structure of BI and analytical skill. But being able to analyze data should be a core competency in most departments. Would you want a financial-analysis competency center for the Finance department, or a marketing-metrics competency center for the Marketing department? What holds departments back from gaining the competency is that the existing BI process is located elsewhere.

Wouldn’t it make more sense for an actuarial department to be able to create solvency scenarios based on interest rate projections than to try to explain the actuarial process to generalists in the BICC?

The solution to the drawbacks of both MOLAP and ROLAP is abstraction of the models and cubes, driven by a general semantic layer that can be used by most BI tools. The problem with ROLAP was that the data was housed in a relational database, typically a data warehouse, that could not provide the near-instantaneous response of a MOLAP cube, because the database architecture was not, in most cases, designed for analytical/dimensional queries. Neither is Hadoop.
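
To make the idea concrete, here is a minimal sketch of what such a semantic layer might look like: logical business terms mapped onto physical tables, so any BI tool can ask for “revenue by region” without knowing the schema. All table, column and measure names here are hypothetical.

```python
# A minimal sketch of a semantic layer: logical names mapped to physical
# tables and columns, so BI tools query business terms, not schemas.
# All table and column names here are invented for illustration.

SEMANTIC_MODEL = {
    "measures": {
        "revenue": "SUM(f.sales_amount)",
        "units_sold": "SUM(f.quantity)",
    },
    "dimensions": {
        "region": "d_store.region_name",
        "month": "d_date.month_name",
    },
    "from_clause": (
        "fact_sales f "
        "JOIN dim_store d_store ON f.store_id = d_store.store_id "
        "JOIN dim_date d_date ON f.date_id = d_date.date_id"
    ),
}

def build_query(measure: str, dimension: str) -> str:
    """Translate a logical (measure, dimension) request into SQL."""
    m = SEMANTIC_MODEL["measures"][measure]
    d = SEMANTIC_MODEL["dimensions"][dimension]
    return (f"SELECT {d} AS {dimension}, {m} AS {measure} "
            f"FROM {SEMANTIC_MODEL['from_clause']} GROUP BY {d}")

print(build_query("revenue", "region"))
```

The point of the sketch is the separation: the BI tool sees only revenue and region, while the physical mapping underneath can be reorganized or optimized without breaking a single report.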

BI for business modeling

In our experience, the biggest paybacks in BI come from modeling, not just reporting. 

All business people use models, though most of them are tacit, that is, implied rather than described explicitly. Evidence of these models can be found in the way workers go about their jobs (tacit models) or in the models they have authored in a spreadsheet. The problem with tacit models is that they can’t be communicated or shared easily. The problem with making them explicit is that it just isn’t convenient enough yet. Most business people can conceptualize models; any person with an incentive compensation plan can explain a very complicated one. But most people will not make the effort to learn how to build a model if the technology is not accessible.
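
To see how little distance there is between a tacit model and an explicit one, consider the incentive compensation example. A tiered commission plan that a salesperson can recite from memory takes only a few lines to state explicitly; the thresholds and rates below are invented for illustration.

```python
# A tacit model made explicit: a tiered sales-commission plan.
# Thresholds and rates are invented for illustration.

TIERS = [          # (quota-attainment floor, commission rate)
    (0.0, 0.02),   # below 50% of quota: 2%
    (0.5, 0.05),   # 50-100% of quota: 5%
    (1.0, 0.08),   # above 100% of quota: 8% (accelerator)
]

def commission(sales: float, quota: float) -> float:
    """Pay each tier's rate on the slice of sales falling in that tier."""
    total = 0.0
    floors = [floor for floor, _ in TIERS] + [float("inf")]
    for (floor, rate), ceiling in zip(TIERS, floors[1:]):
        lo, hi = floor * quota, min(ceiling * quota, sales)
        if hi > lo:
            total += (hi - lo) * rate
    return total

print(commission(sales=120_000, quota=100_000))  # 2% + 5% + 8% slices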

There are certain business models that almost every business employs in one form or another. Pricing is a good example, and, in a much more sophisticated form, so is yield management, such as the way airlines price seats. Most organizations look at risk and contingencies, a hope-for-the-best, prepare-for-the-worst exercise. Allocation of capital spending, or, in general, allocation of any scarce resource, is a form of trade-off analysis that has to be modeled. Decisions about partnering and alliances, and even merger or acquisition analysis, are also common modeling problems. These are all types of business models.

They do not necessarily involve data science or advanced mathematics.
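
A capital-allocation trade-off, for instance, can be expressed with nothing more than arithmetic and a ranking rule. The sketch below, with invented figures, funds projects in order of expected return per dollar until the budget runs out:

```python
# A simple trade-off model: allocate a scarce capital budget across
# competing projects by expected return per dollar. Figures are invented.

projects = {               # name: (cost, expected_return)
    "new_plant":    (400, 520),
    "crm_upgrade":  (150, 240),
    "r_and_d":      (250, 360),
    "marketing":    (100, 130),
}

def allocate(budget: float) -> list[str]:
    """Greedy allocation: fund projects in order of return per dollar."""
    ranked = sorted(projects.items(),
                    key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    funded, remaining = [], budget
    for name, (cost, _) in ranked:
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded

print(allocate(budget=600))  # ['crm_upgrade', 'r_and_d', 'marketing']
```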

Models are also characterized by their structure. Simple models are built from input data and arithmetic. More complicated models use formulas and even multi-pass calculations, such as allocations. The formulas themselves can be statistical functions that perform projections or smoothing of data. Beyond this, probabilistic modeling is used to model uncertainty, such as calculating reserves for claims or bad loans. All of these models are still just variations of the first type. When logic is introduced to a model, it becomes procedural, and mixing calculations and logic yields a very potent approach.

The downside is that procedural models are difficult to develop with most tools today, and are even more difficult to maintain and modify, because they require the business modeler to interact with the system at a coding or scripting level, something for which most business people lack the temperament, the training or both. It is not reasonable to assume that business people, even the “power users,” will employ good software engineering technique, nor should they be expected to. Instead, the onus is on the vendors of BI software to provide robust tools that facilitate good design technique through wizards, robots and agents.
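
As a sketch of the probabilistic type mentioned above, here is a toy claims-reserve model: simulate total claim costs many times and hold a reserve at the 95th percentile of the simulated outcomes. The portfolio size, claim probability and severity distribution are all invented for illustration.

```python
# A toy probabilistic model: estimate a claims reserve by Monte Carlo
# simulation. All parameters and distributions are invented.
import random

def simulate_total_claims(n_policies: int, claim_prob: float) -> float:
    """One trial: each policy claims with probability claim_prob;
    claim severity is drawn from a lognormal distribution."""
    total = 0.0
    for _ in range(n_policies):
        if random.random() < claim_prob:
            total += random.lognormvariate(8.0, 1.0)  # ~ $3k median claim
    return total

def reserve(percentile: float = 0.95, n_trials: int = 2_000) -> float:
    """Hold enough that simulated losses stay within it `percentile` of the time."""
    trials = sorted(simulate_total_claims(1_000, 0.05) for _ in range(n_trials))
    return trials[int(percentile * n_trials)]

print(f"95th-percentile reserve: {reserve():,.0f}")
```

Note that the model is still input data and arithmetic at heart; the uncertainty enters only through the random draws.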

Modeling requirements 

For any kind of modeling tool to be useful to business people, supportable by the IT organization and durable enough over time to be economically justifiable, it must either provide directly or interact seamlessly with the following capabilities: 

  • Level of expressiveness sufficient for the specification, assembly and modification of common and complex business models without code; the ability to accommodate all but the most esoteric kinds of modeling.
  • Declarative method, such that each “statement” is incorporated into the model without regard to its order, sequence or dependencies. The software handles calculation optimization, freeing the modelers to design whatever they can conceive (see the sketch following this list).
  • Model visibility to enable the inspection, operation and communication of models without extra effort or resources. Models are collaborative, and unless they can be published and understood, no collaboration is possible.
  • Abstraction from data sources to allow models to be made and shared in language and terms that are not tied to the physical characteristics of the data, and, further, to allow the managers of the physical data much greater freedom to pursue and implement optimization and performance efforts.
  • Horizontal and vertical extensibility, because many modeling tools today are too closely aligned either with vertical applications (CRM, for example) or horizontally with a market niche, such as desktop data visualization. Extensibility means that the native capabilities of the modeling tool are robust enough to extend to virtually any business vertical, industry or function, or to any analytical architecture.
  • Closed-loop processing is essential because business modeling is not an end-game exercise, or at least it shouldn’t be; it is part of a continuous execute-track-measure-analyze-refine-execute loop. A modeling tool must be able to operate cooperatively in a distributed environment, consuming and providing information and services in a standards-based protocol. The closed-loop aspect may be punctuated by steps managed by people, or it may operate as an unattended agent, or both.
  • Zero code: Most business people are neither capable of nor interested in writing code, and there is now sufficient computing power at reasonable cost to allow for ever more sophisticated layers of abstraction between modelers and computers. Code implies labor, error and maintenance; abstraction and declarative modeling imply flexibility and sustainability. Most software “bugs” are iatrogenic, that is, they are introduced by the programming process itself. When code is generated by another program, the range of “bugs” is reduced to the domain of the interaction, that is, the dialogue between modeler and computer.
  • Core semantic information model: Abstraction between data and people and programs that access data isn’t very useful unless the meaning of the data and its relationships to everything else are available in a repository. 
  • Collaboration and workflow are the key to connecting analytics to every other process within and beyond the enterprise. A complete set of collaboration and workflow capabilities supplied natively with a BI tool is not necessary, though. Rather, the ability to integrate (this does not mean “be integrated,” which implies lots of time and money) with collaboration and workflow services across the network, without latency or conversion problems, is not only sufficient but preferred.
  • Policy may be the most difficult requirement of them all. Developing software to model policy is tricky. Simple calculations through statistical and probabilistic functions have been around for over three decades. Logic models that can make decisions and branch are more difficult to develop, but still not beyond the reach of today’s tools. But a software tool that allows business people to develop models in a declarative way to actually implement policies is on a different plane. Today’s rules engines are barely capable enough and they require expert programmers to set them up. Policy in modeling tools is in the future, but it will depend on all of the above requirements. 
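
To illustrate the declarative method called out in the second bullet above, here is a minimal sketch of a model whose statements are deliberately declared out of order; a tiny engine resolves the dependencies itself, so the modeler never thinks about calculation sequence. The formula syntax and the engine are invented for illustration (a real engine would parse formulas properly and detect circular references).

```python
# A minimal sketch of declarative modeling: statements in any order,
# dependencies resolved by the engine. Syntax and names are invented.
import re

MODEL = {                      # declared deliberately out of order
    "profit": "revenue - cost",
    "revenue": "units * price",
    "cost": "units * unit_cost",
    "units": "1000",
    "price": "12.50",
    "unit_cost": "9.00",
}

def evaluate(name: str, model: dict, cache: dict = None) -> float:
    """Evaluate one variable, recursively resolving whatever it references."""
    cache = {} if cache is None else cache
    if name not in cache:
        # naive dependency detection: any model variable named in the formula
        deps = [t for t in re.findall(r"[a-z_]+", model[name]) if t in model]
        scope = {d: evaluate(d, model, cache) for d in deps}
        cache[name] = eval(model[name], {"__builtins__": {}}, scope)
    return cache[name]

print(evaluate("profit", MODEL))  # 3500.0
```

Because evaluation is driven by dependencies rather than by the order of declaration, the modeler can add or rearrange statements freely, which is exactly the flexibility the declarative requirement demands.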

One can argue whether BI has, in the long run, been successful or not. The take-up of BI in large organizations stalled at 10-20%, depending on which survey you believe. I believe that the expectations of wider acceptance of BI were optimistic, and the degree to which it was adopted is probably the right level for the functionality delivered.

For BI to become pervasive, to appear on every employee’s smartphone, is inevitable, but it will be wrapped in new technologies that provide a more complete set of tools, especially what has become known as “decision management”: the amalgam of predictive modeling, machine learning, natural language processing, business rules, traditional BI, visualization and collaboration capabilities. Will BI survive? Yes, but we may not recognize it.