Data Fabric - great idea. Now, make it work!

Martin Banks, December 2, 2022
Summary:
Data lakes don't cut the mustard; step forward Data Fabrics.

There is always some level of argument to be had over buying an integrated, soup-to-nuts solution or engineering your own from individual applications and tools. Both approaches can work; which one fits usually depends on where the users and the technology vendors sit along the maturity curve.

Judging by a presentation from co-founder and CTO Michael Beckley at the recent Appian Europe conference in London’s Docklands, that point has now been reached for Appian: the maturity of its customer base and of its own technology and tools has coincided with the emergence of Data Fabrics.

The rise in acceptance of Data Fabrics, as witnessed by its de facto recognition in the form of an 'official' definition from research company Gartner, has come just as users have started to realise that metadata, while useful in and of itself, no longer goes far enough to define or manage the data environment they now need. What they need is the ability to create and work directly with 'information' about their business and its processes, assembled from data sources scattered around the functional units that comprise that business.

The common way of dealing with this has so far been the creation of data lakes, into which all the available data is poured. That, however, creates the problem of finding the data needed to build the required information set from within the lake, and of engineering the necessary links between that data and the processes that require it. If this is a once-only task providing multiple use cycles, it can still be effective, but in practice the rate at which the data in an information set changes, and the rate at which new sets are required, are both accelerating. New customers, new markets, new products and the inevitable demise of old ones are just some of the drivers. Data lakes no longer cut the mustard.

The idea of a Data Fabric, however, means that the data held in the siloes created by different applications, tools and processes can be left where it is, with the processes that need it navigating round the 'warp and weft' of the fabric to find it. This plays well to the needs of most businesses, which have many applications that are nowhere near end of life, still working efficiently and sitting on their own established data siloes. The traditional threat that comes with new technology, the aggravation of wholesale rip-and-replace, can be set aside.

Data Fabric defined

To start with, making this happen in reality needs tools, and the Appian Platform can claim to have what is required to create a viable Data Fabric. As Beckley observed, automation is now one of the biggest trends, and in his view it is automation’s connection with low-code application development that is the key combination. He sees the convergence of automation with low code, process mining, robotic process automation and document processing as the melting pot out of which a Data Fabric emerges:

In our minds, these are not separate products. These are separate capabilities of one platform and one development environment. So that instead of needing a separate team for everything you want to do, and then having to have another team that coordinates all those teams, you should be able to have one happy end developer, one tester, one lead and be able to do great things with all of this power. Low-code design, process automation with RPA, API integrations, intelligent document processing and process mining give you the data and facts to know that it all works right together, unified into one platform.

Beckley’s pitch is that Appian’s ability to declaratively define applications, so that the platform is always upgradeable, together with the combination of low code and process automation, gives users the tools to build applications that fit whatever business process needs to be automated in order to solve a business problem, regardless of the hardware or applications already in place. He sees it adding a layer of abstraction that provides freedom of choice, be the target a mobile app or a web application, with the Appian platform managing the technical development processes that choice demands.

The acquisition of Lana Labs earlier this year added process mining to the Appian platform as a core piece of the move towards Data Fabric creation, providing the ability to take a data-driven perspective on what to prioritise. According to Beckley this is something that has typically been the preserve of a data science team, but now users can mine records directly in the Data Fabric:

Therefore, it becomes that much simpler to get direct insights into your processes and your sub-processes. Many users have hundreds and even thousands of processes, and mining gives a quick way to consolidate them into a clearer view. You can do away with subject matter experts taking the time to give you their opinions on where there are problems; the data can tell you. And we have new savings scorecards that can automatically populate from this information and show you whether or not you're on target to meet your efficiency goals.
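For those unfamiliar with the mechanics, the core of process mining is straightforward to illustrate. The sketch below, in Python, is purely conceptual and is not Lana Labs’ or Appian’s implementation; the claims-handling event log and its field names are hypothetical. It builds a directly-follows graph from an event log, the basic structure from which mining tools derive their process views:

```python
from collections import Counter, defaultdict

# Hypothetical event log: (case_id, activity, timestamp) tuples,
# the kind of records a process-mining tool would pull from source systems.
event_log = [
    ("claim-001", "Receive claim", 1), ("claim-001", "Check policy", 2),
    ("claim-001", "Approve", 3),
    ("claim-002", "Receive claim", 1), ("claim-002", "Check policy", 2),
    ("claim-002", "Request documents", 3), ("claim-002", "Check policy", 4),
    ("claim-002", "Approve", 5),
]

# Group each case's activities into an ordered trace.
traces = defaultdict(list)
for case_id, activity, _ts in sorted(event_log, key=lambda e: (e[0], e[2])):
    traces[case_id].append(activity)

# Count directly-follows pairs: how often activity A is immediately followed by B.
follows = Counter()
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        follows[(a, b)] += 1

# The resulting graph exposes loops and detours (here, the rework loop through
# "Request documents") without asking a subject matter expert for an opinion.
for (a, b), count in follows.most_common():
    print(f"{a} -> {b}: {count}")
```

Real mining tools layer statistics, conformance checking and visualisation on top of this, but even a toy directly-follows graph makes rework loops and bottlenecks visible directly from the data.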

The Platform already comes equipped with tools that can build collaborative environments by integrating most of the mainstream legacy business applications, such as Salesforce, Microsoft, Oracle, Oracle Financials and the Appian database, without users having to first build and manage a data lake. With a Data Fabric, users create one data model even though the data doesn't have to live together: the siloes remain siloes that can be accessed and worked with natively. It becomes possible to build data relationships in real time, faster and more easily, allowing users to build new processes and functionality on top of that information.

For example, there will almost certainly be a need to add new knowledge about data, such as joining data from a billing system with data from a policy system, even when those sit in different systems or databases. The Data Fabric allows users to define new custom fields, such as profitability by product or by customer, and to provide risk scores, without having to write specific integration code. Beckley argues:

Here's the most beautiful part about the Appian Data Fabric. It's not a new product, it doesn't have a price tag. It doesn't have a new training manual. It doesn't have a whole new thing for you to learn. It's part of Appian's Records technology.
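To make that concrete, here is a minimal sketch of the kind of virtual join and computed fields being described. It is written in plain Python rather than Appian’s low-code environment, and the billing records, policy records, field names and risk rule are all hypothetical; the point is only that the two datasets stay in their own silos while one unified view is defined over them:

```python
# Hypothetical rows as fetched, on demand, from two separate silos.
billing_system = [  # e.g. the billing database
    {"policy_id": "P-100", "premium": 1200.0, "claims_paid": 300.0},
    {"policy_id": "P-200", "premium": 800.0,  "claims_paid": 950.0},
]
policy_system = [  # e.g. the policy administration system
    {"policy_id": "P-100", "product": "Motor", "customer": "Acme Ltd"},
    {"policy_id": "P-200", "product": "Home",  "customer": "Bloggs & Co"},
]

# Index one side so the join is a lookup rather than a nested scan.
policies_by_id = {row["policy_id"]: row for row in policy_system}

def unified_view():
    """Join the two silos on policy_id and add computed custom fields
    (profitability, and a toy risk score) with no ETL and no copying."""
    for bill in billing_system:
        policy = policies_by_id.get(bill["policy_id"])
        if policy is None:
            continue  # no matching record in the other silo
        profitability = bill["premium"] - bill["claims_paid"]
        yield {
            **policy,
            **bill,
            "profitability": profitability,
            "risk_score": "high" if profitability < 0 else "low",
        }

for record in unified_view():
    print(record["customer"], record["product"],
          record["profitability"], record["risk_score"])
```

In a real fabric the two lists would be live queries against the source systems and the computed fields would be part of a shared data model, but the principle is the same: the join and the new fields are declared once, not hand-coded as a one-off integration.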

My take

As noted above, the acquisition of Lana Labs earlier this year added process mining tools to Appian’s low-code/process management platform and gave it the makings of a Data Fabric. With the addition of some code in the latest version of the platform, now built and released by the company, it is moving into a marketplace where most existing users already have those makings in place and working. But in so doing, it is taking the idea further than Gartner’s original concept of accessing and combining data from different, existing data siloes rather than pouring it all into a data lake and sorting through it: Appian is set on exploiting its other tools and services to let users build new automated business processes based on the information and insights that result.
