The recent Appian World Conference gave me an opportunity for a virtual sit down with Matt Calkins and Mike Beckley, respectively CEO and CTO of the low-code, workflow and automation specialist. This, in turn, opened up an interesting new perspective on the future development of edge computing and the bridges that need to be built between the possibilities of the future and the realities of the present and its well-entrenched legacies.
These represent the gap that must now be bridged if businesses are to get the best possible results out of what the technology will be able to deliver. The edge is still a land seen from afar - 'telescopes' let us see the potential and make plans to exploit it, but most of us are still some way off from the shoreline. We know the land of legacy well, and understand that it is so different from the edge that there is a widespread belief the two can never meet or be made to work effectively together.
Even Appian’s CTO Beckley seems to tend towards this viewpoint, observing during our discussion that most mainframes are planned for some type of retirement over the next few years because they simply can't keep being patched:
It's out of gas, you know. The advances we're talking about - mobile operating systems, cloud services, Kubernetes - are causing an entire generation of applications to be obsoleted. Tech that isn't growing incrementally grows to an exponential breaking point.
Looked at from one direction that is true, yet looked at from another perspective it is possible to form a different opinion: those legacy applications are still in service because they not only still work, but are arguably still the best option for doing that work. After all, over the years there have been many attempts to replicate their functionality and replace the systems, yet most have fallen short, often by a long distance. People say 'don't re-invent the wheel' because it is a waste of effort, but with many legacy applications the results have usually been several spokes short of a workable wheel.
Yes, the next attempt might be the one that works, the one that bridges the gap between legacy applications (doing important legacy jobs) and the cloud, the edge and that bright and burgeoning future. But it will be difficult to find a business willing to let its mission-critical applications be used for such experimentation, especially if an option exists to continue using them, as is, in the new world. What was even more intriguing was that the means to do exactly that is now a core part of what Appian can offer users, even if Calkins and Beckley were not immediately aware of it.
When technical debt = continued investment
Beckley had been talking about technical debt and the fact that so many user businesses are drowning in it. He suggested that, instead of trying to pay down the debt, a growing number of customers are taking a different route: using Appian's low-code capabilities to bypass the problem and move forward, while still servicing the debt by continuing to use the old legacy technologies:
They decided to start with something new that was palatable to them, because it could still talk to their existing systems. It gave them a bridge to the future. We're working with clients now who are, say, upgrading their SAP from an ancient version to S/4HANA. But they know that's going to take years. So they put low-code Appian in front, and Appian can read and write the data from both the old SAP and the new. Those don't change very fast, you know, and so Appian is able to broker between them. With low code, now you can actually customize the system to match you, rather than customizing yourself to match the system.
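The brokering pattern Beckley describes can be sketched as a thin facade that reads and writes through a common interface to both the old and new systems of record. Everything below - the class and method names, and the in-memory stubs standing in for SAP endpoints - is a hypothetical illustration of the pattern, not Appian's actual implementation:

```python
class RecordSystem:
    """Minimal in-memory stand-in for a system of record (e.g. an SAP instance)."""
    def __init__(self, name):
        self.name = name
        self._rows = {}

    def read(self, key):
        return self._rows.get(key)

    def write(self, key, value):
        self._rows[key] = value


class Broker:
    """Hypothetical front end that keeps both back ends in step during a migration."""
    def __init__(self, legacy, modern):
        self.legacy = legacy
        self.modern = modern

    def read(self, key):
        # Prefer the migrated record; fall back to the legacy copy.
        return self.modern.read(key) or self.legacy.read(key)

    def write(self, key, value):
        # Dual-write so neither system drifts while the migration runs.
        self.legacy.write(key, value)
        self.modern.write(key, value)


old_sap = RecordSystem("old-sap")
new_sap = RecordSystem("s4hana")
broker = Broker(old_sap, new_sap)

old_sap.write("order-1", {"status": "open"})  # pre-existing legacy data
print(broker.read("order-1"))                 # served from the legacy system
broker.write("order-2", {"status": "new"})    # lands in both systems
print(new_sap.read("order-2"))
```

Applications talk only to the broker, so the multi-year cutover behind it can proceed at its own pace.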
According to Beckley, this is true whatever the systems of record are, and he acknowledged the suggestion that Appian could be considered a management layer for technical debt, because it can transform data to fit any legacy application. This, he claimed, then gives businesses the opportunity to create new products, services and bundles:
We have a department that deals with this. We invested heavily years ago in a very flexible architecture we call Connected Systems. It's where a lot of the magic happens. It's basically a template generator for how to broker secure communication between Appian and any other arbitrary set of API services from somewhere else. There are hundreds of Appian partners who are contributing their own connected systems, so we don't have to keep up with every version and every change and every new cloud service out there.
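Beckley's description of Connected Systems - a template for brokering communication between Appian and arbitrary external APIs, with partners contributing their own connectors - suggests a familiar plugin-registry shape. The sketch below is a generic illustration of that shape; the class names, the registry mechanism and the stubbed CRM connector are all assumptions, not Appian's API:

```python
import json


class ConnectedSystem:
    """Hypothetical connector template: each subclass wraps one external API."""
    registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Partner-contributed connectors register themselves on definition.
        ConnectedSystem.registry[cls.system_name] = cls

    def call(self, operation, payload):
        raise NotImplementedError


class CrmConnector(ConnectedSystem):
    system_name = "crm"

    def call(self, operation, payload):
        # A real connector would sign and send an authenticated HTTPS request here.
        return {"system": "crm", "op": operation, "echo": payload}


def invoke(system_name, operation, payload):
    """The platform dispatches to whichever connector claims the system name."""
    connector = ConnectedSystem.registry[system_name]()
    return connector.call(operation, payload)


print(json.dumps(invoke("crm", "get_account", {"id": 42})))
```

The point of the pattern is the one Beckley makes: the platform never needs to know every API version, because each connector owns its own translation.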
The issue here is that, when looking to integrate legacy applications that are still current and business-critical with cloud-native and edge-native applications, the options of re-engineering them, or redeploying onto new applications just because they are cloud-native, mark several steps too far beyond the acceptable risk barrier. Yet the potential benefits of making that join, especially in terms of new, more agile business tools and services, are exactly what most businesses are looking to achieve.
More to the point, just a little thought suggests that there must be hundreds of thousands of companies for which that model would fit very well. CEO Calkins was minded to agree. He also agreed that Appian will be able to provide the interface to the SAPs, the Oracles, the 'whatever legacy applications' businesses need to continue with, allowing them to function in the way the business wants without spending five years and $50 million, which is the traditional rule-of-thumb estimate for any major move forward:
So, as a developer using Appian, I don't need to write any code to work with different data types, different data elements, different sorts of security and certificates - and we provide developers a means by which they can describe what they need to do.
Take it to the edge
Appian’s COVID-related experiences demonstrated the importance of this and played a part in one important development announced at the conference. According to Calkins, corporate mobile usage more than doubled last year, worldwide, and the reason is clear: employees working remotely wanted to log into systems with their iPads or mobile phones:
But it wasn't obvious in 2019, and not many apps were ready to be used on every mobile device. Last year, remote workers wanted to log in, they wanted to collaborate and, in many cases, technology held them back. But while overall corporate mobile usage doubled, Appian mobile usage multiplied 19.7 times. The difference is that all Appian applications were ready for change.
The key here is that Appian has incorporated the deployment goal of 'write once, use anywhere, on anything'. So the same application can be used in an on-premises data center, out in the cloud, and on a mobile device.
This year has seen that capability extended in a couple of important respects with the introduction of what Appian is calling a Workplace Portal Service. This is a low-code Application Object and, instead of only working in the Appian runtime in the cloud, it can now be deployed to an external micro-service such as AWS Lambda. According to Beckley, that same technology can also export objects to mobile devices and have them run locally, offline, without any network connection, as dynamic, data-driven applications.
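The portability claim rests on a simple discipline: keep the application logic itself free of runtime assumptions, then wrap it in whichever entry point the target needs. A minimal sketch of that idea, assuming hypothetical handler names (the `(event, context)` signature is the shape AWS Lambda expects for Python functions; everything else is illustrative):

```python
def application_logic(request):
    """Portable core: a pure function of its input, no runtime assumptions."""
    return {"greeting": f"Hello, {request.get('name', 'world')}"}


def lambda_handler(event, context=None):
    """Cloud entry point, in the shape an AWS Lambda Python function uses."""
    return application_logic(event)


def offline_handler(cached_request):
    """Local entry point, e.g. on a disconnected mobile device working offline."""
    return application_logic(cached_request)


print(lambda_handler({"name": "edge"}))
print(offline_handler({"name": "mobile"}))
```

The same core runs in both places; only the thin wrappers differ, which is the essence of 'write once, use anywhere, on anything'.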
This approach can then be extrapolated beyond mobile devices to all the sub-systems that Intel will be producing, and to the Raspberry Pi-based controllers that are appearing, and will appear, right out at the edge - and indeed to the millions of old PCs whose motherboards can be repurposed as local controllers, service managers, or general compute nodes if all else fails.
A growing number of dumb sensors and control devices already have low-level processors built in that can run low-level compute services. And Appian, with the Workplace Portal Service, now has the ability to (a) orchestrate business functions from the data center outwards - and orchestrate the orchestrators lower down, out to the furthest points of the edge - and (b) automate complex processes out of large numbers of smaller, simple processes, making the writing of the necessary code a reasonably straightforward task.
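The 'orchestrate the orchestrators' idea is essentially composition: a process built from simple steps is itself a step, so it can be composed again at the next level up, from sensor group to site to data center. A minimal sketch, with all names invented for illustration:

```python
def make_step(name):
    """A simple process: appends its name to a shared context log."""
    def step(ctx):
        ctx.setdefault("log", []).append(name)
        return ctx
    return step


def orchestrate(*steps):
    """Compose simple processes into one larger process. The result is itself
    a step, so orchestrators can orchestrate other orchestrators further down
    towards the edge."""
    def composed(ctx):
        for step in steps:
            ctx = step(ctx)
        return ctx
    return composed


sensor_group = orchestrate(make_step("read-sensor"), make_step("filter"))
site = orchestrate(sensor_group, make_step("aggregate"))
datacenter = orchestrate(site, make_step("report"))

print(datacenter({})["log"])  # steps run in hierarchical order
```

Each level processes its data locally and passes only results upwards, matching the pattern described above where commands flow down and results flow back.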
Calkins and his team take a different cut on the edge, though the end result is likely to be very similar:
We've thought about it, but in different terminology. We don't use the term edge, but we are all about allowing a dispersed enterprise and bringing computation to where it's needed, rather than forcing data together or fitting a straitjacket over the architecture.
He also agreed with the notion I had put forward recently that the edge will end up 'eating' much of the traditional data center role, except that he has a very different view of the timescale:
I would say the edge already did eat the data center. And there are those who are trying to make the data center eat the edge, and I don't think it's practical, because data is distributed and computing is distributed. Most applications are not centralized, homogenized. I feel that companies are already dealing with a vast sprawl, like an archipelago of data and processing, which occurs on location most of the time. And sadly, as a result of this, most data is wasted because it isn't brought to bear properly outside its one area of focus. We end up with siloization. And that's what I mean when I say that most applications, most processing, most awareness is only on the edge.
Calkins' goal here is to put the enterprise back together, so that latent, wasted information and awareness can be brought to the moment of truth. Appian's role is to accept the dispersed enterprise the way it is, and bring data to bear at the moment of decision, or the moment of connection with the customer, so that it can be valuable to the organization:
Our goal is to understand that there's a great deal of dispersion amongst data, processes, applications and everything. And we'll do the best we can not to physically reunify, but to summon from all that dispersion the needed awareness, at the moment that awareness would be most important.

At this point I put a question to him: would it not actually be better to break the enterprise into even finer pieces?
I suggested that the contribution of every sensor, every device out at the farthest point of the edge, is as valuable as every other. It contributes to the way that data is amalgamated with data from other sensors, and that group of devices is amalgamated with other groups of devices, building up a density of process and management - though most of the density of data stays where it is created, because it is also processed there. Only the results are fed back, and commands fed down. Calkins' reaction was:
Well, this is really interesting. My point of view was the enterprise is dispersed because it is, and we need to deal with that. And your point of view is the enterprise is dispersed because it's good, and we should disperse it even more. That's interesting. And, you know, we could follow that. Any customer that believes what you're saying could definitely use Appian to enable that vision, because we are capable of stitching together an excessive number of decentralized data sources.
Either way, a business, be it global enterprise or small start-up, can be the logical best it can be, able to achieve the business goals it needs to achieve, which is why what is happening now out at the edge is so important.