How the edge will absorb the middle – welcome to the meta-connected hyper-hybrid cloud

Martin Banks, May 4, 2021
Summary:
All-pervasive change is coming and CIOs need to be aware of what happens next...


A couple of weeks ago the West’s bête noire, Huawei, held its annual Analyst Summit and, as might be expected for a company with a growing number of fingers in a growing number of technological pies, it had a lot to talk about. Bleeding edges of development were to be found in every direction, and one of them in particular caught my ear.

This was in one of the opening keynotes, the presentation by William Xu, a Director of the Board at Huawei and president of its Institute of Strategic Research. The part of it that tantalised my tympanic membranes contained an implicit validation of my current pet theory – that developments now starting to appear in the domain of edge computing will grow in such strength and importance that they will work up through the communications networks and into the data centers themselves.

In short, the developments just now appearing at the bleeding edge of edge computing, today a specialised outlier, will - over the next few years, 10 at the outside is my guess - eat the data center as we know it today.

Xu pointed to the culprit: the classic Von Neumann architecture, in which data must constantly shuttle between CPUs, memory and media and, for the sake of its own safety, leaves duplicates all over the place, just in case. PCIe and DDR bandwidth, however, has grown far more slowly than that of the networks outside the box, so the internal data paths get swamped. In place of Von Neumann's limitations and CPU centricity, incoming computer architectures will be data-centric: rather than moving data to the location of compute, they will move compute to the data in order to get adequate performance.
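
To make the contrast concrete, here is a minimal, purely illustrative sketch of the two patterns. The node layout, payload sizes and function names are my own invention for the example, not anything drawn from Xu's talk or a Huawei product; the point is simply that the data-centric pattern moves small summaries instead of raw data.

```python
# Toy contrast between "move the data to the compute" (Von Neumann-style
# centralisation) and "move the compute to the data" (data-centric).
from statistics import mean

# Pretend each node holds a large local dataset, e.g. sensor readings.
NODES = {
    "node-a": [0.91, 0.88, 0.93, 0.90],
    "node-b": [0.40, 0.42, 0.39, 0.41],
}

def data_to_compute():
    """Classic pattern: pull every raw reading across the 'network',
    then reduce it centrally. Traffic grows with the data volume."""
    all_readings = [r for readings in NODES.values() for r in readings]
    return mean(all_readings)

def compute_to_data():
    """Data-centric pattern: ship a small reduction to each node and move
    only the per-node summaries. Traffic grows with the node count."""
    summaries = [(mean(readings), len(readings)) for readings in NODES.values()]
    total = sum(avg * n for avg, n in summaries)
    count = sum(n for _, n in summaries)
    return total / count

# Both routes give the same answer; only the volume of data moved differs.
assert abs(data_to_compute() - compute_to_data()) < 1e-9
```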

Future storage systems will also see changes, with breakthroughs expected in large-capacity, low-latency memory, plus multi-dimensional optical storage with new spatial model encoding, where fast and easy access to huge capacity is needed. Speed will also matter here, growing from today's terabyte-per-second levels to petabytes per second, while access latency falls from milliseconds to microseconds.

At the other end of the spectrum, out at the edge, data items will get smaller and tasks will often revolve around processing small sets of these small data items together locally, at high speed, to generate real-time status and change-of-state monitoring and reports. These will be performed by localized compute resources, increasingly incorporated inside the monitoring devices themselves, running operating systems specifically designed to work with cloud services and loading their applications dynamically via containers.
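
As a rough sketch of what that local, change-of-state style of processing looks like, consider the loop below. The threshold, sensor stand-in and event format are hypothetical; a real deployment would run something like this inside a container on the device and publish events to a cloud service rather than printing them.

```python
# Minimal change-of-state monitor: raw readings stay on the device,
# and only a change of state generates an outbound event.
import random
import time

THRESHOLD = 30.0  # assumed alarm level, e.g. degrees Celsius

def read_sensor() -> float:
    """Stand-in for an on-device sensor read."""
    return 25.0 + random.random() * 10.0

def monitor(samples: int = 20) -> None:
    previous_state = None
    for _ in range(samples):
        reading = read_sensor()
        state = "ALERT" if reading > THRESHOLD else "OK"
        if state != previous_state:
            # Only this small event would leave the device.
            print(f"state change -> {state} (reading={reading:.1f})")
            previous_state = state
        time.sleep(0.05)

if __name__ == "__main__":
    monitor()
```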

Xu also pointed to the shift from general-purpose processor chips to special-purpose processing units, which is exactly where Intel's new CEO Pat Gelsinger is targeting that company's future development goals. Indeed, it will soon become difficult to tell the difference between a processing sensor and a sensing processor, for that model will become ubiquitous across limitless individual forms. Xu even predicted the re-emergence of analog computing as one of the outcomes of this model.

Non-Von Neumann-ization time

More evidence that what is currently considered something different and outside the cloud mainstream will in fact supersede the ‘meta-Von Neumann' architecture of current cloud environments emerged in a recent roundtable discussion jointly hosted by IBM Cloud and Mimik Technology, which have recently formed a close partnership aimed at developing an easier path for users into the world of edge computing.

What follows can only hope to be the edited highlights of what emerged in a wide-ranging discussion, so don't expect too much in the way of ‘she said/he said' reportage. (In many ways, the subjects covered are more important than who raised them, as there was a high level of consensus between the parties.)

Suffice it to say that IBM was represented by Evaristus Mainsah, General Manager of IBM's Hybrid Cloud and Edge Ecosystems, and Rob High, CTO for IBM Edge in Network Computing and former CTO for IBM Watson and its AI offerings and capabilities. Mimik Technology, a start-up specialising in building out edge services that can interact with and extend current cloud capabilities by using APIs and micro-services, was represented by its CTO Michel Burger and CEO Fay Arjomandi.

Collectively, they see the core shift now under way as a change from the mobile internet era to a world of hyper-connectivity, an evolution they see being accelerated by the demand for greater digital transformation driven by COVID. The mobile internet era was about mobile devices connecting to the internet and consuming content. The hyper-connected era is about digital intersecting with every aspect of our physical lives - about mimicking the physical interactions we have every day with digital solutions. This will inevitably lead to new ecosystem partnerships that bring all these different aspects of technology, business and services together to drive and exploit that hyper-connectivity.

IBM Cloud's contribution to this has grown significantly with the acquisition of Red Hat and the use of Red Hat OpenShift as an enterprise-grade, secure container and Kubernetes environment that lets customers build, manage and run applications in an open way. This is the classic ‘build once/run many' argument, where the same application can be deployed at the edge or in the cloud data center, and in a private or public cloud environment. They expect to see it used to modernize environments, add automated services, exploit AI - especially in operations management - and enhance and extend security services.
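
From the application's side, ‘build once/run many' boils down to one artefact whose placement is decided by injected configuration, not by code changes. The sketch below is only an illustration of that idea: the environment variable names are invented for the example, and the OpenShift and Kubernetes deployment machinery is deliberately left out.

```python
# One small service, packaged once, that learns its placement
# (edge site or cloud region) from the environment at run time.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

PLACEMENT = os.environ.get("PLACEMENT", "cloud")     # e.g. "edge" or "cloud"
SITE = os.environ.get("SITE_NAME", "unknown-site")   # hypothetical variable names

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"placement": PLACEMENT, "site": SITE}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The same image serves at an edge site or in a cloud data center;
    # only the configuration injected by the platform differs.
    HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()
```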

The key factor is that the edge and the cloud have a lot in common in terms of development practices: containerization, separation and loose coupling, continuous integration and delivery, DevOps, and Agile methods. However, there are also fundamental differences, particularly because the edge tends to be much more diverse, with far more variation in the underlying compute architectures, configurations, purposes, and locations where edge computing is performed. Edge services are much more dynamic and changeable, with systems reconfiguring and changing purpose. In short, it is at least an order of magnitude more complex.

Also, unlike the cloud, where one is dealing with a few variations on the Intel x86 processor architecture, the edge consists of a far wider variety of devices. Increasingly, they are also all going to be connected and come with enough memory and CPU power to conduct at least a small amount of in-situ processing, even if that is not required at the moment. The challenge, therefore, is achieving a usable level of integration between such disparate devices - and that is the challenge Mimik Technology has taken up.

The edge is the new normal

The Mimik approach targets that wide variety of small devices right out at the edge, with the aim of providing a common operating system, to call it something. It exploits containerization to provide a level of abstraction from the infrastructure itself, coupled with an API- and micro-services-based architecture, to create edge services that run in exactly the same way as the back-end cloud services. Instead of the edge representing a significant discontinuity, it becomes a continuum from the smallest single sensor through to the largest cloud data center. Devices can then work as a context-driven mesh, discovering each other, testing their communications capability and facilitating their own integration, driven by the context of the task or workload. From the Mimik point of view, this can be considered Device-as-a-Service.
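
A much-simplified model of such a context-driven mesh might look like the sketch below: devices advertise capabilities, and a task pulls together whichever reachable devices can serve it. None of the class or method names here come from Mimik's actual APIs; this is purely an illustration of the discovery-and-assembly idea.

```python
# Hypothetical context-driven mesh: devices register capabilities and
# a workload assembles its own mini-mesh from whatever matches.
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    capabilities: set = field(default_factory=set)

    def reachable(self) -> bool:
        # Stand-in for a real connectivity/latency check.
        return True

class Mesh:
    def __init__(self):
        self.devices = []

    def register(self, device: Device) -> None:
        self.devices.append(device)

    def assemble_for(self, task_capabilities: set) -> list:
        """Discover devices that can serve this task and can be reached."""
        return [
            d for d in self.devices
            if task_capabilities & d.capabilities and d.reachable()
        ]

mesh = Mesh()
mesh.register(Device("camera-7", {"video", "motion-detect"}))
mesh.register(Device("gateway-2", {"aggregate", "forward"}))
mesh.register(Device("thermostat-1", {"temperature"}))

# A workload needing video capture and aggregation assembles its own mesh.
print([d.name for d in mesh.assemble_for({"video", "aggregate"})])
```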

As IBM is the partner most likely to be helping users move in on the edge, the opinions of Mainsah and High best represent the starting point most new users will recognise, and it is not hard to predict: get started with anything that lets you get your feet wet and learn, most likely a small example. IBM uses a methodology called Garage, which brings together a small group of domain experts from the customer, the service providers and a sponsor from the line of business to work together over a two-to-three-month period.

The goal is to produce the minimum viable, deployable product that can be delivered at the end of that period before moving on. The result will be able to work across both the edge and the data center as different parts of exactly the same software-defined network.

The number of devices involved out at the edge gives a good idea of where the centre of gravity is going to move over the next few years. The four speakers agreed on some approximations, such as there being some 15 billion edge devices in use already, growing to an expected 150 billion in the next few years. Each of those will be its own compute resource, able to implement, deploy and serve software that solves a set of business functions or requirements. In addition, what constitutes the edge is set to expand outwards, and to diffuse backwards into what many might think of as core infrastructure.

Making this work effectively will require not just common standards across a meta-connected, hyper-hybrid cloud (to call it something), but also some common value chains where no single service provider can claim that everything is centred round their specific value set. While analysts may argue about which cloud service provider is top revenue dog for now, that will soon enough be an irrelevance. Even the biggest service providers will need to remember an old business adage: ‘10% of a humungous market is always worth a lot more than 100% of a not very big one'.

The potential marketplace that this meta-connected hyper-hybrid cloud will create is likely to make ‘humungous' seem extraordinarily parsimonious.

My take

That last sentence sums it up from a business perspective. I have written before about how the technology is becoming irrelevant, precisely because of its ever-increasing complexity. For businesses now, that complexity has to be accepted and exploited, rather than making increasingly vain attempts to understand it. The analogy, to me, is that the cloud is dying, to be reborn as air. To understand our technical relationship to air we would all need to be pulmonologists, but instead we just breathe the stuff, 99.99999% of the time without ever thinking about it. Actually, the fastest way to suffocate yourself is to try to breathe under direct, personal control, consciously working specific muscle sets. That is a trap a good number of enterprise users will fall into.

There is much more that could be written on this subject, both from the content that emerged at the roundtable session and in the coming year or two - which, I suspect, is all the time the average CIO has to get their head round the fundamentals of all this. Change is coming and, while it may not seem drastic, it will be all-pervasive. Without realising it, CIOs will find themselves brokering the use of services that simply do what they need to have done, rather than fretting about being unable to hire people with sufficient skills to stand a chance of engineering something that gets even a little bit close to the minimum they need.
