Are CIOs being left to navigate a `fractal sea’ by tech vendors?

Martin Banks, March 13, 2020
Summary:
Complexity leads to bickering over technology. Time for CIOs to reframe the rules of engagement with more of an eye on business outcomes.

(Image of a boat in the ocean, by Foundry Co from Pixabay)

I accept this is becoming something of a `cause’ for me, but a recent Technology Live seminar (a regular event where a small group of leading European technology journalists and analysts comes together with representatives of `bleeding edge’ technology vendors to hear about their latest developments) once again highlighted how the growing complexity of much of the technology now in play is reaching the point of unexplainability.

When, for example, the nuances of explanation cause even the people responsible for the technology to bicker amongst themselves over what their collective words really mean, you know there is a problem. And when highly experienced journalists and analysts try freshly worded variants of earlier questions to unpick the technologists’ previous answers, it is no surprise that they sit, heads slightly tilted to one side, with that universal `que??’ expression on their faces. And these were real subject experts of the `forgotten more than most will ever know’ variety, not just me.

The subject in question was cloud storage systems, and it highlighted how the technology’s increasingly granular complexity makes it difficult even for the people responsible for it to agree on how it achieves what it does.

And it struck me – if they can’t explain it to people who should understand it, and whose job is to explain it to potential users, where does that leave those poor CIOs who then have to decide what needs to be done for the best?

Then a motif popped into my head: CIOs are a bit like sea captains, in that they need to know where they are on the sea, which direction they are heading and how much sea is beneath them. But what if the `sea’ is fractal? While they stay on the surface they are in control of the direction, the speed, the goals and the objectives. The trouble starts once they are tempted to look beneath the surface, in ever-finer granularity, at what is supporting them.

Many are getting dragged into such waters and finding that the sea has no bottom. Like a fractal, as they approach the limit of what they think they will be able to find, there is always more of it, plunging deeper into ever more detailed complexity. Soon that complexity is far deeper than they will ever need to know – after all, they only ever need to know that there is more sea below them than there is ship. By the same token, CIOs need a minimum of knowledge, but it is by no means certain they need to be dragged down as deep as the vendors seem so set on taking them.

This begs the question

Should CIOs now admit `defeat’ and push the job of understanding that level of infinitesimal detail to the likes of the major – and not so major niche specialist – systems integrators? Indeed, is this really where the Cloud Service Providers themselves should be heading, providing services and support that are really tailored to the needs of their customers rather than the cost-effective benefit of their own revenue streams?

It has to be noted that three of the four vendors participating in the Technology Live day have strong relationships with systems integrators and channel partners, with HPE gaining `honourable mentions’ from two of them. And it does raise an important issue: given that so much of the underlying technology is now built around commodity kit and code, there is a valid argument that such businesses should now be seen as the de facto route for all users to follow.

There is a real danger that the value propositions of using the cloud will be lost as users are obliged to wade through a murk of obfuscation and confusion: that fractal sea of never-ending technological granularity. And perhaps those systems integrators are now best placed to interface that granularity to the business objectives that CIOs are ultimately charged with fulfilling.

One of the technologies covered during the event did strike a chord with me, not least because it proposes a surprisingly simple solution to a big data problem that will only get worse as data volumes continue to grow, which is something of a given these days.

The presentation came from Alex McDonald, Director and Vice Chair of the Storage Networking Industry Association (SNIA), who is also an industry evangelist and member of the Office of the CTO at storage specialist NetApp. What made me sit up was his subject for discussion: in-memory computation. This has been something of a holy grail for many years, not least because data has mass, and it takes increasing amounts of energy to move it around increasingly large systems.

It is also the case that much of that energy is wasted, as McDonald pointed out:

The first step in any compute process is to get all data that might possibly be remotely relevant back from storage. Most of it is `thrown away’ because it is not actually relevant to the specific task.

It is this conundrum at which this version of `in-memory computation’ is aimed. The goal is not to find that holy grail of a complete move away from the old Von Neumann architecture, but instead to get that architecture down to small blocks that can perform the small, functional tasks all systems need to run but which can sensibly be off-loaded from the processor.

One typical task, he suggested, might be managing a RAID system’s off-load, or compressing data to save storage space – which, as a by-product, provides a simple form of data encryption to improve data security. Here, the logical level for a `small block’ within the system is, in fact, the disk drive – a convenient dumb entity for storing data which can then be given a limited amount of intelligence by the addition of what McDonald called a Computational Storage Processor built into the drive.
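By way of illustration only – the class and method names below are my own invention, not anything from SNIA or NetApp – a minimal Python sketch of that compression off-load might look like this: a simulated drive whose on-board `processor’ compresses everything written to it, so the host never spends cycles or bandwidth on the job.

```python
import zlib


class ComputationalStorageDrive:
    """Toy model of a drive with an on-board Computational Storage Processor.

    The 'processor' here is just zlib running inside the drive object, so the
    host writes and reads plain bytes while compression happens entirely at
    the storage layer. Purely illustrative; not a real drive API.
    """

    def __init__(self):
        self._blocks = {}  # block_id -> compressed bytes

    def write(self, block_id: str, data: bytes) -> None:
        # Off-loaded task: compress before the data ever reaches the media.
        self._blocks[block_id] = zlib.compress(data)

    def read(self, block_id: str) -> bytes:
        # Decompression is likewise invisible to the host.
        return zlib.decompress(self._blocks[block_id])

    def stored_size(self, block_id: str) -> int:
        return len(self._blocks[block_id])


if __name__ == "__main__":
    drive = ComputationalStorageDrive()
    payload = b"red green red blue red " * 1000
    drive.write("block-0", payload)
    print("host wrote", len(payload), "bytes; drive stored", drive.stored_size("block-0"))
    assert drive.read("block-0") == payload
```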

The next step is already following on rapidly, using NVMe Solid State Drive units coupled with the associated PCIe bus for high-capacity communications. The idea here is to use Docker container technology to provide multipurpose compute tools for jobs like pre-sorting data. For example, instead of having all drives send their contents over the network to a processor which then seeks out `all the red ones’, the task can be turned around.

All drives can instead be sent the instruction: `find and send red ones’. This gives a parallel-processing time saving, and the main processor gets appropriate data that has been pre-cleaned as part of the upload process. Alternatively, containers can be sent which provide a real-time service, monitoring all data loaded onto a drive and responding to an instruction that effectively asks: `let me know when you see a red one’.
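Again purely as a sketch – the drives, records and `red’ predicate are all invented for the purpose – the reversal might look something like this in Python: the filter is shipped to each simulated drive, the drives evaluate it in parallel, and only the matches travel back to the host.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy 'drives': in reality each would be an NVMe SSD running a small
# containerised filter; here they are simply lists of records in memory.
DRIVES = [
    [{"id": 1, "colour": "red"}, {"id": 2, "colour": "blue"}],
    [{"id": 3, "colour": "green"}, {"id": 4, "colour": "red"}],
    [{"id": 5, "colour": "red"}, {"id": 6, "colour": "blue"}],
]


def find_and_send(drive, predicate):
    # 'find and send red ones': the filter runs where the data lives, so only
    # matching records ever cross the bus back to the host.
    return [record for record in drive if predicate(record)]


def host_query(predicate):
    # Every drive evaluates the predicate in parallel; the host merely merges
    # the already-filtered results.
    with ThreadPoolExecutor(max_workers=len(DRIVES)) as pool:
        per_drive = list(pool.map(lambda d: find_and_send(d, predicate), DRIVES))
    return [record for matches in per_drive for record in matches]


def watch(incoming_records, predicate, notify):
    # 'let me know when you see a red one': inspect each record as it arrives
    # on the drive and notify the host immediately on a match.
    for record in incoming_records:
        if predicate(record):
            notify(record)


if __name__ == "__main__":
    is_red = lambda record: record["colour"] == "red"
    print(host_query(is_red))  # only the red records leave the drives
    watch([{"id": 7, "colour": "red"}], is_red, print)
```

The `watch’ function is the second variant described above: rather than a one-off query, the predicate sits on the drive and fires as data arrives.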

The beauty of this approach is that it is easy to understand as a concept, easy to visualise what technology might be required and how it would operate and, perhaps most important of all, easy to see the applications it could be used for and to price up the potential benefits before a penny has been spent. In a world where big data has got so big that no one bothers calling it `big data’ any more, it is easy to see the advantages of getting the storage systems themselves to do vital pre-sorts before the data has even left the `platter’. And when coupled with the use of SSDs and the PCIe bus, it is not too difficult to see the future complexity spectrum spreading out, and to start making plans for how it might be exploited.

My take

The SNIA presentation is something of an object lesson in getting across what is being made available and how it might be used. You don’t need to understand semiconductor technology to see what it can do, work out why that might be useful, and consequently want a piece of the action. It is a lesson many tech vendors still have to learn.
