2019 - the Martin version

Martin Banks, December 30, 2019
Summary:
Infrastructure introspection via Martin's 2019 highlights.


These days, ‘IT infrastructure’ as a subject ranges far beyond servers, disk drives and networking systems and topologies. The cloud has seen to it that infrastructure now includes just about anything that contributes to a company accessing and exploiting any business process that helps it meet its goals. With that in mind, here are some of the highlights from 2019.

Brute strength

This is an interesting indication of what IT infrastructures are now expected to survive and deliver on reliably. I am sure Alibaba threw every resource, including several kitchen sinks, at ensuring its services survived this one day, but it should be noted that it is just a one-day exercise, and it left the three-day Thanksgiving/Black Friday/Cyber Monday exercise (which actually includes the intervening Saturday and Sunday) trailing far behind.

Last year, figures from Alibaba and Adobe Analytics suggested that while Black Friday and Cyber Monday combined generated revenue of $13.9 billion in 2018, Alibaba's Singles Day generated more than double that, at $30.9 billion. This year, the estimate is more than $14.5 billion for Thanksgiving, Black Friday and Cyber Monday combined. Singles Day, by contrast, generated some $38.4 billion, with the first $1 billion handled in just 68 seconds.
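For a sense of the load those figures imply, here is a quick back-of-the-envelope calculation using only the totals quoted above:

```python
# Back-of-the-envelope arithmetic from the figures quoted above.
singles_day_total = 38.4e9          # USD, Singles Day 2019
first_billion_seconds = 68          # time taken to reach the first $1 billion
day_seconds = 24 * 60 * 60

peak_rate = 1e9 / first_billion_seconds          # opening burst, USD per second
average_rate = singles_day_total / day_seconds   # average over the full 24 hours

print(f"Opening burst: ~${peak_rate / 1e6:.1f} million per second")
print(f"24-hour average: ~${average_rate / 1e3:.0f} thousand per second")
```

That works out at roughly $14.7 million of orders a second at the open, and around $444,000 a second sustained across the whole day, which gives some idea of the load the underlying infrastructure has to absorb.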

It is, obviously, a very public test of the systems and services operating under real stress, and Alibaba claims the 24 hours of Singles Day resulted in zero downtime. At the same time, there appear to be no reports of any system failures during the Black Friday/Cyber Monday weekend. Indeed, the only problems that have emerged are complaints about the huge spike in non-disposable plastic packaging these exercises generate.

But they make an important point: that much business being successfully handled by cloud services with little or no downtime is a real demonstration of the resilience now available to businesses.

Cloud + Kubernetes = no lock-in

As one of the big infrastructure providers, VMware is probably best placed to use its breadth, depth and strength of coverage to gently move users, especially cloud users, towards what would be just the latest version of the classic technological lock-in. It is a legitimate fear as well, for cloud services are both simple to use – in theory, and so long as there are at least some people on the payroll who understand them well – and full of pitfalls and ‘gotchas’ for the unwary.

The company’s CEO, Pat Gelsinger, doesn’t see it that way, and one of the key reasons is the use of Kubernetes, fast becoming the universal intermediary between all cloud services:

It is not a lock-in because it is opening up the real multi-cloud world for users, with all those platform options. And with Kubernetes we also have available the biggest up-stream anti-lock-in technology there is. Everyone is already using it.

Kubernetes also plays an important role in helping users move their thinking up whole levels of abstraction, away from struggling directly with the technology, which will be the key to real growth in the cloud market and user base.
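To illustrate the portability argument (a minimal sketch rather than VMware's own tooling; the cluster context names and container image are hypothetical), the same declarative workload definition can be applied unchanged to any conforming Kubernetes cluster, whichever cloud happens to host it:

```python
# Minimal sketch: one Deployment definition applied to clusters on different
# clouds via the official Kubernetes Python client. The kubeconfig context
# names and the container image are hypothetical placeholders.
from kubernetes import client, config

def deploy(context_name: str) -> None:
    # Load credentials for the chosen cluster from the local kubeconfig.
    config.load_kube_config(context=context_name)

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web-frontend"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="example/web:1.0")]
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

# The workload definition never changes between providers; only the target
# kubeconfig context does.
for ctx in ("aws-cluster", "azure-cluster", "on-prem-cluster"):
    deploy(ctx)
```

The point is not the specific client library used here, but that the unit of deployment is expressed against Kubernetes rather than against any one provider's proprietary service.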

A channel to make infrastructure ‘disappear’

This is a trend that has really started to emerge during 2019 and can be expected to become a leading factor in the development and growth of cloud services during 2020. There are many pitfalls for every business looking to transition to the cloud, most of which revolve around making sense of the complexities of hardware resource planning, software interdependencies and the management of collaboration across it all.

This is another aspect of the move that all vendors are now finding necessary – the need to work their way up the levels of abstraction away from the base technologies, so that a wider range of possible users can start to understand what they can achieve using the cloud, and how they can achieve it in practice.

Two of these stories are about Splunk, but they tell an increasingly universal story of how vendors working at the bleeding edge of technology are having to abstract themselves away from their home ground and acknowledge that they now have to map onto what users are seeking – business solutions, not technology.

The third is about Computacenter, which is learning fast that it now faces, at the top level, pretty much the same issues as its customers. The company and its customers both have to match much quicker go-to-market cycles. Both now face increasingly disruptive competitors that are more agile at exploiting cloud services, while knowing where their data is, who is using it, and what they are using it for has become vital.

One also points to the need, certainly felt by small and medium-sized businesses but also by the larger departments and regional offices of major enterprises, for a new layer of intermediary between them and the cloud service providers and tech vendors. In essence this is not new, just a re-run of the old value-added reseller model. As Splunk has noted, getting them started (or redirected) is not proving easy, but the need for them is now becoming pressing.

Unify, then unify some more

A different view of the wider infrastructure ‘abstraction’ trend was that put forward by Tibco, and with the continuing exponential growth in the rate of data generation it is likely to become a necessary approach for most data-hungry businesses.

This is the need for users to unify their operations around their data rather than treating it as a by-product of their applications. That means having the right tools in place both to exploit that data better ‘now’ and to develop and engineer the inevitable new innovations that will be needed to exploit it even further in the future. It also means they need tools that help them avoid building new, different data silos.

Tibco sees this change as the opportunity to build new cloud-native infrastructures, with tools specifically designed for building cloud business and process management both from the ground up and the top down.

Buzz-phrases such as ‘fail fast’ and ‘fail often’ are now widely used, but if they are to be of any value then users will need to work in an environment where the ‘suck it and see’ approach is easily engineered.

AI/ML is changing infrastructure

The classic x86 processor architecture has been at, or near, the heart of computing for several decades, and with the coming of cloud systems would seem to have an assured future for a good few years yet. After all, its place at the centre of just about all the commodity hardware systems used in providing cloud services would seem to ensure that. But the twin thrust of AI/ML development looks set to change that over 2020 – at least where there is extensive use of AI/ML applications.

Rule-based computing is no longer sufficient, and we are now moving into new areas where there are no clear rules to work with. Until now, applications have worked with datasets that are, in effect, the edited highlights of life. AI/ML is creating an environment where dealing directly with the chaos of life is crucial.

So scientists came up with the statistical computing model and, according to Huawei, this is set to become a mainstream computing model. Five years from now it expects to see such systems consuming 80% of total computing power as both AI and ML applications and publicly available services become far more widespread.
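The distinction is easiest to see in miniature. The sketch below uses toy data and a made-up fraud rule, purely for illustration, to contrast a hand-written rule with a model that estimates its decision boundary statistically from the data itself:

```python
# Toy contrast between rule-based logic and a statistical (learned) model.
# The transaction data and the 'fraud' rule are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
amounts = rng.uniform(1, 1000, size=500)      # transaction value
hours = rng.integers(0, 24, size=500)         # hour of day
X = np.column_stack([amounts, hours])
# The messy underlying reality the rule-writer never sees in full:
y = ((amounts > 600) & ((hours < 6) | (hours > 22))).astype(int)

# Rule-based: an explicit threshold someone has to guess and maintain.
rule_pred = (amounts > 500).astype(int)

# Statistical: the boundary is estimated from the data itself.
model = LogisticRegression(max_iter=1000).fit(X, y)
learned_pred = model.predict(X)

print("rule accuracy:   ", (rule_pred == y).mean())
print("learned accuracy:", (learned_pred == y).mean())
```

The hand-written rule only covers the cases its author anticipated; the statistical model adjusts as the data does, which is exactly the property that demands a different balance of compute.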

This will certainly impact the dominance of the x86 processor architecture. Huawei, for example, has developed new processor architectures specifically with AI/ML in mind, and as these are based on core technology from ARM – technology widely used by other processor producers – others will certainly be following that path.

As for Intel, creator of the x86 family, it is not standing still and has a family of processor sub-systems available that covers the main AI/ML architecture options: scalar, vector, spatial and tensor. According to Remi El-Ouazzane, VP and COO of Intel's AI Product Group, this really is an attempt to cover all options for the near future; there are no really clear markets yet, for the chaos is opening up an infinite variety of approaches as companies take their own paths to their own solutions:

You know, if I could pick one I would do it tomorrow morning, because it would be more efficient for everybody.  But the market is very heterogeneous and not one where one size fits all. And even in one market consumers take different paths. So that is why there are four architectures running in parallel.
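As a rough software analogy for why those options coexist (this is not a description of Intel's silicon, just an illustration of how the same arithmetic can take very different shapes), consider one workload expressed in scalar, vector and batched-tensor form:

```python
# Software analogy only: the same multiply-accumulate work expressed in
# scalar, vector and batched-tensor styles. Array sizes are arbitrary.
import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Scalar style: one multiply-accumulate at a time.
t0 = time.perf_counter()
acc = 0.0
for x, y in zip(a, b):
    acc += x * y
t_scalar = time.perf_counter() - t0

# Vector style: the whole operation handed over as a single array op.
t0 = time.perf_counter()
acc_vec = np.dot(a, b)
t_vector = time.perf_counter() - t0

# Tensor style: batched matrix products, the shape deep learning favours.
A = np.random.rand(64, 128, 128)
B = np.random.rand(64, 128, 128)
t0 = time.perf_counter()
C = A @ B
t_tensor = time.perf_counter() - t0

print(f"scalar: {t_scalar:.3f}s  vector: {t_vector:.4f}s  tensor batch: {t_tensor:.4f}s")
```

Each style rewards a different kind of hardware, which is the heterogeneity El-Ouazzane is describing.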

You can’t secure what you don’t know about

They don’t call it the World Wide Web for nothing, for it now has strands everywhere and anywhere. That is part of its brilliance, of course, for it creates a world in which users can not only connect to their primary systems from anywhere in the world, but also create links with other businesses and service providers, anywhere.

From a business point of view that access and flexibility is incredibly useful, but from a security point of view it is a potential disaster waiting to happen. It is impossible to secure a cloud service if it is impossible to know what is connected to it, where it is, who owns it, and what it is being used for. Unfortunately, most cloud services end up this way, with no one able to tell what is now part of them, or what everything is doing.

Solving that problem is what Qualys has set out to achieve with its Global IT Asset Discovery and Inventory app. This is, in effect, an agent that provides monitoring capabilities right across a business network, identifying what is on the network, when it is on, and what it is doing. Not only that, but unlike some other approaches to this problem – which take ‘snapshots’ of the whole network – it provides real-time monitoring: a continuous, real-time inventory of known and unknown assets across the global IT footprint of a business, be they on-premises, endpoints, multi-cloud, mobile, containers, OT or IoT services and devices.

This way, security teams will at least know where to look, what they are looking for, and what remedial work is needed as soon as a security issue emerges.
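A rough sketch of the continuous-inventory idea (hypothetical code, not the Qualys API, and the asset list is invented): poll whatever discovery feed is available on a loop and report anything that appears or vanishes, rather than relying on occasional whole-network snapshots.

```python
# Hypothetical sketch of continuous asset inventory (not the Qualys API):
# poll a discovery feed repeatedly and report assets that appear or vanish,
# instead of relying on occasional whole-network snapshots.
import time
from typing import Dict

def discover_assets() -> Dict[str, str]:
    """Stand-in for the real discovery feed (agents, network sensors,
    cloud provider inventories). Returns {asset_id: description}."""
    return {
        "10.0.0.5": "on-prem database server",
        "i-0abc123": "cloud VM, eu-west",
        "sensor-42": "IoT temperature sensor",
    }

def monitor(polls: int = 3, interval_seconds: float = 1.0) -> None:
    known: Dict[str, str] = {}
    for _ in range(polls):
        current = discover_assets()
        for asset in current.keys() - known.keys():
            print(f"NEW asset: {asset} ({current[asset]})")
        for asset in known.keys() - current.keys():
            print(f"Asset disappeared: {asset}")
        known = current
        time.sleep(interval_seconds)

if __name__ == "__main__":
    monitor()
```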

The 'edge' is moving center stage

As the volumes of data grow ever bigger, its gravity gets harder to resist, and it starts to pull compute capabilities towards it. This will require users to reconsider their IT infrastructures and architectures, for the edge is where much of the original data of a business is generated, and so the compute resources are, at least logically and in many cases physically, heading out there as well.

The time of the centralised data center as the resource pool for everything is nearing the end of its life, and the virtualised, distributed data center is moving centre stage. That is going to bring new systems and technologies into play. Even older technologies may get a new lease of life.

For example, a spinning disk technology, Heat Assisted Magnetic Recording (HAMR), has been in development for a while now and is expected to deliver 40 to 50 terabytes of storage in a single 3.5-inch drive. This is expected to fit very well with the needs of edge computing services for much more cost-effective bulk data storage to manage the growing data gravity.

Moving much of IT out to the edge will mark a significant change for every company, and that brings with it significant risks as well as significant benefits. Once again, this looks like a change during 2020 that will affect the vendors as much as their customers. The vendors will need to understand that they succeed by virtue of their customers succeeding, which means understanding what those customers need and how to deliver it.

The year of VDI? Quite possibly

Virtual Desktop Infrastructure (VDI) has never penetrated more than 10% of the business market, despite the fact it seems to have so much going for it, such as the security of everything being run on the server, with nothing other than a pixelated representation of what is going on back there ever appearing on the user’s screen. The fact that there is a market can be seen in the number of people wanting to use Chromebooks.

VDI is, of course, home turf for Citrix, but this past year has seen Nutanix take a pitch at it, not least because VDI still requires a lot of server-side power. And even then it still does not offer graphics, nor does it have a GPU, at a time when even phones have one. Providing this has been one of the aims for Nutanix in moving into the market. So now VDI can do graphics, which means travelling execs can have full-power, secure remote access to their work environment wherever they are, regardless of the insecurities that may be present locally.

It is that location-agnostic security that attracted Schroders to Citrix, for nothing except a pixelated screenshot leaves the server. If the user makes changes they are sent back as pixel streams. And while the common opinion is that the pixel streams cannot be read even if tapped into, the company has opted for the simple belt-and-braces approach of encrypting the pixel stream anyway, as secondary security.
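In outline, that secondary layer amounts to symmetric encryption of each frame before it goes onto the wire. The sketch below is a hypothetical illustration of the idea, not Citrix's actual protocol:

```python
# Hypothetical 'belt and braces' sketch (not Citrix's actual protocol):
# each outgoing pixel frame is symmetrically encrypted before transmission,
# so a tapped stream yields only ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, exchanged/derived out of band
cipher = Fernet(key)

def send_frame(frame_bytes: bytes) -> bytes:
    """Encrypt a rendered frame before it leaves the server."""
    return cipher.encrypt(frame_bytes)

def receive_frame(ciphertext: bytes) -> bytes:
    """Decrypt on the client just before display."""
    return cipher.decrypt(ciphertext)

frame = b"\x00" * 1024             # stand-in for a compressed pixel frame
assert receive_frame(send_frame(frame)) == frame
```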

So users can now log in anywhere, anytime, and from absolutely any device. The company even jokes about BYDD – Bring Your Daughter’s Device – but is still waiting for a senior exec to turn up with a pink iPad.

Going commercial with Open Source

Open source software is great stuff and has found its natural home as the code infrastructure of just about all cloud services. Yet it does have an elephant in its ‘room’. The potential impact of open source licences on the ownership and potential revenues of applications constructed from them has been, and for many users probably still is, difficult to calculate.

One of the key issues is that open source licences originated at a time when most open source code was free and was worked on and developed by anyone who was interested in the application and had the appropriate skills. Some licences had – and therefore still have – ‘gotchas’ about commercial use of the code.

Much of this work was on small, useful tools rather than full-scale applications, but as time has moved on those tools have been used in other, bigger tools and services, which in turn have been used in applications – increasingly commercial ones. This creates a complex ‘mess’ of possible legal issues, with potentially serious consequences. They played their part, for example, in providing a lever the US authorities could use against Huawei in the still-rolling 5G mobile comms saga.

French company CAST has become one of the first to provide investigatory tools that can identify individual open source apps and utilities buried in bigger applications, meaning vendors and users can at last start to identify risk areas. Meanwhile GitHub, one of the major open source distribution companies, now recognises this as a real problem area and is working on solutions.
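At a much smaller scale, the same idea can be approximated by reading the licence metadata that packages declare. The sketch below is illustrative only, not CAST's or GitHub's tooling, and the watch list is an example rather than legal advice: it walks the Python packages installed alongside an application and flags those whose declared licences a commercial team might want to review.

```python
# Rough sketch (not CAST's or GitHub's tooling): list the Python packages
# installed in an application's environment and flag declared licences
# that a commercial team might want to review. The watch list is only
# an example, not legal advice.
from importlib.metadata import distributions

WATCH_LIST = ("GPL", "AGPL", "SSPL")

def audit_licences() -> None:
    for dist in distributions():
        name = dist.metadata.get("Name", "unknown")
        licence = dist.metadata.get("License") or ""
        classifiers = dist.metadata.get_all("Classifier") or []
        declared = " ".join([licence] + [c for c in classifiers if c.startswith("License ::")])
        marker = "REVIEW" if any(term in declared for term in WATCH_LIST) else "ok"
        print(f"[{marker:6}] {name}: {licence or 'no licence metadata'}")

if __name__ == "__main__":
    audit_licences()
```

Real dependency trees also pull in compiled libraries, vendored code and transitive dependencies that no package manager records, which is exactly the gap the specialist tools aim to close.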

Let’s get political

At first sight there would seem to be little direct connection between international politics, especially the subject of Brexit, and the workings of IT infrastructure, but 2019 threw one up, and now that the UK seems set on pressing ahead with leaving the EU, one must expect – or at least hope – that 2020 will see some resolution to the problem.

This was the story about a research study conducted by University College London (UCL), which suggested that revenue of around £174 billion a year generated for the UK as a key hub in the global movement of data would be almost instantly at risk in the event of a No Deal Brexit. In the great scheme of things set to happen during 2020 the possibility of a No Deal result is still very much alive, so the possibility of losing this business and this traffic must remain reasonably high. In addition, given the number of regulatory bodies the UK will no longer be part of, it is quite possible that those which currently govern the management and flow of such trade may not be highest on a very packed agenda.

For the management of all data traffic, therefore, the coming year is likely to be ‘interesting’.
