Businesses on the verge of specifying their next major data center update face one serious question for CIOs – and, for that matter, line-of-business managers as well. Can they really predict what type of data center their businesses will require in, say, 2021? And if they can’t, what type of data center should they be thinking about?
Answering such a question involves two distinct factors: what is happening with data center technology and IT generally, and where their businesses and markets are headed. For the first time, CIOs and business managers stand a good chance of getting badly out of sync on both fronts, and market factors suggest it will happen sooner rather than later in the natural data center lifecycle.
Marketplaces are now in flux, with more and more being seriously disrupted by new business models and market approaches. This unexpected disruption is forcing business leaders to re-evaluate their strategic direction and to build business models that are ready to respond to unexpected attacks.
The headline-grabbing examples of Uber, Airbnb and Netflix demonstrate how network effects, coupled with unlocking previously untapped business value, have a direct impact on market sectors. These are not the only examples, but the prevailing sense is that the disruptive impact they have on marketplaces will be both fast-acting and permanent.
The fundamental shift is occurring not only within technology but also within business models that have until now stood the test of time. It even challenges governments, whose rules need to be reconsidered in the interest of the consumer.
Meanwhile, technologies such as Software-Defined Data Centers (SDDC) – indeed, Software-Defined anything and everything – are challenging the rigidity of traditional data center design. The smartphone-driven advent of simplicity in our daily lives has not extended to the data center, where complexity still rules and is taken as the norm.
The big risk
The big risk for businesses wishing to respond is that over the next five years – the expected lifecycle of the next data center upgrade they make – they stand a real chance of becoming seriously, damagingly out of date, both in their choice of data center infrastructure and in their approach to doing business.
This runs counter to the traditional, risk-averse business management model of changing as little as possible. ‘If it works, don’t change it’ used to be the sensible option, but no longer.
While modern data centers increasingly use standard hardware, where a five-year lifecycle is considered acceptable for traditional applications, it is the change in application lifecycles that is starting to force real change. Applications can now be developed, put into production and thrown away with lifecycles measured in months. Add in accelerated upgrade cycles and it is easy to see why the traditional approach to data center specification needs a radical overhaul. But it doesn’t stop there.
The growth in the use of DevOps and the Continuous Delivery (CD) model of application updates means that applications are no longer installed as huge, disruptive packages that require systems to be taken out of production. Instead, applications evolve through individual adjustments and additions, released as soon as the coder’s ink is dry. The argument goes that this method improves reliability, because problems are discovered and can be fixed far more quickly than was the case in the past.
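As a rough illustration of that shift – small changes deployed one at a time, each verified immediately, with a failing change rolled back on its own rather than taking the whole system out of production – here is a minimal, hypothetical sketch. The function names are illustrative only, not any real CD tool’s API:

```python
# Illustrative sketch of the Continuous Delivery idea: deploy small
# changes individually, verify each one, and revert only the change
# that fails -- the rest of the system stays in production throughout.

def deploy_incrementally(changes, passes_checks):
    """Apply changes one by one, rolling back any that fail checks.

    changes: list of change identifiers, in commit order.
    passes_checks: callable simulating automated post-deploy checks.
    Returns a (deployed, rolled_back) pair of lists.
    """
    deployed, rolled_back = [], []
    for change in changes:
        deployed.append(change)          # push this single change live
        if not passes_checks(change):    # automated verification step
            deployed.pop()               # revert just this one change
            rolled_back.append(change)   # everything else stays up
    return deployed, rolled_back


if __name__ == "__main__":
    ok, bad = deploy_incrementally(
        ["add-login-button", "broken-query", "tweak-cache-ttl"],
        passes_checks=lambda change: change != "broken-query",
    )
    print("deployed:", ok)
    print("rolled back:", bad)
```

The contrast with the old model is the unit of failure: a bad change costs one rollback, not a whole release window.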
Taken together, it is increasingly likely that not changing and sticking with the traditional approach to data center architecture will be the most risky call a company can make. The new logic is that being risk averse is now the biggest risk of all.
The current data center operations model is still largely based on the notion of single, large applications being permanently assigned to whole racks of compute resources. In the same way that Uber and Airbnb are dramatically changing whole business models, so ‘software-defined’ architectures are changing data centers. The old, big-server approach is now just that – old.
This ‘old’ model no longer makes sense, given the development context outlined above and the emergence of new approaches to the data center that support that development model. Public cloud deployments, for example, are now an accepted part of many new application deployments, although for the most part large enterprises prefer a hybrid approach where cloud is restricted to private instances.
SDDC architectures support these more flexible approaches, allowing software to define the environment applications need by self-configuring and self-optimising the hardware.
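The ‘software-defined’ idea can be sketched as a desired-state reconciliation loop: software declares what an application needs, and a control loop works out the actions required to bring the actual resource allocation into line. This is a generic pattern sketch under stated assumptions, not any vendor’s control-plane API:

```python
# Hypothetical sketch of desired-state reconciliation, the pattern
# behind software-defined infrastructure: compare what the application
# declares it needs with what is currently allocated, and emit the
# scaling actions that close the gap.

def reconcile(desired, actual):
    """Return actions needed to move `actual` resources to `desired`.

    desired / actual: dicts mapping resource name -> units allocated.
    Each action is a (verb, resource, delta) tuple.
    """
    actions = []
    for resource, want in desired.items():
        have = actual.get(resource, 0)
        if want > have:
            actions.append(("scale_up", resource, want - have))
        elif want < have:
            actions.append(("scale_down", resource, have - want))
    return actions


if __name__ == "__main__":
    desired = {"vcpus": 16, "storage_tb": 4}
    actual = {"vcpus": 8, "storage_tb": 6}
    print(reconcile(desired, actual))
```

In a real SDDC the loop runs continuously, so scaling an application becomes a matter of changing the declaration rather than re-cabling racks.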
Looking to the future, the data center will continue to exist, but it will be a noticeably different animal because the traditional architecture of server racks with their own storage resources, dedicated to specific applications, is already being consigned to history. Scaling applications and their resources will become a function of that self-configuration and optimisation.
Risk aware or risk averse?
The need, therefore, is to be risk aware and check out the evidence of what is happening now in both data center technology and your own marketplace.
Risk-averse arguments are probably no longer valid. It used to be right to say: ‘we need to run SAP on an approved and validated platform, so we need to stay with the same technology’. But that is no longer the case, now that most major business applications – including SAP – can run in validated hyperconverged data center environments. You might say: but doesn’t SAP fall into the ‘old’ bucket of applications that need to be ruggedized and made bomb-proof by locating them in my nuclear-attack-proof data center?
Not really. The modern SDDC helps improve performance and delivers operational flexibility for all types of application landscape. Applications like SAP become simply part of the mix of applications a business uses, rather than high-maintenance, demanding special cases. SAP itself frequently talks about simplified landscapes. It therefore makes sense to include large enterprise application deployments as an integral part of a strategy that addresses the twin needs of agility and resilience.
In short – do not be risk averse, be risk aware.
Bonus points: check out this customer story video, which speaks to the topic of moving large data sets at low risk.
Image credit: still taken from Nutanix video