Some interesting thoughts came to the fore during the recent ‘Business of Cloud, Data Center and Hosting Summit’ run by analyst firm 451 Research. While the background was painted with figures drawn from solid research into cloud usage trends across the cloud services marketplace, the best thoughts were of the still-arguable, ‘we-reckon…’ variety.
Such ‘blue-sky’ thinking is, of course, essential fuel for the planning every CIO needs to have permanently underway, though it is often secondary to the grist needed for the everyday mill of keeping the lights of IT burning.
So one of the highlights of the event was when two of the company’s founding fathers, Distinguished Analyst John Abbott and Research Vice President Andy Lawrence, sat down head-to-head to kick around ideas of where IT may be heading.
In fact there was little blood spilt, as they largely agreed on the trends they see coming down the line. Those trends, however, do highlight a broader range of options for CIOs to consider, especially when the wider connotations of the often ill-used ‘hybrid cloud’ tag are the subject at hand.
That tag, for example, seems to get used – especially by vendors – to define a straightforward divide between doing everything on-premises, even if it is all cloud-delivered, and everything ‘out there’ in hosted, public services. The latter, of course, are then defined as gargantuan organizations where the service requirements of even the largest enterprises are seen as comparatively insignificant.
One consequence of such thinking is that both concepts – the enterprise data center or the huge resources of AWS, Microsoft and Google – lead to a ‘lock-in’ of sorts. The investment needed in either, as capital investment or long-term contract commitments, can seem to degrade the potential for flexibility that the cloud offers.
One solution Abbott and Lawrence see as an alternative, additional tool for CIOs to consider is the appearance of micro-data centers. They see these as having a role at the edge of corporate networks, certainly to begin with. In other words, as small systems designed to provide either specific, specialist services or temporary additional resource at points in the enterprise network where they are needed.
These will not be just virtual systems flown in to occupy a corner of a Software Defined Data Center (SDDC) environment, but self-contained physical systems in small containers of some kind – in effect, the data center in a suitcase or backpack. Some systems of this physical form have been around for a while, but so far they have tended to fill specialised roles, such as might be required by the military, or by petrochemical companies operating complex exploratory systems in very remote areas.
The form factor is also being more widely used now as the basis for dedicated appliances that run specific resources or capabilities.
But with the ever greater commoditisation of data center technologies around standard architectures, coupled with the inevitable effects of Moore’s Law on both rising performance and falling costs, the idea of a fully-functional, fully-definable resource that can be plugged in and be available almost immediately could find a wide range of use cases.
Indeed, though Abbott and Lawrence see it as an edge resource, it could become the core resource for some businesses – a sort of Bring Your Own Data Center. Here, hierarchical, distributed networks of such appliances, perhaps even assigned to individual staff, could be used when and where necessary, anywhere a connection is available.
To this end, the pair do see this model complementing the arrival of 5G mobile communications as a strong possibility. This would, as they point out, create a market that is attractive to both enterprise users and the vendor community.
It also maps onto another trend they see coming, the extension of SDDC systems – that whole concept of ‘Software Defined anything and everything’ – into what they call composable systems. These will be based upon the availability of fluid pools of resources working with software-defined intelligence and unified application programming interfaces.
The pair did express some doubts about the continued relevance of Moore’s Law as it applies to the development and production of processor chips, pondering whether it had outlived its applicability. At one level this may indeed be the case, but it also seems likely that its relevance is moving up to the next level of abstraction.
It is possible now that Moore’s Law applies to the work that the processors and memory chips are supposed to do for users. Maybe it is that work which is starting to double in performance and capability, and halve in cost to the business, every 18 months. In that context, the development of micro-data centers and composable systems could become an attractive approach by putting tailored work resources precisely where they are required, both logically and physically.
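As a rough illustration only, the speculative ‘Moore’s Law of work done’ described above can be sketched as simple compounding arithmetic. The function below is a hypothetical back-of-envelope model, not anything proposed by 451 Research: it assumes the work a unit of IT resource delivers doubles, and its unit cost halves, every 18 months.

```python
def projected_work_and_cost(years, base_work=1.0, base_cost=1.0):
    """Project relative work capacity and unit cost after `years`,
    assuming an 18-month doubling (work) and halving (cost) period.
    Purely illustrative of the article's speculation."""
    periods = years / 1.5  # one period = 18 months = 1.5 years
    work = base_work * 2 ** periods   # capability compounds upward
    cost = base_cost / 2 ** periods   # cost to the business compounds downward
    return work, cost

# After three years (two 18-month periods): 4x the work at a quarter the cost.
work, cost = projected_work_and_cost(3)
print(work, cost)  # 4.0 0.25
```

On this kind of compounding, even a conservative planning horizon of three to five years changes the economics a CIO should assume for edge appliances.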
It could also make them more cost-effective in the generation of revenue and profits than opting for the low-cost option of either a private data center or a large public cloud resource farm.
A potentially complementary line of discussion at the event came from 451’s Research Director for Service Providers, Al Sadowski. His focus was on the growing market for OpenStack, which is currently largely made up of service providers, with Rackspace leading the pack.
However, he does see a growth in companies producing packaged distributions of the system that could overcome some of its current drawbacks – in particular the complexity of deploying it at present. This could certainly see the system being re-purposed beyond the service provider market. Sadowski sees it having a future role in hosted systems, but it could also apply well to the micro-data center environment, where its modularity and lack of single points of control could make an important contribution.
Some interesting thoughts from 451 that could help CIOs who are starting to look for more flexibility and service potential, rather than just a lower operating cost.