A coming year of platform instability
- Summary:
- Service-based approaches to IT are set to accelerate in 2016. Charlie Bess explains what this means for platform instability.
I believe the coming year will do nothing but press down on the accelerator, forcing even early adopters to swerve along the path to adoption. In 2016, adoption plans will shift toward a different approach to computing consumption.
Even though software containerization has been around for at least a decade, 2016 will see approaches like Docker and Rocket move deeper into software architecture and deployment. One of the greatest values of containers is that they share operating system resources rather than duplicating hardware abstraction and support for each instance, a benefit that has become widely recognized over the past year.
I feel the implications of software containers will be as profound as the effect of shipping containers on the shipping business. My rationale is that the software container approach takes operational automation and deployment capabilities, along with the associated flexibility and cost reduction, to a whole new level of granularity.
Software containerization enables greater utilization and microservices, allowing applications to become aggregations of services rather than monolithic maintenance nightmares. The real value will come not from changes in the behavior of infrastructure support personnel but from changes in the thinking of architects and developers, since containerization encourages cloud-native development.
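To make the "aggregation of services" idea concrete, here is a minimal sketch in Python. All names here (greet_service, report_service, Application) are hypothetical illustrations, not part of any real framework; the point is only that each unit of logic can be replaced or redeployed independently of the whole.

```python
def greet_service(name):
    """A small, independently deployable unit of business logic."""
    return f"Hello, {name}"

def report_service(items):
    """Another independent service; it can be updated or scaled alone."""
    return {"count": len(items), "items": sorted(items)}

class Application:
    """The application aggregates services instead of embedding their logic."""
    def __init__(self, services):
        # name -> callable; a service can be swapped out without touching the rest
        self.services = dict(services)

    def call(self, service_name, *args):
        return self.services[service_name](*args)

app = Application({"greet": greet_service, "report": report_service})
print(app.call("greet", "world"))      # Hello, world
print(app.call("report", ["b", "a"]))  # {'count': 2, 'items': ['a', 'b']}
```

In a containerized deployment each of these services would run in its own container, but the architectural shift (composition over a monolith) is the same.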
At the same time, a macro-level, aggregated approach to operating systems will become more common as solutions like Mesos and products like Mesosphere begin to gain traction.
The virtual machine approach was based on the fact that traditional applications required only a fraction of the computing power available. We therefore subdivided the increasing abundance of computing into ever smaller portions to enable isolation and efficiency: VMs, an approach that dates back to at least 1972. In doing so, however, operational management overhead can grow along with the number of VMs, cutting into the expected cost savings.
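The trade-off above can be illustrated with some back-of-the-envelope arithmetic. The figures below are hypothetical, chosen only to show how per-VM management overhead eats into consolidation savings; real numbers vary widely by organization.

```python
# Hypothetical figures purely to illustrate the trade-off; not real data.
physical_hosts = 100
host_cost = 5000          # assumed annual cost per physical host
utilization = 0.15        # traditional apps use only a fraction of each host

# Consolidate the same workloads onto fewer, better-utilized hosts.
target_utilization = 0.75
hosts_after = int(physical_hosts * utilization / target_utilization)
hardware_savings = (physical_hosts - hosts_after) * host_cost

# But each workload is now a VM with its own management overhead.
vms = physical_hosts         # one VM per original workload
mgmt_cost_per_vm = 1200      # assumed annual patching/monitoring cost per VM
mgmt_overhead = vms * mgmt_cost_per_vm

net_savings = hardware_savings - mgmt_overhead
print(hosts_after, hardware_savings, net_savings)  # 20 400000 280000
```

With these assumed numbers, nearly a third of the hardware savings is consumed by the cost of managing the VM fleet, which is the dynamic the paragraph above describes.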
The aggregated computing approach abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual). This enables more fault-tolerant and elastic distributed systems.
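A toy sketch of that pooled view, under stated assumptions: the class names (Node, Cluster) and the first-fit placement below are hypothetical simplifications for illustration, not the actual Mesos API, which uses a two-level resource-offer model.

```python
class Node:
    """One machine (physical or virtual) contributing resources to the pool."""
    def __init__(self, name, cpus, mem_gb):
        self.name, self.cpus, self.mem_gb = name, cpus, mem_gb

class Cluster:
    """Presents many nodes as a single pool of CPU and memory."""
    def __init__(self, nodes):
        self.nodes = nodes

    def total(self):
        # Callers see aggregate capacity, not individual machines.
        return (sum(n.cpus for n in self.nodes),
                sum(n.mem_gb for n in self.nodes))

    def place(self, task_cpus, task_mem):
        """Place a task on any node with spare capacity (first fit)."""
        for node in self.nodes:
            if node.cpus >= task_cpus and node.mem_gb >= task_mem:
                node.cpus -= task_cpus
                node.mem_gb -= task_mem
                return node.name
        return None  # pool exhausted; an elastic cluster would add nodes

cluster = Cluster([Node("a", 4, 16), Node("b", 8, 32)])
print(cluster.total())      # (12, 48): one pool, not two machines
print(cluster.place(6, 8))  # b: node "a" is too small, but the pool fits it
```

Because the scheduler places work wherever capacity exists, losing or adding a node changes only the size of the pool, which is what makes such systems more fault-tolerant and elastic.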
Although this sounds a bit like the grid computing approach tried at the turn of this century, the computing elements are treated more like cells in an organism working together than like amoebas in a petri dish, each on its own. This shared-pool approach lets developers (once again) strive for “write once and run anywhere”, whether on a local desktop, an internally hosted server, the cloud, or even all at once.
Organizations will need to take the change seriously, since it is not just an incremental improvement for operations but a revolution across the IT space.
Image credit: © Oleksiy Mark - Fotolia.com, Featured image via the author