For all the talk of computing as a utility, it seems that the motto 'You get what you pay for' applies as much to infrastructure-as-a-service (IaaS) as it does to any other commodity. The computing you get for Amazon's low prices may not be as cost-effective as that of other providers, depending on what you're using it for.
That's the message from a set of benchmark tests published last month by independent tester The Tolly Group, commissioned by IT and managed services provider Dimension Data. As you'd expect from research it commissioned and promotes, Dimension Data's cloud offering does well in the benchmarks. Let me disclose upfront too that Dimension Data's cloud unit recently paid me for some speaking engagements; but the company arranged to brief me on these benchmarks through normal media/analyst channels.
Some circumspection in interpreting the results is evidently advisable. Nevertheless, the factors that gave rise to the findings are worth digging into.
Finding the best fit
To my mind, the most important takeaway goes back to a memorable phrase favored by Netflix cloud architect Adrian Cockcroft: most organizations are "forklifting" existing applications into the cloud.
If that's what you're doing — just porting existing enterprise applications to a cloud platform in order to get economies of operation from a properly virtualized, automated and elastic computing infrastructure — then Amazon Web Services may not be the best fit for you. As Keao Caindec, CMO of Dimension Data's cloud business unit, told me: "It's very difficult to run certain enterprise applications that require vertical scale on a scale-out cloud."
Cloud advocates like Cockcroft argue that you should instead be adopting new application architectures that are designed to leverage the scale-out nature of true commodity cloud platforms. Meanwhile, the likes of Dimension Data, IBM and Rackspace are there to service enterprises that have not yet got around to following Cockcroft's advice.
Getting down to details, the benchmarks compared four infrastructure-as-a-service offerings: Amazon Web Services, Dimension Data Public Compute-as-a-Service, IBM Smartcloud and Rackspace Cloud. I'll highlight a couple of findings here. The full report is downloadable as a PDF from the Tolly Group website (registration required). Or if you want a shorter digest, InformationWeek's Charles Babcock wrote a very cogent summary (though I'm not sure why he chose to emphasize Dimension Data's ownership by Japanese telco NTT, which doesn't seem to have any specific bearing on the results).
The first finding, which compares server CPU performance, highlights the difficulty of making direct comparisons. As the Tolly Group notes in the report, "it was not always possible to match requirements so the closest configurations were used." For example, the IBM machines were running Red Hat Linux while the rest ran Ubuntu, and each provider's server used a different make or model of processor and motherboard. IBM and Rackspace didn't have an offering that could be tested for the entry-level single-processor instance. Having noted those provisos, here's the chart of results:

The key finding here is that the cheapest provider is also the slowest: Amazon, which charges one-third less than Dimension Data, takes 50-60 percent longer to complete the CPU benchmark. IBM will be pleased to note that Smartcloud comes out noticeably the fastest, but makes up for this by being proportionately more expensive (based on comparative pricing provided by Dimension Data). Based solely on this benchmark, you do almost precisely get what you pay for.
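That "you get what you pay for" arithmetic can be sketched as a simple effective-cost calculation: the cost of completing a unit of work is hourly price multiplied by time taken. The numbers below are illustrative placeholders derived from the rough ratios quoted above (Amazon a third cheaper, 50-60 percent slower), not the Tolly Group's actual measurements:

```python
# Rough price-performance sketch. Prices and times are hypothetical
# relative values, not figures from the Tolly report.
providers = {
    "Amazon":         {"price": 1.0, "time": 1.55},  # ~1/3 cheaper, ~55% slower
    "Dimension Data": {"price": 1.5, "time": 1.00},
    "IBM":            {"price": 2.0, "time": 0.85},  # fastest, but pricier
}

def cost_per_benchmark(price: float, time: float) -> float:
    """Effective cost of one benchmark run: price per hour x hours taken."""
    return price * time

for name, p in sorted(providers.items(),
                      key=lambda kv: cost_per_benchmark(**kv[1])):
    print(f"{name:15s} effective cost: {cost_per_benchmark(**p):.2f}")
```

With these illustrative inputs the effective costs land within about 15 percent of each other, which is the point: a low sticker price and a slow benchmark time can cancel out.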
The picture skews further when you add in the results of other tests that show up even greater disparities in metrics such as system memory throughput, local file transaction speeds and local area network performance. Dimension Data argues that the combined effect of all these benchmarks demonstrates a clear value-for-money advantage over other providers: "Clients may spend 30 to 300 percent more using other clouds to achieve the same level of performance in our cloud." Here is the chart of results for network throughput:

There is a catch, though. The huge advantage in network throughput ("the only provider tested that delivers true Gigabit Ethernet-class throughput," crows Dimension Data) is achieved by allowing customers to individually fine-tune their network setup on the company's cloud infrastructure. As Caindec confirmed to me:
"We segment our traffic within our cloud using Cisco networking and we allow our clients to segment their systems within our cloud directly on Cisco networking hardware. That's why you see such a throughput difference here."
In fact, much of Dimension Data's performance advantage in the benchmarks is down to choosing the type of equipment that enterprises are already using for their 'scale-up' vertically integrated systems rather than the 'scale-out' custom commodity server infrastructure that AWS uses.
The hypervisor is VMware rather than Xen, the block-based storage is EMC, and the Cisco networking gives clients self-service control of built-in functions such as VLANs, firewalls, load balancers and VPNs. "A lot of it goes into the vendor choices we've made, but also how we've architected what we've created," Caindec concluded.
There are three takeaways to highlight from all this:
- For enterprises that are moving existing workloads to the cloud, the choice of provider will often make a difference to the value for money you experience. Shopping around, based on a clear understanding of what your workloads are and the impact of different architectures, is essential. As Caindec declares: "It's not all commodity, it's not all the same ... It's probably worthwhile to look into multiple providers because it can impact your application performance quite significantly."
- Cloud trading platforms such as the recently announced Deutsche Börse Cloud Exchange may have significant challenges in establishing a standard market price for pay-as-you-go compute infrastructure. That is not an insurmountable problem. James Mitchell, CEO of cloud futures broker Strategic Blue, often points out that commodities markets long ago worked out how to establish standards for trading purposes: "No one digs standard coal out of the ground." But customers will still have to be aware that a standard unit of compute may deliver different value depending on how the design of their application aligns with the underlying platform.
- If enterprises are going to get into the habit of choosing different types of clouds for different workloads, then that implies an impending fragmentation of the IaaS market into hundreds or even thousands of specialists rather than convergence on half a dozen global giants. This is a dramatically different future scenario than most observers currently predict.
UPDATE: Oh, and one other very important takeaway that I nearly forgot. Most enterprises will continue to get significant benefits from moving existing workloads to cloud providers that offer them suitable infrastructure-as-a-service propositions. Therefore they'll have even less incentive to do as Cockcroft suggests and turn to completely new, cloud-native application infrastructures. How that's going to pan out is anyone's guess, but I imagine it means we're going to continue arguing about what cloud computing really means for at least another decade, maybe more ...
Disclosure: As mentioned above, Dimension Data recently paid me a fee for speaking at customer events.
Image credits: Dollars in ball © suphakit73 - Fotolia.com; CPU and LAN charts © Tolly Enterprises, LLC