Cloudian and the building blocks of edge computing
- Summary:
- As data volumes grow ever bigger, data's gravity gets harder to resist, and it starts to pull compute capabilities towards it. This will require users to reconsider their IT infrastructures and architectures.
An interesting recent discussion with Michael Tso, CEO of Cloudian, has added more grist to the mill that churns away in my mind about the relationship of business users with technology.
A number of people have told me that the vast majority of potential users who go to any of the big cloud service providers end up confronting the same problem. An average-sized business, without the benefit of a huge IT department, has to deal in the service provider's language and terms of reference, not its own. So, for example, it will be required to convert its reality of 25,000 invoices processed a month into resource quotients such as nodes and clusters in order to specify its needs.
There is some sense to this, for outside of the universal basics of business, such as 'invoices', it could mean AWS, Microsoft Azure et al having to invest in specialist staff for each market sector that customers might come from – in reality every possible sector in the known universe.
The alternative is for customers to look for specialist help, such as the Managed Service Providers (MSPs). Arguably, they are perfectly suited to the job. Except that my information suggests that at the moment, for whatever reason, the majority seem content to trundle along with their current half dozen customers and live with the nice numbers they produce every month.
What I see coming along is the re-emergence of the ISV community that made Intel and Microsoft the 'big gorillas' they now are. These are the guys that know a marketplace and its problems, and have enough knowledge of the technology to create an environment where they can talk the customers' own language, using their dual skills – cloud and specific market needs – to get customers onto cloud services without those customers having to do it themselves.
Tso's response was interesting:
I’ve given that exact same speech to many people, so I agree with you completely.
Three rules for the edge
Founded some seven and a half years ago, Cloudian continues to subscribe to three basic ideas, all of which relate to scale. They are not new today, though it is fair to say that many companies – both vendor and user – are still looking to catch on to them, especially as they drive relentlessly towards multi-petabyte storage requirements being the 'norm'.
The first is that data was, and still is, being created at a faster rate than wide area network bandwidth can carry it, so the need was, and is, to bring the cloud to the data rather than the data to the cloud (the back-of-envelope sketch after these three ideas makes the arithmetic concrete).
The second is that all storage has to be hybrid, because the first idea leads straight to running mini clouds at 'the edge', where the real work is performed. That means all storage has to be aware of the other clouds and their storage, and be able to move data seamlessly between them.
The third is that data gets so big that it makes more sense to take compute to the data rather than the other way around, which means a convergence with compute platforms, creating a general-purpose computing platform that can process and analyse the data in situ, out at the edge.
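To put some rough numbers on that first idea, here is a minimal back-of-envelope sketch in Python. The data volumes, link speeds and efficiency factor are illustrative assumptions of mine, not Cloudian figures.

```python
# Back-of-envelope: how long does it take to move a petabyte over a WAN?
# All figures here are illustrative assumptions, not Cloudian's numbers.

def transfer_days(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Days needed to move `data_tb` terabytes over a `link_gbps` WAN link.

    `efficiency` discounts protocol overhead and contention on the link.
    """
    bits = data_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86_400                        # seconds -> days

# Moving 1 PB (1,000 TB) over a dedicated 1 Gbps WAN link:
print(f"{transfer_days(1_000, 1):.0f} days")       # ~132 days
# Even at 10 Gbps the wait is roughly two weeks:
print(f"{transfer_days(1_000, 10):.0f} days")      # ~13 days
```

At multi-petabyte scale those numbers only get worse, which is the whole argument for shipping the compute to the data instead.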
This, Tso thinks, is yet to happen, though the signs are there that it is coming faster than he suggests. Indeed, it can be argued that what Cloudian is up to may be one of the key precursors, along very similar lines to the Pure Storage/Nvidia AIRI appliance combo announced last year.
We met to talk about storage technology, and in particular a new partnership formed with disk maker Seagate. This is an important marker on the Cloudian roadmap, but only as an implementation step towards a bigger goal.
Part of that implementation process has been the addition of hardware – storage appliances – to what is fundamentally a software company. As Tso pointed out, the reasoning is simple. Users are always looking for ease of use and ease of consumption:
If they have more than five petabytes of storage, those become ripe customers for us, because they get tired of how hard it is to manage storage when it gets to that scale.
The appliances are pretty much standard Intel server boxes, currently made by Quanta, but because high-volume data is the core of their purpose, the actual storage devices are crucial, and this is where the latest deal with Seagate comes in. Seagate shares Cloudian's view that the private cloud market is growing faster than public cloud, and that a crossover in volumes is coming soon.
Tso now feels Seagate has come up with a spinning-disk solution that will help that process: Heat-Assisted Magnetic Recording (HAMR), a technology it has had in development for a while now. According to Tso, this should soon be delivering 40 to 50 terabytes in a single 3.5-inch drive, which he considers makes it absolutely ripe for the marketplace Cloudian targets. This does not rule out solid-state drives, however, and the company's software runs on both technologies.
From Seagate’s point of view, a partnership that puts the new disk technology in an appliance has advantages, not least being the ability to get products out into the market up to a year earlier than otherwise. This is because systems vendors normally require time to qualify the performance of new drives, and also tend to require the availability of a second source manufacturer, which naturally depresses unit prices and lengthens the time to get a return on some heavy investment.
So the combination of an appliance with the latest specifications, coupled with software that fulfils a business purpose, can be a package that breaks through those restrictions for all parties, from tech developer through to end user. Tso also sees such appliances helping the MSPs take on the expanding cloud services marketplace, especially for small and medium-sized businesses:
If you’re a medium-sized IT shop, there’s probably half a dozen guys in your company that really understand the cloud, and those guys are being head-hunted, their phones are ringing off the hook and they’ll go to the next highest bidder. And what happens when they’re gone?
This market inevitability makes Tso sure the MSP community will not just survive but also take up the enormous slack that is starting to grow. It also makes him keen to demonstrate that, though Cloudian's positioning has pointed it clearly at the high end of the enterprise sector, the SMB sector – and the channel partners that will grow to service it – is now very much a key target:
Our platform can start really small. Our system is built to scale to petabytes in data centers, but we have paid attention in the engineering to make it start small. Our whole product can run with all functionality on three VMs on a laptop, assuming you have enough memory. We don't turn anybody away.
Indeed, Cloudian has acquired customers it didn't know it had through its free trial offering. Tso tells of a US bank that, unknown to the company, downloaded a 10 TB trial licence. A few months later it appeared, requesting a production licence for a secret financial trading system coupling Cloudian storage and its own analytics tools. Nor is that the only such story:
We signed a partnership with Hewlett Packard last year when they were selling a competitor's product. They said they were having a hard time maintaining and supporting this other product and would like an agreement with us. We asked them if they needed to do any testing and they said, 'no, we've had your code in our lab for several months already.'
Looking to the edge, not the hype
Tso is a great believer in the future of edge computing, though he is hardly smitten with what has become standard procedure for such developments: every vendor in the business is now hyping the hell out of being in the edge computing market, even if, in his opinion, most of them are not. However, the growing pull of data gravity makes edge computing's arrival inevitable, as it will be far easier to get compute applications to the data at the edge than to move the data to a central compute capability.
As a measure of its potential universality, he pointed to the smartphone, a tool with which everyone wants to do tasks that require large amounts of memory, processing and storage right out at the very edge of the network. And those users are happy to pay for it because it is available, and is usually pretty easy to use.
This analogy maps onto the development of edge computing quite well, despite the fact that he sees others suggesting edge computing will not be a big part of the market and will only find use in high-end, specialist applications. He noted, for example, that recent developments in virtual desktop infrastructure (VDI) services, such as the ability to work with 4K-standard streaming graphics tools, are starting to move this flexible and potentially highly secure technology well beyond the mundane, 'head office'-based business admin applications that have typified its history so far.
But he also had a view on the other side of that coin:
I think when edge computing is used to do ‘boring’ tasks out at the edge, that is when you know it’s actually coming of age and becoming real. When people use it to do crazy stuff it is still kind of hype stuff. But when you’re doing VDI, just doing stuff that people need every day, that’s when it’s actually happening.
The key things that will make edge computing commonplace are the widespread use of appliances – either hardware/software packages or containerised software implementations – coupled with the ability to define their networking interconnect to create whatever entity users require. This makes it possible to create entities with global namespaces:
I think the most important point is that we give users the same set of APIs as they would expect in a cloud – and the same for their whole ISV community. That's what is going to drive this whole adoption. We work hard to be completely compatible with the Amazon APIs because we want people to be able to run their application everywhere.
So the fundamental requirement is to ensure that applications see the same set of APIs wherever they run – out at the edge, in a hybrid environment or in a private cloud – and that a namespace spans across private data centres and into the cloud. Or as Tso puts it:
Users need to be able to get any piece of data anywhere, do what they need to do, and then save the data anywhere, so that someone else can consume it.
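As a concrete illustration of what that Amazon-compatible API means in practice, here is a minimal sketch using boto3, the standard AWS SDK for Python. The endpoint URL, bucket names, keys and credentials are hypothetical placeholders of mine, not real Cloudian values; the point is that the edge store and the public cloud are driven through identical client calls.

```python
# A minimal sketch of the "same APIs everywhere" idea, using boto3.
# The endpoint URL, buckets, keys and credentials below are hypothetical
# placeholders, not real Cloudian values.
import boto3

# Client pointed at an S3-compatible store running at the edge.
edge = boto3.client(
    "s3",
    endpoint_url="https://storage.edge-site.example.com",  # hypothetical
    aws_access_key_id="EDGE_KEY",
    aws_secret_access_key="EDGE_SECRET",
)

# Client pointed at the public cloud; same API, different endpoint.
cloud = boto3.client("s3")  # defaults to AWS S3

# "Get any piece of data anywhere..." -- read from the edge store...
obj = edge.get_object(Bucket="sensor-data", Key="2019/03/readings.csv")
body = obj["Body"].read()

# "...and then save the data anywhere, so that someone else can consume it."
cloud.put_object(Bucket="central-analytics", Key="edge/readings.csv", Body=body)
```

Because both stores speak the same protocol, the application code does not change as data and workloads move between edge and core – only the endpoint does.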
My take
Here, arguably, is one of the important components in the development of edge computing, for the 'appliance' will be to the likes of Kubernetes as Kubernetes is to Docker containers – the next level of functional abstraction, combining the highest of technologies to work with the largest, most complex workloads while reconciling the contradictions of functional complexity, operational flexibility and good old-fashioned ease of use. And within reason, users will be able to drop those appliances in just where they are needed, for as long as they are needed.