How Moore’s Law helps Pure Storage offer users a 'Pure Pays' option

Martin Banks, October 27, 2023
Summary:
Pure Storage's commitment to solid state data storage technologies means its customers can take advantage of new pricing options under which the vendor may pay some of the service costs. This is possible because semiconductor chips keep going down in price and energy consumption while going up in capacity.


The dynamics underlying the rules businesses must work by when providing themselves with industrial-scale data storage are changing rapidly, right down to the fundamentals of how a business now measures its storage requirements.

The changes were outlined at a recent roundtable discussion at Pure Storage's conference in London's East End. They lie behind the company's new plans for the way users pay for the services they receive and the technologies they exploit.

Moore's Law lies behind most of these rule changes, as it does behind most things in the digital world. Pure's Founder and Chief Visionary Officer, John 'Coz' Colgrove, was on hand to talk through some of them. First, however, Prakash Darji, General Manager of the company's Digital Experience Business Unit and the man responsible for its Evergreen programme, ran through the news from the event - the changes involving the magic word: 'payment'.

You pay, they may pay too

Darji said that, thanks to the energy efficiency of the company's solid state storage technologies - with claimed reductions of 85% in energy use and carbon emissions, plus up to 95% less rack space taken up - the company is now committing to pay its customers' additional over-contract power and rack space costs. This offer comes together with new service guarantees, flexible financing, enhanced resilience, and AI-powered service capabilities as part of its Evergreen subscription-based services. Darji explained:

It's eligible for all new subscriptions starting in October, and customers that we work with at the time of renewal. We're not interested in micro payments, so we've started with a $300K minimum. It is upfront, and the customer can choose whether it's direct payment or service credits. We've found that some people prefer cash, while for some people a lot of that has to do with budget politics and outcomes - in terms of whether the CFO is going to take the money or if it's the spreadsheet that makes the money. So we offer both options.

The Evergreen services come in three forms: Evergreen//One, the base level Storage as a Service offering; Evergreen//Flex, which offers users more hardware control and usage-based payment plans; and Evergreen//Forever, which is self-describing: buy it once and pay a subscription just for updates. The new payment offer applies to the first two, with Pure taking responsibility for the associated power and rack unit costs of running the storage. Darji added:

The one-time, upfront payment can be made directly as cash or via service credits, is based on fixed kilowatt-hour (kWh) and Rack Unit (RU) rates, and is scaled to the customer's geographic location and contract size. Expanded guarantees support customers who opt to own their storage via an Evergreen//Forever subscription.
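As a rough illustration of how such a one-time payment might be computed, here is a minimal sketch; the kWh and RU rates, regional multipliers and figures are entirely hypothetical, not Pure's published pricing:

```python
# Hypothetical sketch of the upfront payment described above. All rates
# and multipliers are invented for illustration; Pure's actual fixed
# kWh/RU rates and regional scaling are not public in this article.

HYPOTHETICAL_KWH_RATE = 0.15      # $ per kWh, fixed for the contract
HYPOTHETICAL_RU_RATE = 20.0       # $ per rack unit per month
REGION_MULTIPLIER = {"us-east": 1.0, "eu-west": 1.3, "apac": 1.2}

def upfront_payment(kwh_per_month: float, rack_units: int,
                    region: str, contract_months: int) -> float:
    """Estimate a one-time payment covering power and rack space
    over the life of the contract, scaled by region."""
    power_cost = kwh_per_month * HYPOTHETICAL_KWH_RATE * contract_months
    space_cost = rack_units * HYPOTHETICAL_RU_RATE * contract_months
    return (power_cost + space_cost) * REGION_MULTIPLIER[region]

# Example: a 36-month contract drawing 2,000 kWh/month in 6 rack units.
print(f"${upfront_payment(2000, 6, 'eu-west', 36):,.2f}")
```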

The Power and Space Efficiency Guarantee supports Evergreen//Forever customers looking to consume less power, reduce their energy costs and store more data in a smaller footprint. In this context the company is moving away from reporting in terabytes alone and has opted instead for the more precise efficiency metric of Watts per Tebibyte (TiB). So, if the guaranteed Watts/TiB or TiB/Rack figures are not met, Pure Storage will cover the cost. This joins the Energy Efficiency guarantee already available as an Evergreen//One Service Level Agreement.

Tebibytes, by the way, are calculated using Base-2 rather than the more common Base-10. A Terabyte is defined as 10¹² bytes - 1,000,000,000,000 bytes. A Tebibyte is defined as 2⁴⁰ bytes, which is a good bit bigger at 1,099,511,627,776 bytes.
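The arithmetic is easy to check, and a few lines of Python also show how the Watts/TiB metric mentioned above works; the array figures used are made up for illustration:

```python
# Base-10 terabyte vs Base-2 tebibyte.
TB = 10**12                      # 1,000,000,000,000 bytes
TiB = 2**40                      # 1,099,511,627,776 bytes
print(f"A TiB is {TiB / TB:.4f}x a TB")     # ~1.0995, i.e. ~10% bigger

# The Watts/TiB efficiency metric: a hypothetical array drawing 1,200 W
# with 800 TiB of usable capacity (illustrative figures only).
watts, capacity_tib = 1200, 800
print(f"{watts / capacity_tib:.2f} W/TiB")  # 1.50 W/TiB
```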

Darji also announced new developments in data resiliency, designed to help customers deal with ransomware, data protection and the hot topic of Disaster Recovery. Known as Pure Protect, it is an on-demand disaster recovery service for VMware VMs in native AWS. Darji said:

It's there for failover and failback, from customer primary data centres to Amazon. And the failover happens from VMware environments to a native Amazon environment that actually doesn't require VMware. It actually converts the VMware SDDC to native AWS and supports failover and failback.

We've also introduced a concept that allows people to score their environment. It's actually somewhat simple, a comment on 'How well am I?'. Because, you know, even security is one of those things you can spend to get, but are you happy to do a functional inspection?

The idea behind this development is that it is always possible to improve the resiliency score, and he quipped that this would not be done just by suggesting a look at Pure Storage capabilities. It will cover a wide range of options as the company onboards data about both existing and new partner vendors. It will also look at anomaly detection, such as when an anomaly happened and what the attack patterns were, all particularly useful in detecting ransomware.

The major point is, how do you characterise your labour efficacy, that's the point of what we're trying to do. How do you reduce the amount of activities people need to take? And one of the major things we're doing is policy-driven lifecycle, so endpoints, the arrays in the data centre, are Evergreen-aware.
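Pure has not published how the resiliency score is computed. As a purely illustrative sketch, a score of this kind is typically a weighted aggregate of configuration checks; every check name and weight below is invented for the example:

```python
# Purely illustrative resiliency "score" - Pure has not published its
# actual formula; the checks and weights here are invented.

CHECKS = {
    "immutable_snapshots_enabled": 0.30,
    "replication_configured":      0.25,
    "dr_runbook_tested":           0.25,
    "anomaly_detection_enabled":   0.20,
}

def resiliency_score(environment: dict) -> float:
    """Weighted fraction of passed checks, scaled to 0-100."""
    return 100 * sum(weight for check, weight in CHECKS.items()
                     if environment.get(check, False))

# An environment with snapshots and replication, but no tested runbook:
env = {"immutable_snapshots_enabled": True, "replication_configured": True}
print(f"{resiliency_score(env):.1f}")   # 55.0
```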

‘Cos we can

The reason Pure is able to make these pricing changes is the hardware technology now being used, as set out at the meeting by John 'Coz' Colgrove. The application of Moore's Law to Flash memory technology has led to the inevitable: storage capacity per device has shot up, while the cost per unit of storage capacity - and the power consumed - has plummeted. That has meant that the days of hard disk storage devices are almost over: they can no longer match the capacity or the cost of Solid State Drives.
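A toy compounding model shows why the economics tilt so decisively; the doubling period and starting price here are illustrative, not real flash-market data:

```python
# Toy model: if flash density doubles roughly every two years while the
# price of a die stays flat, cost per terabyte halves on the same cadence.
# Starting values are illustrative, not real market figures.
cost_per_tb, years, doubling_period = 100.0, 10, 2
for year in range(0, years + 1, doubling_period):
    print(f"year {year:2d}: ${cost_per_tb:7.3f}/TB")
    cost_per_tb /= 2   # $100 at year 0 falls to ~$3.13 by year 10
```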

In addition, like all solid state devices, they tend to 'suffer' from what has traditionally been called 'infant mortality': if they are going to fail, they usually die at the first test probe. If they work, they work for their expected lifetime. According to Colgrove, Flash chips can be expected to work for some 30 years. In practice, earlier failures are most often down to other parts of the memory package, such as corroded connections or an external leak. To be on the safe side, Pure puts a 10-year lifetime on its memory modules. Colgrove said:

We have direct flash modules in the field that are eight years old, and we're still seeing 0.2% annual failure rates. Why is it way more reliable? Well, what is there to break? There are no moving parts, nothing to wear out. The thing that breaks is the software, and the real magic of this device is not what we put in, it's what we took out.

One of the reasons we like electric cars is you don't have a carburettor, you don't have a fuel injection system, you don't have a crankshaft, you don't have a bunch of cylinders and all these things that go wrong.
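Taking the quoted failure rate at face value, a back-of-the-envelope check shows what it implies over Pure's 10-year module lifetime, assuming (simplistically) independent, constant annual failures:

```python
# Probability a module survives its 10-year warranty at a constant
# 0.2% annual failure rate; independence of years is assumed.
afr = 0.002
survival_10y = (1 - afr) ** 10
print(f"{survival_10y:.4f}")   # ~0.9802: roughly 98% of modules survive
```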

The company has even removed some of the software problems by dispensing with drive firmware and the complications of per-device Flash translation. With its modules, NAND is never overwritten in place: writes go to freshly erased blocks, an approach that reduces the number of erase and re-write cycles each device endures, the one area where Flash has a weak point.

Each module works with a Flash translation layer built into the controller, which Colgrove describes as a small file system that remaps the data stored across all modules. He claims this approach provides much greater efficiency, much better data access and much lower failure rates.
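Pure has not published the internals, but conceptually a global translation layer of this kind is a remap table from logical addresses to physical (module, block) locations, with erased blocks recycled across the whole pool. A minimal sketch, with all structures hypothetical:

```python
# Conceptual sketch of a controller-level flash translation layer: a
# remap table from logical block addresses to (module, block) locations.
# Illustrative only - not Pure's actual DirectFlash internals.

class GlobalFTL:
    def __init__(self, modules: int, blocks_per_module: int):
        # Pool of erased physical blocks across every module.
        self.free = [(m, b) for m in range(modules)
                     for b in range(blocks_per_module)]
        self.map = {}   # logical address -> (module, block)

    def write(self, lba: int, data: bytes) -> None:
        """Never overwrite in place: take a fresh erased block, then
        retire the old one to the erase pool (payload routing omitted)."""
        module, block = self.free.pop(0)
        old = self.map.get(lba)
        self.map[lba] = (module, block)
        if old is not None:
            self.free.append(old)   # old block queued for erase and reuse

ftl = GlobalFTL(modules=4, blocks_per_module=1024)
ftl.write(42, b"hello")
ftl.write(42, b"world")     # remapped; no in-place NAND overwrite
print(ftl.map[42])
```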

He also talked about one important change in the way storage is measured. The increasing pressure coming from the energy consumption of data storage is obliging users to factor it into their planning much more directly. So while budgets are still planned in dollars, the next most important factor is no longer the number of terabytes or petabytes of storage needed, nor the number of racks required.

It is now the number of megawatts the system is going to consume. And when dollars per megawatt-hour (MWh) is the all-important sum, the increasing capacity of Solid State Drives, coupled with their declining cost per byte, plays an increasingly vital role.
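To put numbers on that, a quick illustration of why megawatt-hours dominate the sum; the power draw and electricity price are hypothetical:

```python
# Hypothetical annual energy bill for a storage estate, to show why
# $/MWh now leads the planning conversation. Figures are illustrative.
draw_mw = 0.5                     # average continuous draw: 500 kW
price_per_mwh = 120.0             # $ per megawatt-hour
hours_per_year = 24 * 365
annual_cost = draw_mw * hours_per_year * price_per_mwh
print(f"${annual_cost:,.0f} per year")   # $525,600
```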

 
