Why you might need bare naked motherboards in oil

SUMMARY:

Putting servers in oil seems ready-made to provoke questions such as ‘Would you like fries with that?’ But Dutch start-up Asperitas is using the oil to cool the servers, not cook them, is doing it in a way that has a lot of long-term economic arguments in its favour, and may be arriving just as new technologies come along that could exploit it.

Here’s an interesting, if not exactly novel, approach to cooling servers that could have some valuable economic benefits, and it just might have appeared at a time when established models of data center facilities design are reaching the point where they can be superseded, and server technologies and architectures are starting to take a step-function forward.

The story concerns a Netherlands-based start-up, Asperitas, and its immersed computing approach to server cooling. This is genuinely interesting, not so much for the technology itself as for the long-term economic and ecological arguments that spin around it. It is not often that CIOs and data center facilities managers can feel that they are doing ‘good’ as well as doing their job, but the immersed computing model introduced by Asperitas is likely to give them that feeling.

The idea is simple enough. Take a server motherboard, together with its power supply, associated disk drive and whatever, and instead of mounting it in a chassis and casing designed for air cooling – along with multiple cooling fans and ducting – dunk it in a bath of dielectric oil. ‘Dunk’ is a bit unfair, of course. The motherboards are mounted on purpose-designed panels and frames that then fit vertically into tanks of the oil.

This basic approach, oil cooling, is not new, and it is fair to say that so far, it has not managed to supplant air-cooling or water-cooling as mainstream data center temperature management technologies. There are, however, reasons to suspect that this time it might just find its moment, not least because datacentres are looking at the possibilities for some major upgrades to their resources over the next year or three.

The oil itself is not anything special. It is not just harmless but is actually edible. The company describes it as a thin, Vaseline-type product that is widely used in products ranging from face creams to foodstuffs. It brings a number of advantages to the party, not least its ability to absorb 1,500 times as much heat as air. It also provides a medium through which the waste heat can be captured in forms that make it re-usable, and potentially revenue-generating.
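
As a rough way of seeing where a figure of that order comes from, the sketch below compares the volumetric heat capacity of air with that of a typical light mineral oil. The density and specific heat values are standard textbook numbers, not figures supplied by Asperitas, so treat the result as an order-of-magnitude check rather than a specification.

```python
# Rough sanity check on the '1,500 times the heat of air' claim: compare the
# volumetric heat capacity of air with that of a light mineral oil, i.e. how much
# heat one cubic metre of each coolant absorbs per degree of temperature rise.
# All figures are typical textbook values (assumptions), not Asperitas's own data.

air_density = 1.2        # kg/m^3, air at roughly room temperature
air_cp = 1.005           # kJ/(kg*K), specific heat capacity of air
oil_density = 850.0      # kg/m^3, assumed for a light mineral oil
oil_cp = 1.9             # kJ/(kg*K), assumed specific heat of mineral oil

air_volumetric = air_density * air_cp    # about 1.2 kJ/(m^3*K)
oil_volumetric = oil_density * oil_cp    # about 1,600 kJ/(m^3*K)

ratio = oil_volumetric / air_volumetric
print(f"Oil absorbs roughly {ratio:,.0f} times as much heat per unit volume as air")
# With these values the ratio comes out around 1,300, the same order of magnitude
# as the 1,500x figure quoted for the coolant.
```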

Finally, the heat absorption capacity means that devices on the motherboard can be more easily, and far more cheaply, maintained at near-optimum operating temperature, with a much narrower fluctuation band than is possible with air-cooling systems. The advantage of this is that ordinary commodity motherboards can be run much harder, with a higher workload throughput, than is currently possible. In addition, because the oil coolant is in direct contact with the devices and connections, issues such as connection corrosion (a common cause of board failure) are pretty much eliminated. So reliability and workload both go up, reducing long-term data center operating costs.

A future by-product of this is that existing designs of High Performance Computing (HPC) motherboards – with densely packed, high-performance devices on them – can then be used alongside commodity boards without the need for expensive, specialised cooling plant and management systems. This opens up the possibility of HPC resources being mixed in with commodity systems, increasing the resources that service providers have available to re-sell, and giving the manufacturers of such motherboards a much bigger marketplace to pitch at. That would hasten the ‘trickle down’ of both high-performance devices and denser board-packing models into the commodity market.

Economics and the bare naked server

Not only are there no fans sucking air through server boxes (and indeed no boxes), there are also no pumps required. The motherboards and associated components such as power supplies are effectively screwed to the panel which, in its carrying frame, is then inserted vertically into a tank of oil, ‘au naturel’. This vertical mounting means that any frame can be extracted at any time, for example should a board need repair or upgrading. The important point here is that the tank does not need draining of oil, so all ‘hot standby’ systems management processes can be maintained.

The cooling process itself uses natural convection to move cool oil from the bottom of the tank to the top, extracting heat from the boards as it flows upward over them. At the top, the oil flows down side channels and through water-filled heat exchangers, back to the bottom. The hot water then goes to whatever heat capture process is selected.
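
To give a feel for the water side of that loop, here is a minimal back-of-envelope energy balance using the standard Q = ṁ·cp·ΔT relation. The 20 kW per-tank load and the water temperatures are illustrative assumptions for the sake of the arithmetic, not Asperitas specifications.

```python
# Minimal sketch of the water-side energy balance for one immersion tank, using
# Q = m_dot * c_p * delta_T. The tank load and water temperatures below are
# illustrative assumptions, not published Asperitas figures.

tank_it_load_kw = 20.0   # assumed heat output of one fully loaded tank, kW
water_cp = 4.18          # specific heat capacity of water, kJ/(kg*K)
water_in_c = 30.0        # assumed water temperature entering the heat exchanger, C
water_out_c = 45.0       # assumed temperature of the 'hot' water leaving it, C

delta_t = water_out_c - water_in_c
flow_kg_per_s = tank_it_load_kw / (water_cp * delta_t)   # kg/s of water required

# 1 kg of water is roughly 1 litre, so kg/s converts directly to litres/s.
print(f"Water flow needed to carry {tank_it_load_kw:.0f} kW: "
      f"{flow_kg_per_s:.2f} kg/s (about {flow_kg_per_s * 3600:,.0f} litres per hour)")
# With these numbers: roughly 0.32 kg/s, or around 1,150 litres per hour of 45 C
# water available for whatever heat-capture process is selected downstream.
```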

At the show, Asperitas had a fully loaded and running 48-board ‘complete datacentre’, with the water passing through a convection cooler mounted in the wall of the stand, sucking in air from under the stand floor and releasing it out of the top to the atmosphere.

The economics of it are, however, where things can get interesting. Latest estimates suggest that the global community of datacentres now consumes 5% of all electricity produced around the world, and this consumption is growing fast. So energy use and management is now a, pardon the pun, ‘hot topic’. The Asperitas approach looks as though it could cut into these and other costs in a number of ways.

For a start, there is no need for a chassis or ‘pizza box’, so no need for cooling fans and specialised designs of ducting and device layout. We are talking bare naked motherboards here, so the compute resource itself can be cheaper. The oil itself is not pumped, which reduces energy consumption, and because of the tighter management of the overall heat envelope, the traditional problem of datacentre hot spots will, the company claims, disappear. Basically, the need for a CRAC (Computer Room Air Conditioning) system can be taken off the CIO’s and Ops Manager’s checklist, as can the increasingly complex CRAC management systems that direct airflows and control the curtaining around ‘hot aisles’ in a data center.
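
To put some very rough numbers on that, the sketch below compares the non-IT (largely cooling) overhead of a conventional CRAC-cooled facility with an immersion-cooled one, using PUE as the yardstick. The PUE values, the 1 MW IT load and the electricity price are all assumptions chosen for the arithmetic, not figures from Asperitas.

```python
# Hedged, illustrative comparison of cooling overhead: a conventional air-cooled
# facility versus an immersion-cooled one. The PUE figures, the 1 MW IT load and
# the electricity price are assumptions for the sake of the sums, not measured data.

it_load_kw = 1000.0      # assumed IT load of the facility, kW
pue_air = 1.6            # assumed PUE with CRAC-based air cooling
pue_immersion = 1.15     # assumed PUE with pump-free immersion cooling
price_per_kwh = 0.10     # assumed electricity price, EUR per kWh
hours_per_year = 8760

overhead_air = it_load_kw * (pue_air - 1.0)              # non-IT power, air-cooled
overhead_immersion = it_load_kw * (pue_immersion - 1.0)  # non-IT power, immersion

saved_kw = overhead_air - overhead_immersion
saved_eur = saved_kw * hours_per_year * price_per_kwh

print(f"Cooling/overhead power saved: {saved_kw:.0f} kW")
print(f"Approximate annual saving: EUR {saved_eur:,.0f}")
# With these assumptions: about 450 kW less overhead, roughly EUR 394,000 a year,
# before any revenue from selling the captured waste heat is counted.
```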

It then offers the potential to start adding revenue sources generated by exploiting the waste heat (much of which currently gets lost to the atmosphere). Indeed, datacentres are now coming under pressure to re-use that energy in some way. As the heat is contained in water, this is quite easy to do and is now the focus of technology development in its own right. Even the old gambit of providing heating to local homes and businesses can be a worthwhile source of additional revenue.
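
As a crude illustration of that district-heating gambit: almost every watt a server draws ends up as heat, so a facility’s IT load is also, in effect, its heat output. The household demand figure below is an assumed round number, so the result is indicative only.

```python
# Rough illustration of the district-heating argument. A server turns virtually all
# of its electrical input into heat, so an assumed 1 MW IT load is also roughly
# 1 MW of continuous low-grade heat. The per-home heat demand is an assumed average.

it_load_mw = 1.0                 # assumed IT load, MW (essentially all becomes heat)
hours_per_year = 8760
home_heat_demand_mwh = 12.0      # assumed annual heat demand of one home, MWh

heat_per_year_mwh = it_load_mw * hours_per_year        # warm water energy per year
homes_heated = heat_per_year_mwh / home_heat_demand_mwh

print(f"Captured heat per year: {heat_per_year_mwh:,.0f} MWh")
print(f"Roughly equivalent to the heating demand of {homes_heated:.0f} homes")
# With these assumptions a single 1 MW facility could, in principle, cover the
# heating needs of several hundred homes, ignoring distribution losses and the
# seasonal mismatch between heat supply and demand.
```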

The Asperitas model also maps onto some new technologies that are likely to start appearing more and more in datacentres. The trend towards hyperconverged environments, for example, where smaller, standardised server appliances are densely packed, looks ready-made for the immersed computing approach. Indeed, the demo system was running Supermicro motherboards, making it one of the first to move in that direction.

Asperitas is also understood to be in discussion with the likes of HPE, and it does not take too much imagination to suspect that the immersed computing model might fit rather well with the upcoming in-memory processing technology that HPE is due to announce some time this year.

The one downside is, of course, something of an economic elephant in the room – the heavy commitment to air-based cooling systems and the investments already made in the underlying infrastructure and facilities that this requires. Moving to the immersed computing model could be expensive. Not only will it be a much bigger job than swapping out an old style of racking and slotting in a new one; there will also be major re-plumbing required.

However, as the new technologies of hyperconvergence and (maybe) in-memory processing gain traction with users, so the pressure will grow on both on-premise datacentres and Co-Lo/public/cloud service providers to add such resources. Here, Asperitas should have a chance to gain a foothold.

My take

For many this will be a side issue, and far less important than new processors or major disruptions in applications development. But in its own way this could be just as disruptive to how datacentres get put together in future. It may even become the reason towns end up fighting to have datacentre owners build in their locale – a bit of local employment, and shedloads of spare heat to exploit.

Image credit - Freeimages.com