The issue of sustainability, primarily in terms of energy consumption, but also all the associated sub-plots that spin round that issue, is common to all energy consumers, whether they are individuals at home or global businesses. So it makes sense that the solutions which emerge to reduce energy consumption should be shared as widely as possible.
As a major global consumer of energy, the IT sector is starting to face up to the collective nature of this issue. That could be seen at the recent Amsterdam conference hosted by the Cloud Native Computing Foundation (CNCF), which brought together a number of specialist technical events focussing on different elements of open source technologies, with the Kubernetes-focused KubeCon taking pride of place.
It was no surprise, therefore, that sustainability issues were not a primary target for discussion at such an event, but the fact they were mentioned at all at such a fiercely 'techie' gathering shows that the community is now aware that it both contributes to the problem and can play a role in developing and implementing solutions.
The CNCF opening keynote featured two presentations on how sustainable computing can improve energy efficiency, with empowering end users to manage their own sustainability being an important target. They are the ones who bear the direct costs of energy consumption, and when it comes to operating with cloud service providers, particularly multiple service providers contributing to a hybrid environment, cost management is already becoming one of the major issues users face.
Start at the low-hanging fruit
Addressing this issue directly were Kara Delia and Huamin Chen from Red Hat. Kara is a Principal Community Architect of Financial Services and Sustainability, and Huamin is a Senior Principal Software Engineer. Their starting point was that open source technologies are, by their very nature, now probably the best way to build the critical transparency, traceability, and inclusive decision making and collaboration that will be essential to the innovations needed to engineer on-going sustainability into all aspects of IT operation and consumption.
Not surprisingly, the initial targets for Red Hat are the obvious 'low-hanging fruit' of energy conservation and CO2 emissions reduction. Estimates vary, but the International Energy Agency suggests that one percent of all global energy is now being consumed by data centers, and curtailing, if not reducing, that consumption is a prime target.
But the CNCF Executive Director, Priyanka Sharma, made it clear at the conference that this was only the beginning of a longer journey, with sustainability set to span many areas and touch upon almost every aspect of system architecture, from chip design to application development. To that end, communities that encompass all of those aspects need to be built if the right policy developments, research and future investment are to follow.
One of the key early developments out of Red Hat is Project Kepler (Kubernetes-based Efficient Power Level Exporter), which is aimed at providing end users with insight into the energy consumption of their workloads. Kepler is a Prometheus exporter that uses eBPF to probe CPU performance counters and Linux kernel tracepoints, providing the base data from which users can build energy-efficient workload scheduling, tuning, and scaling. That data can then be fed into ML models to estimate energy consumption across a wide range of computing platforms.
The project community is led by Huamin Chen, who explains:
Kepler was created to capture the energy used by workloads running on your Kubernetes clusters, whether they are on-prem or in public clouds, whether they are bare metal or virtual machines. It uses eBPF to extract information from the system and hardware, the sysfs file system, and the operating system, and uses this information to build machine learning models that are later used to calculate the energy used by workloads, whether they are processes, containers, or Kubernetes pods.
His expectation is that these metrics will enable a lot of innovation amongst the user community, allowing them to tune workloads, schedule them to balance when results are required against times of lower energy consumption, and auto-scale them to make best use of the peaks and troughs of energy consumption.
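Turning those metrics into something a budget holder can use is largely simple arithmetic. The sketch below is illustrative only: it assumes a cumulative energy counter in joules of the kind Kepler exports per container, and converts the difference between two scrapes into kilowatt hours.

```python
# Illustrative only: assumes a cumulative per-workload energy counter in
# joules, as Kepler's Prometheus metrics provide. Names here are invented
# for the example, not taken from the Kepler API.

JOULES_PER_KWH = 3_600_000  # 1 kWh = 3.6 megajoules


def joules_to_kwh(joules: float) -> float:
    """Convert an energy reading in joules to kilowatt hours."""
    return joules / JOULES_PER_KWH


def workload_energy_kwh(counter_start: float, counter_end: float) -> float:
    """Energy used by a workload between two scrapes of a cumulative counter."""
    return joules_to_kwh(counter_end - counter_start)


# A pod whose counter advanced by 7.2 MJ over the sampling window used 2 kWh.
print(workload_energy_kwh(1_000_000, 8_200_000))  # 2.0
```

From there, multiplying by an electricity tariff gives the kind of per-workload cost figure that makes the budget-management case concrete.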
As much as anything this can be seen as a useful future tool for budget management, creating an environment where at least the expensive peaks of energy consumption, those caused by poor allocation of workloads rather than unavoidable demand, can be managed out of existence. Chen says:
The Kepler project is very collaborative. We have contributors from IBM, Intel, and many other people. We have donated the project and are waiting for Sandbox approval. The project consists of people from all around the world: North America, South America, European countries and Asian countries.
Schedule workloads by carbon load
The second presentation on sustainability came from Jorge Palma, Principal PM Lead with Microsoft's Azure Kubernetes Service. His aim was to look further along the track, from energy saving and carbon footprints towards building more sustainable software for the longer term. He started by reminding the audience of the principles set out by the Green Software Foundation.
Number one, energy efficiency. Simply put, use the least amount of energy possible. For us, that's all about making sure our code is efficient. Number two, hardware efficiency. Everything that we do or use today requires carbon to be produced and disposed of; using fewer devices and fewer components in our solutions will ensure that that embodied carbon is less. Three is carbon awareness, which is all about the distinction between clean energy and dirty energy, where clean energy generally describes renewables and dirty energy is what we commonly call fossil fuels. It's all about knowing, based on that, when to do more or when to do less.
He started with a look at the concept of carbon intensity, a measure of the amount of carbon released in producing the energy used, usually represented as gCO2/kWh (grammes of CO2 per kilowatt hour). This varies because of the variability of many renewable energy sources such as wind and sunlight, and that variability still has to be supplemented from sources of dirty energy.
This is a metric that also applies to Kubernetes and other cloud native workloads. Recent developments out of the CNCF include the Sandbox, effectively an incubator area where new projects and developments can be tried out. Out of that has come KEDA (Kubernetes-based Event Driven Autoscaling), a component which has now been extended by the addition of a carbon-aware scaler. The main idea behind it is a concept called demand shaping: scaling workloads based on the carbon intensity of the infrastructure where they are running. A classic example of this issue is the geographic region of the cloud service provider running them.
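The demand-shaping idea can be sketched in a few lines: cap the number of replicas a workload may scale to according to the current carbon intensity. The thresholds and ceilings below are invented for illustration; the actual KEDA carbon-aware scaler reads equivalent tiers from a Kubernetes ConfigMap rather than hard-coded Python.

```python
# A minimal sketch of demand shaping, with made-up tiers. Each entry is
# (upper bound on gCO2/kWh, max replicas allowed at or below that intensity).
CARBON_TIERS = [
    (100, 20),           # clean grid: scale freely
    (300, 10),           # mixed supply: moderate ceiling
    (float("inf"), 3),   # dirty supply: run only what is essential
]


def max_replicas(intensity_g_per_kwh: float) -> int:
    """Return the replica ceiling for the current carbon intensity."""
    for bound, ceiling in CARBON_TIERS:
        if intensity_g_per_kwh <= bound:
            return ceiling
    return 0  # unreachable given the catch-all tier, kept for safety


def shaped_replicas(desired: int, intensity_g_per_kwh: float) -> int:
    """Apply demand shaping: honour demand only up to the carbon-based ceiling."""
    return min(desired, max_replicas(intensity_g_per_kwh))


print(shaped_replicas(15, 80))   # 15: clean grid, demand fully met
print(shaped_replicas(15, 500))  # 3: dirty grid, workload throttled back
```

The effect is that the same demand signal produces different scaling decisions depending on how clean the electricity is at that moment, which is precisely the behaviour the scaler is designed to add.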
In a world of increasingly hybrid cloud environments, ranging from one of the big three such as Azure, through co-lo hosters of bare metal, to on-prem facilities, being able to find the most appropriate or cost-effective location for every workload will become an increasingly important capability. With the daily, even hourly, variability of gCO2/kWh that is possible, particularly across Europe and heavily populated areas of the USA, having a real-time handle on the carbon intensity of all workloads at all times becomes a major tool in budget management.
This is particularly important across the European Union, where companies are becoming obliged to report their IT operational costs, and hence energy consumption and carbon footprint, as a 'raw material' cost open to inspection by customers. It can become a justification for those customers to move to other suppliers.
KEDA achieves this by reading the service providers' data from a config map of both point-in-time and historical data to determine what the gCO2/kWh is going to be. Palma says:
We also know that not all providers might actually have that data available today. So we also created another operator that builds on top of the Green Software Foundation's Carbon Aware SDK. This is an open source wrapper for public sources of data, so that the config map will be created and available for everyone to use and kick the tyres on this new scaler.
Conferences such as these, setting out to enlighten people on a cornucopia of different developments across an amazingly diverse range of programming technologies and application areas, can only scrape the surface of all that is happening, but these two examples demonstrate that sustainability issues are moving centre stage. It is not difficult to predict that there will be lots more to come, for the delivery costs of creating and exploiting data are only going to become an ever more significant part of the corporate budget.