As we know, the environmental impact of IT comes mostly from the manufacture of equipment and the energy it consumes. On this second dimension, efforts and innovations focus mainly on the energy efficiency of data centers and the equipment installed in them, with improvements on the hardware side and through virtualization. Improvements on the software side are rarer. Eco-design practices – promoted in particular by the Green Software Foundation – make it possible to develop applications that consume less energy. However, it is nearly impossible to know the carbon footprint of an application in production, especially if it runs on a server that hosts other applications, let alone in the cloud.
In an article published in January, the researchers proposed addressing this with new standards and mechanisms that let software determine the energy consumption of the infrastructure on which it runs, whether in the cloud or in the organization’s own data center. Called Treehouse, their project aims to “lay the foundation for a new software infrastructure that treats energy and carbon as a primary resource, alongside traditional computing resources such as compute, memory, and storage”. To achieve this, the authors argue that developers need three main abstractions: energy-usage monitoring, the ability to make energy/performance trade-offs, and a new unit of execution.
AI models to estimate carbon footprint
According to the researchers, the energy consumed by an application is spread across so many components (user code, operating system, storage, network, etc.) that it is almost impossible to measure with hardware mechanisms alone. Instead, they suggest using artificial intelligence: training a model to estimate the energy consumed by the application from various metrics (bandwidth, storage volume, CPU cycles, hardware topology) and laboratory measurements, then combining these estimates with information from the data center itself (RPC-level attribution, carbon footprint of the energy used).
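To make the idea concrete, here is a minimal sketch of such an estimator. All names, coefficients, and metrics are illustrative assumptions, not Treehouse’s actual model: a function trained offline on lab measurements maps runtime metrics to energy, and a second step converts that energy to carbon using the data center’s grid intensity.

```python
# Hypothetical sketch: a model fitted on lab measurements predicts
# application energy from runtime metrics; names and numbers are invented.

def estimate_energy_joules(metrics, weights, intercept):
    """Linear model: energy ~= intercept + sum(weight_i * metric_i)."""
    return intercept + sum(weights[k] * v for k, v in metrics.items())

def carbon_grams(energy_joules, grid_intensity_g_per_kwh):
    """Convert energy to carbon using the grid's carbon intensity."""
    kwh = energy_joules / 3.6e6  # 1 kWh = 3.6 MJ
    return kwh * grid_intensity_g_per_kwh

# Illustrative coefficients, as if fitted on laboratory measurements.
weights = {"cpu_cycles": 2e-9, "bytes_sent": 5e-9, "bytes_stored": 1e-10}
metrics = {"cpu_cycles": 4e9, "bytes_sent": 1e8, "bytes_stored": 1e10}

energy = estimate_energy_joules(metrics, weights, intercept=0.5)      # 10.0 J
footprint = carbon_grams(energy, grid_intensity_g_per_kwh=400)
```

A real model would be far richer (hardware topology, shared-resource interference), but the shape is the same: metrics in, joules out, then carbon out.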
Inform the system software of application performance constraints
The researchers point out that measures to improve energy efficiency (disabling processor boost, moving data less frequently between RAM and solid-state storage, using less power-hungry hardware accelerators, etc.) usually hurt performance, and vice versa. While application developers can arbitrate between energy and performance, it is harder for systems-software designers, who do not know what applications are trying to do. To address this, the authors propose an interface that informs system software of applications’ performance constraints in the form of SLAs. “For very latency-sensitive operations, it may still make sense to use the best-performing solutions, even at a high energy cost. But when there is slack in user expectations, we can use this flexibility to select the most energy-efficient solution that still meets user needs,” the researchers explain.
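The selection logic this interface enables can be sketched in a few lines. This is an illustration of the idea, not the paper’s API; the option names and numbers are invented: given a latency SLA, the runtime picks the cheapest option that meets it, falling back to raw performance only when the SLA cannot be met.

```python
# Illustrative sketch of SLA-driven selection (names and figures invented):
# system software picks the lowest-energy option that satisfies the SLA.

def pick_option(options, latency_sla_ms):
    """options: list of (name, latency_ms, energy_joules).
    Return the lowest-energy option meeting the SLA, else the fastest."""
    feasible = [o for o in options if o[1] <= latency_sla_ms]
    if feasible:
        return min(feasible, key=lambda o: o[2])
    return min(options, key=lambda o: o[1])  # SLA unmeetable: best performance

options = [
    ("boosted_cpu", 2.0, 5.0),      # fast but power-hungry
    ("accelerator", 8.0, 1.2),      # slower, far less energy
    ("low_power_core", 20.0, 0.8),  # slowest, cheapest
]

pick_option(options, latency_sla_ms=10.0)  # -> ("accelerator", 8.0, 1.2)
pick_option(options, latency_sla_ms=3.0)   # -> ("boosted_cpu", 2.0, 5.0)
```

With a loose SLA the runtime exploits the slack and saves energy; with a tight one it falls back to the power-hungry fast path, exactly the trade-off the researchers describe.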
Finer-grained trade-offs
Finally, the researchers propose a new unit of execution – the microfunction – to combat two energy-intensive phenomena: on the one hand, the tying-up of under-used resources (compute, memory, storage) that become unavailable to other applications, especially in cloud/serverless environments; on the other, stacks that pile on excessive layers of software to ease developers’ work. Operating at the microsecond scale, this new programming model would make it possible both to provision resources at a finer grain – selecting the most efficient option for a given SLA – and to better consolidate the resources demanded by applications. “Microfunctions represent a timescale large enough to perform a useful amount of work (i.e., several thousand cycles), but short enough to quickly rebalance resource usage as workloads change.”
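As a toy illustration of the concept (everything here is invented for the example, not Treehouse code): a workload is decomposed into short units that each declare their resource needs, so the runtime only “holds” resources while a unit actually runs, instead of pinning a coarse allocation for the whole job.

```python
# Hypothetical illustration of microfunctions: short units of work that
# declare their resource needs, letting the runtime rebalance allocations
# at fine granularity. All names and numbers are invented.

def run_pipeline(microfunctions, data):
    """Run a chain of (function, cores_needed) units; resources are only
    in use while each unit runs, then returned to the shared pool."""
    peak = 0
    for fn, cores in microfunctions:
        peak = max(peak, cores)  # cores held only for this unit's duration
        data = fn(data)
    return data, peak

pipeline = [
    (lambda x: [v * 2 for v in x], 1),  # decode: 1 core
    (lambda x: sorted(x), 4),           # transform: 4 cores, briefly
    (lambda x: sum(x), 1),              # reduce: 1 core
]

result, peak_cores = run_pipeline(pipeline, [3, 1, 2])
# result == 12, peak_cores == 4
```

The point is the shape, not the arithmetic: because the 4-core demand lasts only microseconds, the scheduler can lend those cores elsewhere the rest of the time rather than reserving them for the pipeline’s entire lifetime.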
>> Learn more: Treehouse: A Case For Carbon-Aware Datacenter Software