Towards Energy Auto-Tuning

Sebastian Götz, Claas Wilke, Matthias Schmidt, Sebastian Cech and Uwe Assmann
sebastian.goetz@acm.org, {claas.wilke, matthias.schmidt, sebastian.cech, uwe.assmann}@tu-dresden.de
Technische Universität Dresden, Department of Computer Science, Software Technology Group, D-01062 Dresden

Abstract—Energy efficiency is gaining more and more importance, since well-known ecological reasons lead to rising energy costs. In consequence, energy consumption is now also an important economic criterion. The energy consumption of single hardware resources has been thoroughly optimized for years; now software becomes the major target of energy optimization. In this paper we introduce an approach called energy auto-tuning (EAT), which optimizes the energy efficiency of software systems running on multiple resources. Optimizing more than one resource leads to higher energy savings, because communication costs can be taken into account. For example, if two components run on the same resource, the communication costs are likely to be lower than when they run on different resources. The best results can be achieved in heterogeneous environments, as different resource characteristics enlarge the synergy effects gainable by our optimization technique. EAT software systems derive all possible distributions of themselves on a given set of hardware resources and reconfigure themselves to achieve the lowest energy consumption possible at any time. In this paper we describe our software architecture to implement EAT.

I. INTRODUCTION

The energy use of servers is steadily rising and will soon surpass the asset costs, as the U.S. Environmental Protection Agency (EPA) shows in its report on server and data center energy efficiency [1]. According to this report, the energy consumption of servers doubled from 2000 to 2006 and will double again until 2011. More than 100 billion kWh (approx. $7.4 billion) will be the annual electricity consumption in the U.S.
in 2011. The EPA recommends research and development activities to improve the energy efficiency of servers and also points out the necessity to investigate potential savings by power management across multiple resources [1, p. 118].

The energy used by hardware resources should be proportional to their utility for end users. In other words, if the end user does not utilize resources by using software running on top of them, the resources should not use any energy. This issue is known under the term energy proportionality [2]. Recent work shows that we are far from energy proportionality. In [3], Tsirogiannis et al. reveal that over 50% of the overall power consumption is caused by servers with idle load. Moreover, they observed that resource utilization does not directly correlate with energy use: the actual energy use depends on the kind of task the resource has to accomplish. They show a 60% variation of power consumption for the same level of resource utilization. These results substantiate the nonexistence of energy proportionality. (On the other hand, they show that energy is proportional to performance, with idle power as an offset.) Tsirogiannis et al. also point out the optimization of multiple, jointly used resources as a promising direction [3, p. 242].

Several problems derive from the energy-unawareness of IT infrastructures used to run distributed software. First, the energy consumption of software components is hard to predict, because it depends on the hardware the components are running on and the user who interacts with them. The user's demand as well as the user's utility w.r.t. service requests need to be considered. Energy efficiency is the balance between user utility and energy consumption. However, resource usage by software components and the user's workload are not explicitly taken into account in current software development processes.
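The relation observed by Tsirogiannis et al., energy roughly proportional to performance with idle power as an offset, can be illustrated with a minimal linear power model. All wattage values below are invented for illustration; they are not measurements from [3]:

```python
def power(utilization, p_idle=100.0, p_peak=250.0):
    """Power draw in watts at a utilization in [0, 1].

    Linear model with an idle-power offset; p_idle and p_peak are
    hypothetical example values, not figures from the cited study.
    """
    return p_idle + (p_peak - p_idle) * utilization


def proportionality_gap(utilization, p_peak=250.0):
    """Actual power divided by ideally proportional power (p_peak * u).

    A value of 1.0 would mean perfect energy proportionality.
    """
    ideal = p_peak * utilization
    return power(utilization, p_peak=p_peak) / ideal if ideal > 0 else float("inf")
```

In this toy model an idle server still draws 100 W (40% of peak), and at 50% utilization it draws 175 W, a proportionality gap of 1.4. This idle offset is exactly what keeps real servers from being energy proportional.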
Hence, to predict the energy consumption of software components and correlate it with the user's requirements, an energy-aware software architecture and runtime environment are required.

In this paper we introduce an approach for energy auto-tuning (EAT) software systems. Such systems derive all possible distributions of their software on a given set of hardware resources and reconfigure themselves to achieve the lowest energy consumption possible at any time. Our focus is distributed, component-based applications. We therefore propose the Cool Component Model (CCM) together with the Energy Contract Language (ECL) as appropriate means to capture energy-aware software architectures. Furthermore, we propose THEATRE as our energy-aware runtime environment. We thus also investigate a development process for energy-aware software systems and do not solely focus on energy auto-tuning, because energy auto-tuning requires such an energy-aware software architecture.

The rest of this paper is structured as follows. In Section II we introduce a running example. Afterwards, we highlight requirements for EAT systems in Section III. Our proposed software architecture and runtime environment are presented in Section IV. Finally, we outline related work in Section V and conclude and give pointers for future work in Section VI.

II. VIDEO SERVER EXAMPLE

In this section we present an example of a small component-based system that can be energy-optimized using EAT. The example was adapted from a case study depicted in [4]. It describes a simple video server scenario consisting of two software components: a VideoServer and a VideoPlayer (cf. Fig. 1). The VideoServer is located at a server whereas the VideoPlayer is deployed at clients. The VideoServer provides services to select and transmit videos that are stored on a FileServer. The VideoPlayer can receive video streams via a network

[Unedited authors' version of the GSTF press publication; find the original paper at http://dl.globalstf.org.]
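The EAT search described in the introduction, enumerating all distributions of components over resources and accounting for cross-resource communication, can be sketched for this two-component example as follows. All energy figures in the cost table are invented for illustration; in the actual approach, models such as the CCM and ECL contracts would supply such data:

```python
import itertools

# Hypothetical running costs per (component, resource) placement,
# e.g. in joules per request. Invented numbers for illustration only.
run_cost = {
    ("VideoServer", "server"): 40.0, ("VideoServer", "client"): 90.0,
    ("VideoPlayer", "server"): 70.0, ("VideoPlayer", "client"): 25.0,
}
comm_cost = 30.0  # extra cost when the components sit on different resources

components = ["VideoServer", "VideoPlayer"]
resources = ["server", "client"]


def total_energy(placement):
    """Energy of one distribution: running costs, plus communication
    cost if VideoServer and VideoPlayer end up on different resources."""
    energy = sum(run_cost[(c, r)] for c, r in placement.items())
    if placement["VideoServer"] != placement["VideoPlayer"]:
        energy += comm_cost
    return energy


# Enumerate all |resources|^|components| distributions, keep the cheapest.
best = min(
    (dict(zip(components, choice))
     for choice in itertools.product(resources, repeat=len(components))),
    key=total_energy,
)
```

For these numbers the cheapest distribution keeps the VideoServer on the server and the VideoPlayer on the client (95 J, including the 30 J communication cost), even though co-locating both on one resource would avoid the communication cost. Note that this exhaustive search grows exponentially with the number of components, so larger systems would require heuristics.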