Improving Dynamic Voltage Scaling Algorithms with PACE

Jacob R. Lorch and Alan Jay Smith
Computer Science Division, EECS Department
University of California at Berkeley
Berkeley, CA 94720-1776
{lorch,smith}@cs.berkeley.edu

ABSTRACT

This paper addresses algorithms for dynamically varying (scaling) CPU speed and voltage in order to save energy. Such scaling is useful and effective when it is immaterial when a task completes, as long as it meets some deadline. We show how to modify any scaling algorithm to keep performance the same but minimize expected energy consumption. We refer to our approach as PACE (Processor Acceleration to Conserve Energy) since the resulting schedule increases speed as the task progresses. Since PACE depends on the probability distribution of the task's work requirement, we present methods for estimating this distribution and evaluate these methods on a variety of real workloads. We also show how to approximate the optimal schedule with one that changes speed a limited number of times. Using PACE causes very little additional overhead, and yields substantial reductions in CPU energy consumption. Simulations using real workloads show it reduces the CPU energy consumption of previously published algorithms by up to 49.5%, with an average of 20.6%, without any effect on performance.

1. INTRODUCTION

The growing popularity of mobile computing devices has made energy management important for modern systems, because users of these devices want long battery lifetimes. A relatively recent energy-saving technology is dynamic voltage scaling (DVS), which allows software to dynamically vary the voltage of the processor. Various chip makers, including Transmeta, AMD, and Intel, have recently announced and sold processors with this feature. Reducing CPU voltage can reduce CPU energy consumption substantially.
Performance suffers, however: over the range of allowed voltages, the highest frequency at which the CPU will run correctly drops approximately proportionally to the voltage (f ∝ V). Since the main component of power consumption is proportional to V²f, and energy per cycle is power divided by frequency, energy per cycle is proportional to V², and thus to frequency squared (E ∝ f²). So a CPU can save substantial energy by running more slowly; e.g., it can run at half speed and thereby use 1/4 the energy to run for the same number of cycles.

Two factors limit the utility of trading performance for energy savings. First, a user wants the performance for which he paid. Second, other components, such as the disk and backlight, also consume power [12]. If they stay on longer because the CPU runs more slowly, the overall effect can be worse performance and increased energy consumption. Thus, one should reduce the voltage only when it will not noticeably affect performance. A natural way to express this goal is to assign a soft deadline to each of the computer's tasks. (We call a deadline soft when a task should, but does not have to, complete by this time.)

* This material is based upon work supported by the State of California MICRO program, Cisco Systems, Fujitsu Microelectronics, IBM, Intel Corporation, Maxtor Corporation, Microsoft Corporation, Quantum Corporation, Sony Research Laboratories, Sun Microsystems, Toshiba Corporation, and Veritas Software.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ACM SIGMETRICS 2001, 6/01, Cambridge, MA, USA. Copyright 2001 ACM 1-58113-334-0/01/06...$5.00.
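The proportionalities used above can be spelled out in a short derivation under the standard CMOS power model (the same relations the text states in prose):

```latex
% Dominant (dynamic) CMOS power, and the frequency-voltage relation:
P \propto V^2 f, \qquad f \propto V \;\Rightarrow\; P \propto f^3 .
% Energy per cycle is power divided by frequency:
E_{\mathrm{cycle}} = \frac{P}{f} \propto V^2 \propto f^2 .
% A task of W cycles run at constant speed f therefore uses
E(f) \propto W f^2 , \qquad\text{so}\qquad E(f/2) = \tfrac{1}{4}\,E(f).
```

The last line is the half-speed example from the text: the task takes twice as long but consumes a quarter of the energy.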
For example, user interface studies have shown that response times under 50–100 ms do not affect user think time [21]; we can thus make 50 ms the deadline for handling a user interface event. Also, multimedia operations with limited buffering, e.g. on real-time streams, need to complete processing a frame in time equal to one over the display rate, and there is no need for any earlier completion. When goals can be codified this way, the job of a DVS algorithm is to run the CPU just fast enough to meet the deadline with high probability.

Our soft deadline's key property is that if the task completes by then, its actual completion time does not matter. Thus, if we run the task more slowly, but it still completes by its deadline, performance is the same. Our primary goal is to improve DVS algorithms so that performance remains the same but energy consumption goes down. Current DVS algorithms incorrectly assume that a constant speed consumes minimal energy even when task work requirements are unknown. But, we will show that in this common case expected energy consumption is in fact minimized by increasing speed as the task progresses. We therefore call our approach for improving algorithms PACE: Processor Acceleration to Conserve Energy.

We will give a formula for a speed schedule that minimizes expected energy consumption without changing performance. But, there are two problems with using this formula in practice. First, it depends on the probability distribution of a task's work requirement. Second, the schedule gives speed as a continuous function of time but real CPUs cannot change speed continuously.

To solve the first problem, we must estimate the distribution of task work from the requirements of previous, similar tasks. We describe and compare various methods for this and find some general and practical methods that work well on a variety of real workloads.
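To make the two ingredients concrete, the following sketch estimates a task's work distribution from the work requirements of previous, similar tasks (here, simply an empirical tail probability over recent samples) and derives an increasing speed schedule from it. The function name `pace_speed` and the exponent −1/3 are illustrative assumptions here, the exponent being what a cubic power model (P ∝ f³) would suggest; the paper's actual formula and estimation methods are developed in later sections.

```python
def empirical_tail(samples, w):
    """Estimated probability that a task's work requirement exceeds w cycles,
    based on the measured work of previous, similar tasks."""
    return sum(1 for s in samples if s > w) / len(samples)

def pace_speed(samples, w, k=1.0):
    """Hypothetical PACE-style speed once the task has completed w cycles.

    As the tail probability 1 - F(w) shrinks (the task survives longer than
    most past tasks), the speed rises; the -1/3 exponent is the choice a
    cubic power model would suggest, used here purely for illustration."""
    tail = empirical_tail(samples, w)
    if tail == 0:
        tail = 1 / len(samples)  # task outlived every sample; cap the speed
    return k * tail ** (-1 / 3)
```

For example, with past work requirements of 10, 20, 30, and 40 million cycles, the schedule starts at the base speed and accelerates as the task passes each observed quantile, which is exactly the "increase speed as the task progresses" behavior PACE exploits.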
For the second problem, we present and test heuristics for approximating the schedule with a piecewise-constant one. Using trace-driven simulations of real workloads, we show that our improvements significantly reduce the energy consumption of previously published algorithms without changing their performance. We also show that our approach is practical and efficient.
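One simple heuristic of the kind described, sketched here under our own assumptions rather than taken from the paper's later sections: split the task's work into a fixed number of intervals and, within each interval, run at the constant speed that takes the same time as the continuous schedule does over that interval (time over a work interval is the integral of 1/speed with respect to work).

```python
def piecewise_schedule(speed_fn, total_work, n_steps, samples_per_step=100):
    """Approximate a continuous speed schedule with n_steps constant speeds.

    speed_fn maps work completed to speed. Each step's constant speed is
    chosen so the step takes the same time as the continuous schedule over
    that work interval, preserving the overall completion time."""
    step_work = total_work / n_steps
    schedule = []
    for i in range(n_steps):
        w0 = i * step_work
        # Midpoint-rule integration of dw / speed(w) over the interval.
        dw = step_work / samples_per_step
        dt = sum(dw / speed_fn(w0 + (j + 0.5) * dw)
                 for j in range(samples_per_step))
        schedule.append(step_work / dt)  # constant speed with equal step time
    return schedule
```

Because each step preserves the continuous schedule's time over its interval, the deadline behavior is unchanged; only the (small) energy penalty of running at a constant speed within each interval remains, and it shrinks as the number of allowed speed changes grows.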