Toward a Flexible, Environmentally Conscious, On Demand High Performance Computing Service

G.B. Barone, R. Bifulco, V. Boccia, D. Bottalico, R. Canonico
University of Naples Federico II, Italy
gbbarone@unina.it, roberto.bifulco2@unina.it, vania.boccia@unina.it, davide.bottalico@unina.it, roberto.canonico@unina.it

L. Carracciuolo
Italian National Research Council, Italy
luisa.carracciuolo@cnr.it

Abstract—This work concerns the planning and implementation of an on demand computing service able to achieve the right trade-off between management cost reduction, environmental sustainability and user satisfaction.

Keywords-internet services; computing on demand; green IT; cloud computing; virtualization; High Performance Computing;

I. INTRODUCTION

On demand computing is a model in which computing resources are made available to the user as needed. It can be considered a valid solution for people who need a huge amount of resources to reduce their Total Time to Solution, but cannot bear the costs of HPC systems. Those costs are related, e.g., to energy consumption, cooling systems and the specialized know-how required for hardware/software upgrades and maintenance. On the other hand, from the point of view of the resources manager, it is important to find the right trade-off between an effective use of resources and cost reduction, also in terms of energy consumption. Certainly, both users and resources managers would benefit from the environmental sustainability of new environment-conscious HPC systems. The purpose of this paper is to describe our first experience in designing and implementing a flexible infrastructure, built on the basis of both physical and virtual resources, in the name of energy saving and overall cost reduction, and then of a more efficient usage of resources [7].
We note that the term "flexible" is used to indicate an infrastructure where the number and type of resources may change on the basis of the user requirements and the actual workload. In section II we describe the architecture of the planned on demand computing service. In section III we describe a case study related to the implementation of the service on the SCoPE Computing Infrastructure of the University of Naples [6]. In section IV we give some information on our future work.

Figure 1. Service schema

II. SERVICE PLANNING

Local or distributed large scale systems offer on demand computational services to different, and often heterogeneous, user communities. These services usually have to meet constraints, defined in SLAs (Service Level Agreements), that require resources to be always "online" even if they are not effectively used. However, the power supply and cooling of computational resources weigh very heavily on total energy consumption and hence on management costs. Therefore, the owners of large scale systems have to find a compromise between the sustainability of such systems and overall user community satisfaction. We have worked on the planning and deployment of a software solution, whose architecture is represented in figure 1, that implements an energy-aware Resource Management