HPC Performance and Energy-Efficiency
of Xen, KVM and VMware Hypervisors
Sébastien Varrette*, Mateusz Guzek†, Valentin Plugaru*, Xavier Besseron‡ and Pascal Bouvry*
* Computer Science and Communications (CSC) Research Unit
† Interdisciplinary Centre for Security, Reliability and Trust
‡ Research Unit in Engineering Science
6, rue Richard Coudenhove-Kalergi, L-1359 Luxembourg, Luxembourg
Sebastien.Varrette@uni.lu, Mateusz.Guzek@uni.lu, Valentin.Plugaru@gmail.com,
Xavier.Besseron@uni.lu, Pascal.Bouvry@uni.lu
Abstract—With growing concern about the considerable energy consumed by HPC platforms and data centers, research efforts are targeting green approaches with higher energy efficiency. In particular, virtualization is emerging as the prominent approach to mutualize the energy consumed by a single server running multiple VM instances. Yet it remains unclear whether the overhead induced by virtualization and the corresponding hypervisor middleware suits an environment as demanding as an HPC platform. In this paper, we analyze from an HPC perspective the three most widespread virtualization frameworks, namely Xen, KVM and VMware ESXi, and compare them with a baseline environment running in native mode. We performed our experiments on the Grid'5000 platform by measuring the results of the reference HPL benchmark. Power measurements were also performed in parallel to quantify the potential energy efficiency of the virtualized environments. Overall, our study offers novel arguments in favor of in-house HPC platforms running in native mode, without any virtualization framework.
I. INTRODUCTION
Many organizations have departments and workgroups that benefit (or could benefit) from High Performance Computing (HPC) resources to analyze, model, and visualize the growing volumes of data they need to conduct business. Indeed, HPC remains at the heart of our daily life in domains as diverse as molecular dynamics, structural mechanics, computational biology, weather prediction or "simply" data analytics. Likewise, fields such as applied research, digital health or nano- and bio-technology will not be able to evolve tomorrow without the help of HPC. In this context, and despite the economic crisis, massive investments ($1 billion or more) were approved in 2012 by the leading countries and federations (the US, Russia, China, India and the European Union) for programs aiming to build an Exascale platform by 2020.
This ambitious goal comes with a growing concern for
the considerable energy consumed by HPC platforms and data
centers, leading to research efforts toward green approaches
with higher energy efficiency. At the hardware level, novel
solutions or architectures are currently under investigation,
typically in the direction of accelerators (Tesla K20, Intel Phi)
or low-power processors (ARM) coming from the mobile or
embedded device market. At an intermediate level (between
software and hardware), virtualization is emerging as the
prominent approach to mutualize the energy consumed by
a single server running multiple Virtual Machine (VM) instances. However, little is understood about the potential overhead in energy consumption and the throughput reduction for virtualized servers and/or computing resources, or whether virtualization suits an environment as demanding as an HPC platform.
In parallel, this question is tied to the rise of Cloud Computing (CC), increasingly advertised as THE solution to most IT problems. Several voices (most probably commercial ones) express the wish that CC platforms could also serve HPC needs and eventually replace in-house HPC platforms. In the secret hope of discrediting this last idea with concrete and measurable arguments, we initiated a general study of Cloud systems running HPC workloads.
In this paper, we evaluate and model the overhead induced by several virtualization environments (often called hypervisors), which are at the heart of most if not all CC middleware. In particular, we analyze the High Performance Linpack (HPL) benchmark performance and the energy profile of three widespread virtualization frameworks, namely Xen, KVM and VMware ESXi, running multiple VM instances, and compare them with a baseline environment running in native mode. This study extends our previous work in the domain [1]: this time, we focus on larger experiments (closer to an HPC environment), whereas our initial article modeled a single VM instance. As with that seminal paper, it is worth mentioning the difficulty of finding fair comparisons of all these hypervisors in the literature. For instance, in the few cases where the VMware suite is involved, the study is generally carried out by the company itself.
The experiments presented in this paper were conducted on the Grid'5000 platform [2], which offers a flexible and easily monitorable environment that helped refine the holistic model for the power consumption of HPC components proposed in [1]. Grid'5000 also features a unique environment as close as possible to a real HPC system, even if we were limited in the number of resources we managed to deploy for this study. Thus, while the context and results presented in this article do not reflect a true large-scale environment (we never exceed 96 nodes, whether virtual or physical, in the presented experiments), we believe the outcomes of this study are of benefit to the HPC community.
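The combination of measurements described above, sustained HPL throughput and power sampled in parallel, is typically summarized as performance per watt (GFlops/W), the metric popularized by the Green500 list. A minimal sketch of that computation follows; the function names and all numeric values are purely illustrative and are not taken from the paper's results.

```python
def energy_efficiency(gflops: float, avg_power_watts: float) -> float:
    """Performance per watt (GFlops/W) for a benchmark run."""
    return gflops / avg_power_watts

def energy_to_solution(avg_power_watts: float, runtime_s: float) -> float:
    """Total energy consumed over the run, in joules."""
    return avg_power_watts * runtime_s

# Hypothetical numbers for illustration only: a native run reaching
# 100 GFlops at 250 W vs. a virtualized run reaching 80 GFlops at 240 W.
native = energy_efficiency(100.0, 250.0)       # 0.4 GFlops/W
virtualized = energy_efficiency(80.0, 240.0)   # ~0.333 GFlops/W
relative_loss = 1 - virtualized / native       # efficiency loss of virtualization
```

A lower GFlops/W for the virtualized run would indicate that the hypervisor's throughput penalty is not offset by a corresponding reduction in power draw.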
2013 25th International Symposium on Computer Architecture and High Performance Computing
978-1-4799-2927-6/13 $26.00 © 2013 IEEE
DOI 10.1109/SBAC-PAD.2013.18