Web service micro-container for service-based applications in Cloud environments

Mohamed Mohamed¹, Sami Yangui¹, Samir Moalla¹ and Samir Tata²

¹ Faculté des Sciences de Tunis, 2092 Tunis EL Manar, Tunisia
mohamedmohamed@orange.tn, yangui.sami@yahoo.fr, samir.moalla@fst.rnu.tn
² Institut TELECOM, TELECOM SudParis, UMR CNRS Samovar, 91011 Evry Cedex, France
Samir.Tata@it-sudparis.eu
Abstract— Cloud computing describes a new supplement, consumption, and delivery model for IT services based on Internet protocols; it typically involves provisioning of dynamically scalable and often virtualized resources. In this paper, we propose to design and implement a new service micro-container that addresses scalability by reducing memory consumption and response time. We propose to dedicate a service micro-container to each deployed service, thus avoiding the processing limits of classical service containers. Our micro-container is evaluated and compared to conventional Web containers to highlight our contribution.
I. INTRODUCTION
Web services can be seen as a building block for achieving electronic B2B transactions. More and more companies use Web services to carry out transactions with their partners and/or to offer on-line services. For instance, in a McKinsey Quarterly survey [11] conducted in 2007 on more than 2800 companies worldwide, 80% were using or planning to use Web services. Among these companies, 78% said that Web services technology was among the three most important technologies for their business.
To put their services online, companies can set up their own infrastructure or adopt the new economic model offered by Cloud computing. Cloud computing describes a new supplement, consumption, and delivery model for IT services based on Internet protocols; it typically involves provisioning of dynamically scalable and often virtualized resources. There is no consensus on a definition of Cloud computing, even one relying on the definitions of some twenty experts [8]. Foster et al. [10] define Cloud computing as a large-scale distributed computing paradigm driven by economies of scale, in which a pool of abstracted, virtualized, dynamically-scalable, managed computing power, storage, platforms, and services is delivered on demand to external customers over the Internet.
Although there is no consensus on a definition of the Cloud concept, a few key points are common to these definitions. First, Cloud computing is a specialized distributed computing paradigm [10]; it differs from traditional ones in that (1) it is massively scalable, (2) it can be encapsulated as an abstract entity that delivers different levels of service to customers outside the Cloud, (3) it is driven by economies of scale, and (4) it can be dynamically configured (via virtualization or other approaches) and delivered on demand.
In this work, we aim to show that classical service containers such as Axis2 are not adequate for service management in a Cloud computing context. Indeed, classical service containers are not in line with the characteristics of Cloud environments: they are not designed for elasticity. For example, the memory occupied by these classical containers is bounded by the memory of the physical node on which they are deployed, even when virtualization techniques are used.
In this paper, we propose to design and implement a new service micro-container that makes the tasks previously performed by classical service containers possible in a Cloud environment.

The micro-container we propose should be as lightweight as possible, for an optimal usage of Cloud resources, and should ensure good performance in terms of response time and memory consumption. Regarding scalability, we propose to dedicate a service micro-container to each deployed service. We will thus have as many micro-containers as deployed services, and so avoid the processing limits of classical service containers. The new limit on memory consumption becomes the total memory of all physical nodes of the Cloud environment; one can push this limit even further by considering hybrid Cloud environments. The actual limit on service deployment would therefore be the limit of all available physical resources in the Cloud. In addition, the deployment process becomes very simple: it amounts to enclosing a service within its own micro-container.
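The "one micro-container per service" idea described above can be illustrated with a minimal sketch: instead of a shared heavyweight container, each service is enclosed in its own tiny listener. All names here (MicroContainer, the echo service) are illustrative assumptions, not the paper's actual implementation.

```python
import http.server
import threading
import urllib.request


class MicroContainer(http.server.ThreadingHTTPServer):
    """A minimal container that hosts exactly one service callable."""

    def __init__(self, service, port=0):
        self.service = service  # the single service this container wraps
        super().__init__(("127.0.0.1", port), _Handler)


class _Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Dispatch every request to the container's single service.
        body = self.server.service(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass


def echo_service(path):
    return f"echo:{path}"


# Deploying a service = enclosing it in its own micro-container;
# a second service would simply get a second MicroContainer instance.
container = MicroContainer(echo_service)
threading.Thread(target=container.serve_forever, daemon=True).start()
port = container.server_address[1]
reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/hello").read().decode()
print(reply)  # echo:/hello
container.shutdown()
```

Because each container carries only one service and no shared dispatch machinery, many such instances can be spread across the physical nodes of a Cloud, which is the scalability argument made above.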
This paper is organized as follows. Section II presents a state of the art of Cloud computing environments and the motivations for our work. In Section III, we present the design and architecture of our service micro-container. Section IV presents the implementation and experiments of our realization. Finally, in Section V we conclude our paper and present our future work.
II. STATE OF THE ART AND MOTIVATION
Cloud providers offer different APIs to access their Cloud services. We can cite, among others, the following APIs: the Amazon API [12], GoGrid’s API [13], Sun’s Cloud API [14] and VMware’s vCloud [15]. Service Oriented Architecture (SOA) is one of the principal architectures related to Cloud computing; hence we notice the increasing use of “Everything
2011 20th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises
1524-4547/11 $26.00 © 2011 IEEE
DOI 10.1109/WETICE.2011.51
61