Resource Provisioning Using Batch Mode Heuristic Priority with Round Robin Scheduling

Gaurav Raj 1, Navreet Singh 2, Dr. Dheerendra Singh 3
1 PhD Scholar, Computer Science Department, Punjab Technical University, Punjab, India
2 M. Tech., Computer Science Department, Lovely Professional University, Punjab, India
3 Professor and Head of CSE Department, SUSCET, Tangori, India
er.gaurav.raj@gmail.com, navreet_lehal@hotmail.com, hodcse@suscolleges.com

Abstract—Millions of users try to access data through different types of applications, such as online shopping, which increases the load on a single server. Increased load on a server reduces throughput, which creates a strong need to develop and maintain an efficient system with an appropriate load balancing algorithm, so that important information can be retrieved within a reasonable response time. The main objective of our study is to propose a load balancing algorithm that can balance requests coming from users residing in different locations who retrieve data from a distributed database environment using virtualization techniques.

Keywords—Cloud computing, load balancing, virtualization, distributed database.

I. INTRODUCTION

Cloud computing is a style of computing in which dynamically scalable (and mostly virtualized) resources are provided as a service over the Internet. The National Institute of Standards and Technology defines cloud computing as: "Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction." Many users and consumers feel compelled to migrate to the Cloud in order to stay competitive and to benefit from the improvements assumed to be offered by the corresponding infrastructure.
The Cloud market is currently a highly dynamic business field, with new providers and business models arising continually. This raises many questions about what "a Cloud" actually is and, implicitly, what to expect from it in the short- and long-term future, making this a vital time for deciding the future of Cloud computing.

Load balancing ensures that every processor in the system, or every node in the network, does an approximately equal amount of work at any instant of time. The load can be CPU load, memory capacity, delay, or network load. Load balancing is therefore the procedure of distributing the load among the various nodes of a distributed system to improve both resource utilization and job response time, while avoiding a situation in which some nodes are heavily loaded while others are idle or doing very little work.

Fig. 1. Layered Virtualization Technology Architecture

Today, network bandwidth, response time, delay, and data transfer cost have become the main challenges in cloud computing. These challenges also arise on the storage cloud side, where the load must be balanced across the various nodes or servers. The random arrival of load in such a cloud computing environment can cause some servers to become heavily loaded. We have to distribute the load across the various servers in such a manner that the overall performance of the servers and the network remains at its maximum, even at peak load. Equal distribution of the load improves the performance of the

Gaurav Raj et.al / International Journal of Engineering and Technology (IJET), ISSN: 0975-4024, Vol 5 No 3, Jun-Jul 2013, p. 2959
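The round-robin scheduling named in the title can be sketched minimally: requests are assigned to servers in a fixed rotation, so each node receives an approximately equal share of the work. The sketch below is an illustration of plain round-robin dispatch, not the paper's full batch-mode heuristic priority algorithm; the server names and request identifiers are invented for the example.

```python
from itertools import cycle
from collections import Counter

def round_robin_assign(servers, requests):
    """Assign each incoming request to the next server in a fixed rotation."""
    rotation = cycle(servers)
    return {req: next(rotation) for req in requests}

# Hypothetical nodes and request IDs, for illustration only.
servers = ["node-1", "node-2", "node-3"]
requests = [f"req-{i}" for i in range(9)]

assignment = round_robin_assign(servers, requests)
load = Counter(assignment.values())
print(load)  # each of the 3 nodes receives 3 of the 9 requests
```

With a uniform stream of requests this yields a perfectly even count per node; the limitation motivating heuristic extensions is that it ignores the actual cost of each request and the current load on each server.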