
International Journal of Computer Science Engineering and Information Technology Research (IJCSEITR) ISSN 2249-6831 Vol. 3, Issue 3, Aug 2013, 7-14 © TJPRC Pvt. Ltd.

IMPROVED LOAD EFFICIENCY ALGORITHM IN CLOUD

AARTI KHETAN & SUBHASH CHAND GUPTA
Department of Computer Science and Engineering, Amity University, Noida, Uttar Pradesh, India

ABSTRACT

Cloud computing has emerged as a powerful concept that has considerably transformed the fields of parallel and distributed computing. The Internet consists of huge amounts of data and services that are distributed across the globe with the help of virtualized servers and machines, and this distribution can lead to the problem of deadlocks. Hence, to make this data and these services available to users, the cloud service provider must perform load balancing, which provides maximum resource utilization and increased throughput with the least response time, and prevents the virtualized servers from overloading, which would degrade their performance. This paper proposes a load balancing algorithm that helps improve business performance by reducing response time, preventing deadlocks between the virtualized servers, and supporting process migration. The paper also presents the implementation results of the proposed algorithm.

KEYWORDS: Cloud Computing, Load Balancing, Deadlock, Throughput, Response Time, Hop Time

INTRODUCTION

In order to improve the performance of business organizations, the cloud service provider must provide resources on demand. Cloud computing is based on the pay-as-you-go model, which makes it necessary for the cloud service provider to service its users' requests for resources instantly, with the least waiting time. The time required for the execution of all the jobs allocated to a processor is called its processing time, and the total amount of processing time needed is called the workload of the processor [2]. Distributing the workload in such a way that each processor performs almost the same amount of job processing at any time instant is called load balancing [1]. Some of the major goals [9] of any load balancing algorithm include:

Prioritization of Resources

The algorithm must be able to prioritize jobs before starting execution, so that time-critical jobs are serviced at the earliest.

Adaptability and Scalability

The load balancing algorithm must be able to adapt to changes in the size and topology of the cloud system, as the number of host servers may vary frequently.

Minimized Costs

A load balancing algorithm must work efficiently to minimize the cost of execution and provide the maximum increase in performance.

In the cloud there can be a situation where the number of deployed virtual servers is less than the number of queued processing requests. In such a situation, all job requests will be striving to execute themselves on at least one of the virtual servers, leading to the problem of deadlock.



The occurrence of deadlock will severely affect the business performance of the cloud system. Thus, an algorithm is required that balances the load on the virtual servers and prevents the occurrence of deadlock. This paper presents a load balancing algorithm that focuses on avoiding deadlock situations and, in case of overload, migrates load from an overloaded server to an under-utilized server. This leads to maximum resource utilization, and with maximum resource utilization we can reduce the number of job rejections that can occur due to deadlock [3].

The paper outline is as follows. Section II presents the related work, followed by Section III presenting the design architecture. Section IV details the proposed load balancing algorithm. Section V discusses the implementation and the expected results, followed by the conclusion in Section VI and the references in Section VII.

RELATED WORK

Due to the significant advancement and remarkable growth of cloud computing technology, it has been adopted by many IT organizations, which has led to the rapid growth of data centers.

The token routing algorithm [4] focuses on minimizing system cost by making tokens move around the system. Its drawback is that, due to communication overhead, load distribution information cannot be passed to the agents.

The round robin algorithm [5] divides the processes among all the processors, allocating a process to each processor in round robin order. Jobs are assigned to local processors independently of the allocations made by remote processors. The drawback of this algorithm is that even though the processing tasks allocated to each processor are approximately equal in number, they may have different processing times, so at a certain time instant one processor may be heavily loaded while the remaining processors are idle.

In the randomized algorithm [6], processes are allocated to processors chosen statistically using random numbers. This algorithm performs optimally when all processors are loaded equally. Its drawback is that performance drops when the loads allocated to the processors have different processing complexities.

The central queuing algorithm [7] is based on dynamic load distribution. A queue manager maintains a central queue of all the jobs. Whenever a request arrives at the queue manager, it allocates the first ready job in the queue to service the received request. If there are no ready jobs in the queue, the request is buffered until a job is ready to service it. The drawback of this algorithm is the increased waiting time: jobs must wait until a request is received, and requests must be buffered until a job is ready to service them.

The connection manager algorithm [8] is also based on the dynamic distribution principle. It estimates the load of each processor or server by dynamically counting the number of connections to each server; the connection counts are stored in the load balancer. A new connection to a server is established whenever a request is to be serviced, and after the request completes the connection is terminated or a time-out occurs.
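To make the contrast concrete, the following is a minimal sketch of two of the policies above, round robin [5] and connection counting [8]. The server names, the job model and the connection-counting interface are illustrative assumptions, not details taken from the cited papers.

```python
from itertools import cycle

servers = ["S1", "S2", "S3"]

# Round robin [5]: jobs are handed out in a fixed rotation, regardless
# of how long each job will actually keep its server busy.
rotation = cycle(servers)

def round_robin_assign():
    return next(rotation)

# Connection manager [8]: the load balancer counts live connections per
# server and sends each new request to the least-connected server.
connections = {s: 0 for s in servers}

def least_connections_assign():
    target = min(connections, key=connections.get)
    connections[target] += 1      # a new connection is established
    return target

def connection_closed(server):
    connections[server] -= 1      # request completed or timed out

for job in range(4):
    print(round_robin_assign(), least_connections_assign())
```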

ARCHITECTURAL DESIGN MODEL

The architectural layout of the proposed load balancing algorithm is represented in Figure 1. According to the architectural design model, the system supports a large number of users, say user 1 to user n. Each user has authorized access to the cloud resources and services, as each user is authenticated by the cloud service provider.



Each user sends its request to the cloud manager. The cloud manager is responsible for managing the requests for services made by the clients; these requests are stored in a queue data structure. The cloud manager, which resides in the data center of the cloud service provider, is also responsible for distributing the load among all the virtualized servers or processors and maintains a record of the current load status of each server. Whenever a request is received, the cloud manager parses the records to track the current load status of the virtual servers and distributes the incoming load evenly among all the processors, so that no server is overloaded at a point in time while other servers are idle. The cloud manager performs the load status check periodically and whenever a new request is received for execution.
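As a rough illustration of this bookkeeping, the sketch below models the cloud manager as a request queue plus a per-server load-status record, with a status check on every arrival. The class name, the percentage scale and the overload threshold are assumptions made for illustration.

```python
from collections import deque

class CloudManager:
    def __init__(self, server_ids):
        self.request_queue = deque()                   # client requests, FIFO
        self.load_status = {s: 0 for s in server_ids}  # load (%) per virtual server

    def receive(self, request):
        self.request_queue.append(request)
        return self.check_load()                       # status check on arrival

    def check_load(self, overload_threshold=100):
        # Periodic (or on-arrival) scan of the load-status record.
        return [s for s, load in self.load_status.items()
                if load >= overload_threshold]

manager = CloudManager(["S1", "S2", "S3", "S4"])
manager.load_status.update({"S1": 10, "S2": 35, "S3": 100, "S4": 80})
print(manager.check_load())   # -> ['S3'] for the loads in Table 1 below
```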

Figure 1: Cloud Architecture Model

While checking, if the cloud manager finds that a server is overloaded, it migrates some of its load to a server that is under-utilized at that instant of time. The selection criteria for an under-utilized server are the hop time and the waiting time [10]. By hop time we mean the time taken to migrate a job from an overloaded server to an under-utilized server. The server with the least hop time is chosen, and the load is transferred to that server. If two or more servers have the same minimum hop time, we instead compare the waiting times of those servers. Waiting time is the time after which a server will be free and ready to service a new job. The server with the least waiting time is then selected as the target server for migrating the load.

Table 1: A Sample Data Record Table

    S. No.    VS ID    Load Status
    1         S1       10%
    2         S2       35%
    3         S3       100%
    4         S4       80%



In the table above, the virtual server with VS ID S3 has a 100% current load status, which means that server S3 is fully utilized: it cannot take any more load, and some of its load needs to be transferred to an under-utilized machine, say S1 or S2. The decision of choosing the target server is made by the cloud manager based on the hop time and the waiting time.
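A small sketch of this selection rule follows, assuming hop and waiting times are known per candidate server; the numeric values below are invented for illustration.

```python
def select_target(candidates):
    """candidates maps server id -> (hop_time, waiting_time)."""
    least_hop = min(hop for hop, _ in candidates.values())
    tied = [s for s, (hop, _) in candidates.items() if hop == least_hop]
    if len(tied) == 1:
        return tied[0]
    # Two or more servers share the minimum hop time: fall back to the
    # server with the least waiting time.
    return min(tied, key=lambda s: candidates[s][1])

# S3 is fully loaded; S1 and S2 are the under-utilized candidates.
print(select_target({"S1": (4.0, 12.0), "S2": (4.0, 7.0)}))   # -> S2
```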

PROPOSED ALGORITHM: ILEA WITH PROCESS MIGRATION AND DEADLOCK PREVENTION

The proposed algorithm works in the following steps:

Step 1: All the virtualized servers in the cloud are initialized to zero, meaning they carry no load in the beginning and are available to service the clients' requests. The current load status record is maintained by the cloud manager.

Step 2: When a queue of requests is received by the cloud manager, it checks the current load status and distributes the load to each virtual server accordingly.

Step 3: After allocating the requests to the servers, the cloud manager updates the load status of each virtual server (VS).

Step 4: After a certain time interval, the cloud manager checks the load status of each virtual server. If any server is found overloaded, the cloud manager transfers the load of that overloaded server to some under-utilized server.

Step 5: The under-utilized server is chosen by finding the least hop time among all the under-utilized servers. If more than one server has the same least hop time, the decision is made on the basis of the minimum waiting time.

Step 6: After choosing the appropriate server, the cloud manager migrates the load of the overloaded server to the chosen under-utilized server, and the current load status is updated.

Step 7: The process cycle repeats until all the incoming requests are serviced.

The advantage of the proposed algorithm is that it involves less communication overhead for checking the availability of the virtual servers, because the current load status is analyzed at the same time a new request for resources arrives at the cloud manager.
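Putting Steps 1-7 together, here is a condensed, self-contained sketch. The request sizes, the 90% overload threshold and the migrate-only-the-excess policy are simplified assumptions rather than details fixed by the paper; note that comparing (hop time, waiting time) pairs as tuples reproduces the Step 5 rule, since ties on hop time are broken by waiting time.

```python
from collections import deque

def ilea(requests, servers, timings, threshold=90):
    """timings maps server id -> (hop_time, waiting_time)."""
    load = {s: 0 for s in servers}        # Step 1: every server starts idle
    queue = deque(requests)
    while queue:                          # Step 7: repeat until all serviced
        # Step 2: allocate the next request to the least loaded server.
        job = queue.popleft()
        load[min(load, key=load.get)] += job   # Step 3: status updated
        # Step 4: check for overloaded servers after the allocation.
        for s in [s for s in servers if load[s] > threshold]:
            under = {c: timings[c] for c in servers if load[c] < threshold}
            if not under:
                continue                  # no under-utilized target yet
            # Steps 5-6: least hop time, ties broken by least waiting time
            # (tuple comparison), then migrate the excess load.
            dest = min(under, key=under.get)
            excess = load[s] - threshold
            load[s] -= excess
            load[dest] += excess
    return load

print(ilea([50, 60, 120], ["S1", "S2", "S3"],
           {"S1": (4.0, 12.0), "S2": (4.0, 7.0), "S3": (6.0, 3.0)}))
# -> {'S1': 50, 'S2': 90, 'S3': 90}; the hop-time tie between S1 and S2
#    is broken by S2's smaller waiting time.
```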

EXPERIMENTS

In order to test our algorithm, we conducted an experiment on a sample workload configuration, shown in Table 2.

Table 2: Workload per User Configuration

    Work ID    Capacity of Work
    W1         500
    W2         900
    W3         1000
    W4         90
    W5         450
    W6         600

The above configuration consists of the Work ID and the capacity of work for which the user will send a request. It is assumed that all the requests arrive at the same time. We also configure the datacenter as shown in Table 3.



The datacenter configuration consists of the ID of each virtual machine (VM ID) and its capacity (VM Capacity).

Table 3: Configuration of Datacenter

    Virtual Machine ID    Capacity of Virtual Machine
    VM1                   100
    VM2                   500
    VM3                   1000
    VM4                   100
    VM5                   400
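For reference, the two configurations can be expressed as plain data; the dict representation below is ours, not the paper's simulation format.

```python
# Table 2: work IDs and requested capacities.
workloads = {"W1": 500, "W2": 900, "W3": 1000, "W4": 90, "W5": 450, "W6": 600}

# Table 3: virtual machine IDs and capacities.
vm_capacity = {"VM1": 100, "VM2": 500, "VM3": 1000, "VM4": 100, "VM5": 400}

# Fewer VMs than works, so at least two works must contend for the same
# VM, which is the deadlock risk discussed below.
assert len(vm_capacity) < len(workloads)
```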

From the above table configurations it can easily be concluded that the number of virtual machines is less than the number of works, so the possibility of deadlock arises: more than one work will try to use the same virtual machine. When we tested this configuration using the existing shortest job first scheduling (SJFS) algorithm, the response time was very high due to the deadlock situation.

Table 4: Existing Algorithm's Response Time

    Work ID    Response Time (ms)
    W1         417.67
    W2         769.34
    W3         250.04
    W4         74.40
    W5         995.34
    W6         1011.23

From the above table we can conclude that the condition of deadlock leads to a sharp increase in the response time of the requests sent by the users. The same response times are shown as a graph in Figure 2. Increased response time will ultimately lead to the rejection of work, and a higher work rejection rate will automatically deteriorate the performance of the cloud service provider (CSP). If a cloud service provider is to grow, it has to respond to works as soon as possible in order to keep its business valuable to its customers. So, the main aim of our proposed algorithm is to improve the handling of the workload and to respond to service requests quickly. Our results show that the response time has improved for the same configuration used with the existing algorithms: the response time obtained by the proposed algorithm is approximately 45-55% less than the response time of the existing algorithms. The response times for the same configuration using the proposed algorithm are shown in Table 5. From Table 5 we can see that the response time has been reduced considerably for the same configuration, which means that the work rejection rate will be lower compared to the existing algorithms. This will also enhance the business of cloud computing and cloud service providers.



Figure 2: Response Time Graph Using Existing Algorithms

Figure 3: Comparison of Existing and Proposed Algorithms

Table 5: Response Time Obtained from the Proposed Algorithm

    Work ID    Response Time (ms)
    W1         220.90
    W2         500.23
    W3         223.34
    W4         45.33
    W5         534.47
    W6         560.23

Table 5 shows the new response times for the same works. In the table we can see that the response time has been reduced to a great extent, and Figure 3 also shows that the response time is lower under the proposed algorithm. The efficiency of VM migration can also be assessed by comparing the hop time needed to move load from an overloaded VM to an under-utilized VM against the waiting time until a VM becomes available to service the request.



The waiting time is used if the time needed by a VM to become available is less than the hop time. The proposed algorithm also removes the overhead problem present in existing algorithms by analyzing VM availability at regular intervals.
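A sketch of this availability rule, with invented timing values: the effective cost of a candidate VM is its hop time, unless the VM will become free sooner than a migration could complete.

```python
def effective_cost(hop_time, waiting_time):
    # Prefer waiting if the VM frees up before a migrated job would arrive.
    return waiting_time if waiting_time < hop_time else hop_time

candidates = {"VM2": (3.0, 1.5), "VM5": (2.0, 4.0)}   # (hop, wait) in ms
best = min(candidates, key=lambda v: effective_cost(*candidates[v]))
print(best)   # -> VM2: waiting 1.5 ms beats both hop times
```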

CONCLUSIONS

Cloud computing is a very powerful concept in the field of IT and has brought about many transformations. Since cloud computing is based on distributed computing, deadlock situations may arise when allocating services to the virtual servers. To avoid such deadlock problems, a load balancing algorithm has been proposed in this paper. The proposed algorithm maximizes resource utilization by performing process migration and thus avoiding deadlocks. It also reduces the response time and provides high availability, thereby improving the business performance of the system. The existing load balancing algorithms suffer from high communication overhead, increased waiting time and limited resource utilization; the algorithm proposed in this paper attempts to overcome all of these limitations.

REFERENCES

1. Soumya Ray, Ajanta De Sarkar, "Execution Analysis of Load Balancing Algorithms in Cloud Computing", International Journal on Cloud Computing: Services and Architecture (IJCCSA), Vol. 2, No. 5, October 2012.

2. Sandeep Sharma, Sarabjit Singh, Meenakshi Sharma, "Performance Analysis of Load Balancing Algorithms", World Academy of Science, Engineering and Technology, 2008.

3. T.R. Gopalakrishnan Nair, M. Vaidehi, K.S. Rashmi, V. Suma, "An Enhanced Scheduling Strategy to Accelerate the Business Performance of the Cloud System", Proc. InConINDIA 2012, AISC 132, pp. 461-468, Springer-Verlag Berlin Heidelberg, 2012.

4. Zenon Chaczko, Venkatesh Mahadevan, Shahrzad Aslanzadeh, Christopher Mcdermid, "Availability and Load Balancing in Cloud Computing", International Conference on Computer and Software Modeling, IPCSIT Vol. 14, IACSIT Press, Singapore, 2011.

5. Zhong Xu, Rong Huang, "Performance Study of Load Balancing Algorithms in Distributed Web Server Systems", Parallel and Distributed Processing Project Report.

6. Hendra Rahmawan, Yudi Satria Gondokaryono, "The Simulation of Static Load Balancing Algorithms", 2009 International Conference on Electrical Engineering and Informatics, Malaysia.

7. Abhijit A. Rajguru, S.S. Apte, "A Comparative Performance Analysis of Load Balancing Algorithms in Distributed Systems using Qualitative Parameters", International Journal of Recent Technology and Engineering, Vol. 1, Issue 3, August 2012.

8. P. Werstein, H. Situ, Z. Huang, "Load Balancing in a Cluster Computer", Proc. of the Seventh International Conference on Parallel and Distributed Computing, Applications and Technologies, IEEE, 2010.

9. Shu-Ching Wang, Kuo-Qin Yan, Wen-Pin Liao, Shun-Sheng Wang, "Towards a Load Balancing in a Three-Level Cloud Computing Network", IEEE, pp. 108-113, 2010.

10. T.R. Gopalakrishnan Nair, M. Vaidehi, "Efficient Resource Arbitration and Allocation Strategies in Cloud Computing through Virtualization", IEEE International Conference on Cloud Computing and Intelligence Systems, pp. 397-401, September 2011, Beijing, China.


