
AN INFRASTRUCTURE FOR DYNAMIC RESOURCE SCHEDULING AND VIRTUAL MACHINE BASED PREDICTION IN THE CLOUD
Karthika.M,
P.G Student, Department of Computer Science and Engineering,
V.S.B. Engineering College, Karur, Tamilnadu

karthikamurugesan22@gmail.com


Abstract: Resource allocation and job scheduling are core functions of cloud computing. These functions depend on adequate information about available resources, and acquiring dynamic resource status information in a timely manner is of great importance for ensuring overall cloud performance. A cloud system uses this information for analyzing performance, removing bottlenecks, detecting faults, and maintaining dynamic load balancing, thereby helping cloud users obtain desired computing results by efficiently utilizing system resources in terms of minimized cost, maximized performance, or tradeoffs between cost and performance. To reduce overhead, a dynamic resource allocation and prediction system should achieve seamless fusion between cloud technologies and efficient resource scheduling and prediction strategies. This work aims at building a distributed system for cloud resource scheduling and prediction. We present the design and evaluation of a system architecture for cloud resource scheduling and prediction, and we discuss the key issues for system implementation, including machine learning-based methodologies for modeling and optimizing resource prediction models. Evaluations are executed on a prototype system. Our experimental results suggest that the efficiency and accuracy of our system meet the demands of an online system for dynamic resource utilization and prediction.
Keywords- cloud computing, dynamic load balancing, prediction, resource scheduling
1 INTRODUCTION
A Cloud is a type of parallel and distributed system
which consists of a collection of interconnected and
virtualized computers. The computing resources can be
allocated dynamically according to the requirements and
preferences of users. Because consumers may access
applications and data of the Cloud from anywhere at any
time, it is difficult for cloud service providers to
allocate cloud resources dynamically and efficiently
[1]. Physical resources include computers, processors, disks,
databases, networks, bandwidth, and scientific instruments,
while logical resources include execution, monitoring,
and application communication. The computing
resources, whether software or hardware, are virtualized
and delivered as services from providers to
users. Cloud computing removes the limitations that exist in
traditional shared computing environments and has become
a major trend in distributed computing.
It combines heterogeneous resources distributed across the
Internet, irrespective of differences between resources
in platform, hardware, software, architecture,
language, and geographical location. Such resources,
which include computing, storage, data, and communication
bandwidth resources, are aggregated
dynamically to form high-performance computing
capability for solving problems in large-scale
applications. Dynamically sharing resources gives rise
to resource contention. One of the challenging problems
is deciding the destination nodes where the
tasks of a cloud application are to be executed. From the
perspective of system architecture, dynamic
resource allocation and scheduling are the most crucial
functions of cloud computing. However, the processing
frameworks currently in use were designed for static,
homogeneous cluster setups and disregard the particular
nature of a cloud. Consequently, the allocated compute
resources may be inadequate for large parts of the
submitted job and may unnecessarily increase
processing time and cost. Resource allocation alone,
however, can only support instant resource information
acquisition; it cannot capture the dynamic variation
of resources. Resource state prediction is necessary to
fill this gap. Typical previous prediction systems can
provide both allocation and prediction functions.
However, dynamic features of cloud resources were not
taken into consideration in these design frameworks.
1.1 Review of Related Works
Many previous research projects focused on optimizing
traditional performance metrics, such as system utilization,
client incentive, and application response time, in
controlled cloud environments.


Fig 1. System architecture

They did not consider dynamic resource allocation
based on providers' load. Particular tasks of a processing
job can be assigned to different types of virtual
machines, which are automatically instantiated and
terminated during job execution. Dynamically
sharing resources gives rise to resource contention.
Job scheduling in cloud computing has attracted great
attention. Most research in job scheduling adopts a
paradigm in which a job in a cloud computing system is
characterized by its workload, deadline, and the
corresponding utility obtained by its completion before
the deadline, which are the factors considered in devising an
effective scheduling algorithm. This paradigm is known
as the Utility Accrual (UA) paradigm.
Many researchers have proposed different scheduling
algorithms that run in cloud computing environments.
Most of the proposed scheduling algorithms
attempt to achieve two main objectives:
to run the user task within the deadline, and to
maintain efficiency (load balancing) and fairness across all
tasks (Li et al., 2010; Gupta and Rakesh, 2010; Yang et
al., 2011). Here, we review the most relevant research
works in the literature on job scheduling in cloud
computing.
Garg et al. (2009) addressed the issue of increasing
energy consumption by data centers in cloud computing.
A mathematical model for energy efficiency based on
various factors such as energy cost, CO2 emission rate,
HPC workload, and CPU power efficiency was
proposed. Within this model, a near-optimal scheduling
algorithm that exploits heterogeneity across multiple data
centers for a cloud provider was introduced.
Li et al. (2009) introduced a novel approach named
EnaCloud, which enables dynamic live placement of
applications with consideration of energy efficiency in
a cloud platform. They use a VM to encapsulate each
application, which supports application scheduling
and live migration to minimize the number of running
machines and save energy. However, the proposed
scheduling method treats the scheduling problem as
an assignment problem in mathematics, where the cost
matrix gives the cost of assigning a task to a
resource.
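As a rough illustration of the assignment-problem formulation mentioned above (not the exact method of the cited works), the sketch below builds a small, hypothetical task-by-resource cost matrix and solves it with a standard Hungarian-style solver:

# Illustrative sketch only: scheduling viewed as an assignment problem, where
# cost[i][j] is a hypothetical cost of running task i on resource j.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 3-task x 4-resource cost matrix (e.g., estimated runtime or price).
cost = np.array([
    [4.0, 2.5, 6.0, 3.0],
    [5.0, 4.5, 2.0, 7.0],
    [3.5, 6.0, 4.0, 2.5],
])

# Hungarian-style solver: assigns each task to one resource, minimizing total cost.
task_idx, resource_idx = linear_sum_assignment(cost)
for t, r in zip(task_idx, resource_idx):
    print(f"task {t} -> resource {r} (cost {cost[t, r]})")
print("total cost:", cost[task_idx, resource_idx].sum())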
1.2 Implementation
In this paper, we present the design and
evaluation of a system architecture for cloud resource
scheduling and prediction. The challenge is to develop a
cloud scheduling scheme that enables prediction-based
resource allocation, allowing the middleware to make autonomous
decisions while producing a desirable emergent property
in the cloud system; that is, two system-wide objectives
are achieved simultaneously. We discuss the key issues
for system implementation, including machine learning-based
methodologies for modeling and optimizing
resource prediction models. There are mainly two
mechanisms for acquiring information about cloud
resources: cloud resource monitoring and cloud resource
prediction. Cloud resource state monitoring concerns
the running state, distribution, and system load in the
cloud system by means of monitoring strategies. Cloud
resource state prediction focuses on the variation trend
and running track of resources in the cloud system by
modeling and analyzing each provider's load, i.e., CPU
usage. Periodic updates from monitoring and future
variations generated by prediction are combined
to feed the cloud system for analyzing performance,
diagnosing faults, and maintaining dynamic
load balancing, thereby helping cloud users obtain desired
computing results by efficiently utilizing system
resources in terms of minimized cost, maximized
performance, or tradeoffs between cost and performance.
To decrease overhead, the goal of designing a cloud
resource monitoring and prediction system is to achieve
seamless fusion between cloud technologies and
efficient resource monitoring and prediction
strategies. We make the following contributions:
The server or virtual machine acts as an intermediary between
clients and resource providers. It maintains details
about clients and providers and their status,
which are used for prediction-based dynamic
resource allocation.
Using the load prediction and scheduling algorithm,
the middleware allocates jobs to lightly loaded
providers to avoid overload; a minimal prediction sketch follows this list.
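As a minimal sketch of how periodic monitoring samples could be folded into a per-provider load prediction (the paper does not fix a specific predictor; an exponentially weighted moving average is used here purely as a placeholder, and all names are hypothetical):

# Placeholder predictor: smooths monitored CPU-usage samples per provider and
# exposes the provider expected to be lightest.
class ProviderLoadPredictor:
    def __init__(self, alpha=0.5):
        self.alpha = alpha      # smoothing factor (assumed value)
        self.predicted = {}     # provider id -> predicted CPU usage (0..100)

    def update(self, provider_id, observed_cpu):
        """Fold a newly monitored CPU-usage sample into the prediction."""
        prev = self.predicted.get(provider_id, observed_cpu)
        self.predicted[provider_id] = self.alpha * observed_cpu + (1 - self.alpha) * prev

    def lightest_provider(self):
        """Return the provider expected to have the lowest CPU usage."""
        return min(self.predicted, key=self.predicted.get)

# Example: monitoring feeds the predictor; the middleware would dispatch the
# next job to the lightest provider.
predictor = ProviderLoadPredictor()
predictor.update("provider-1", 70.0)
predictor.update("provider-2", 35.0)
predictor.update("provider-3", 55.0)
print(predictor.lightest_provider())   # -> provider-2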

2 SYSTEM OVERVIEW

The architecture of the system is presented in Fig 1.
The server or virtual machine acts as an intermediary between the clients
and the resource providers. Before a client submits a job request,
the server collects the resource providers' details, such as the resource
name and related information together with current system
connectivity, which are used for prediction-based dynamic
resource allocation. The client submits the job to the
middleware instead of selecting a provider itself;
the middleware (server or virtual machine) then schedules
the job by predicting the system load (measured as system
CPU usage) of the short-listed resource providers. Among
multiple providers, the server allocates the job to the provider
whose load, i.e., current CPU usage, is lowest (see the
selection sketch after the goals below). The server then collects the result from
the providers and sends it to the respective client. Thus,
client waiting time (response time) is reduced, and
each provider's utilization time and energy are lower,
so overall performance is better than in the
existing work. We achieve the following goals in our
process:

Dynamic resource allocation is executed in the cloud with
good performance metric values.
The total utilization of server resources is improved.
The performance evaluation gives a first
impression of how the ability to assign specific
virtual machine types to specific tasks of a
processing job, as well as the possibility to
automatically allocate and deallocate virtual machines
in the course of a job execution, can help
improve overall resource utilization and,
consequently, reduce processing cost.
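A minimal sketch of the selection step described above, with hypothetical provider records (a deployed middleware would use the predicted rather than the instantaneous CPU usage):

# Shortlist providers that offer the requested resource, then pick the one
# whose CPU usage is lowest. Field names are illustrative assumptions.
def select_provider(providers, requested_resource):
    shortlisted = [p for p in providers if requested_resource in p["resources"]]
    if not shortlisted:
        raise RuntimeError("no provider offers " + requested_resource)
    return min(shortlisted, key=lambda p: p["cpu_usage"])

providers = [
    {"id": "p1", "resources": {"resource1"}, "cpu_usage": 62.0},
    {"id": "p2", "resources": {"resource1", "resource2"}, "cpu_usage": 18.0},
    {"id": "p3", "resources": {"resource2"}, "cpu_usage": 9.0},
]
print(select_provider(providers, "resource1")["id"])   # -> p2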

3 LOAD PREDICTION AND SCHEDULING
ALGORITHM

An essential requirement in a cloud computing
environment is scheduling the current jobs to be
executed under the given constraints. The scheduler can
order the jobs in a way that balances
improving the quality of service with
maintaining efficiency and fairness among jobs. Job
scheduling in cloud computing has attracted great
attention. Most research in job scheduling adopts a
paradigm in which a job in a cloud computing system is
characterized by its workload, deadline, and the
corresponding utility obtained by its completion before
the deadline, which are the factors considered in devising an
effective scheduling algorithm.

We introduce the load prediction and
scheduling algorithm to perform efficient task
scheduling, including long jobs and multi-job
requests from cloud users. Our algorithm periodically
evaluates the CPU usage of each provider and orders the
providers by CPU usage. If the job is a single job, it
allocates the job to the provider with the lowest CPU usage,
then gets the output and displays it to the user. If the job is long
or a multi-job, it first splits the job into subtasks, then
counts the active providers that hold the resource
requested by the client and have low CPU usage, short-lists
them, and assigns the subtasks to the short-listed
providers. The server then collects the results from the
providers and sends them to the respective client.
Example client task request: prime number
calculation from 1 to 100000. The virtual machine predicts
suitable providers (e.g., 4 providers) based on system load,
splits the task into the ranges 1 to 25000, 25001 to 50000, 50001 to
75000, and 75001 to 100000, and allocates the subtasks to the
providers in parallel instead of assigning the whole task to one resource
provider. Through our approach, resource provider
utilization and performance are improved, and
client response time is reduced. After the
subtasks complete, the server maps them back into the original task and delivers
the task output to the respective client. Thus, our
approach improves overall cloud utilization and performance.
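As a self-contained illustration of this splitting step (local processes stand in for the four resource providers; the ranges and worker count are taken from the example above):

# Split 1..100000 into four subranges, count primes in each range in parallel,
# then merge the partial counts -- mirroring how the server maps subtask
# outputs back into one task output.
from concurrent.futures import ProcessPoolExecutor

def count_primes(lo, hi):
    """Count primes in [lo, hi] by simple trial division."""
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True
    return sum(1 for n in range(lo, hi + 1) if is_prime(n))

if __name__ == "__main__":
    ranges = [(1, 25000), (25001, 50000), (50001, 75000), (75001, 100000)]
    with ProcessPoolExecutor(max_workers=4) as pool:        # four "providers"
        partial_counts = list(pool.map(count_primes, *zip(*ranges)))
    print(sum(partial_counts))   # 9592 primes between 1 and 100000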
The proposed algorithm works as follows:
LOAD PREDICTION AND SCHEDULING ALGORITHM
Procedure:
1. Initialize A = {1, 2, ..., k} to hold all provider details.
2. For each client request Ci, get the matched providers among the providers in set A.
3. If the job is a single job:
4.    Compute the CPU usage Cp = a1, a2, ..., ak requested from the providers in set A.
5.    Order the providers a1 to ak by CPU usage in the hash table.
6.    Allocate the job to the low-CPU-usage provider p1, then get the output Ot and display it to client Ci.
7. Else:
8.    Compute the resource name Rn from the client request and download the multiple files in request Ci.
9.    If Rn equals resource1:
10.       Count the active providers Ap having resource1 and allocate the files in parallel to the providers in the hash table list.
11.       Map the outputs from the providers and generate the single output Oi.
12.       Open a connection to client Ci, send Oi to the client, and display it.
13.   Else if Rn equals resource2:
14.       Count the active providers Ap having resource2 and allocate the files in parallel to the providers in the hash table list.
15.       Map the outputs from the providers and generate the single output Oi.
16.       Open a connection to client Ci, send Oi to the client, and display it.
17.   Else (resource3):
18.       Count the active providers Ap having resource1 or resource2, split the files, and allocate them in parallel to the providers in the hash table list.
19.       Map the outputs from the providers and generate the single output Oi.
20.       Open a connection to client Ci, send Oi to the client, and display it.
21. End for.
22. Output the result window in client Ci.
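The following Python sketch restates the procedure above at a high level; the provider records and the run_on/merge helpers are hypothetical placeholders for the middleware's actual components:

# High-level restatement of the procedure. run_on(provider, payload) executes a
# (sub)task on a provider; merge(outputs) maps partial outputs into one result.
def schedule(request, providers, run_on, merge):
    # Step 2: keep only providers that can serve the requested resource
    # (resource3 requests may fall back to providers of resource1 or resource2).
    matched = [p for p in providers
               if request["resource"] in p["resources"]
               or request["resource"] == "resource3"]
    # Steps 4-5: order the matched providers by (predicted) CPU usage.
    matched.sort(key=lambda p: p["cpu_usage"])

    if request["type"] == "single":                      # steps 3-6
        return run_on(matched[0], request["payload"])

    # Steps 7-20: multi-job -- distribute the files across the active providers
    # (sequential here for brevity; the system runs them in parallel) and map
    # the partial outputs into a single output Oi.
    outputs = [run_on(matched[i % len(matched)], part)
               for i, part in enumerate(request["payload"])]
    return merge(outputs)                                # steps 11, 15, 19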

4 CONCLUSION
We have presented the design and evaluation of a system
architecture for cloud resource scheduling and
prediction, together with the key issues for system implementation,
including machine learning-based methodologies
for modeling and optimizing resource prediction
models. Evaluations are performed on a prototype
system. Our experimental results indicate that the
efficiency and accuracy of our system meet the demands
of an online system for dynamic resource utilization and
prediction.
REFERENCES

[1]. L. Siegele, Let It Rise: A Special Report on Corporate IT, The Economist, vol. 389, pp. 3-16, Oct. 2008.

[2]. M. Nelson, B.-H. Lim, and G. Hutchins, Fast Transparent Migration for Virtual Machines, Proc. USENIX Ann. Technical Conf., 2005.

[3]. T. Wood, P. Shenoy, A. Venkataramani, and M. Yousif, Black-Box and Gray-Box Strategies for Virtual Machine Migration, Proc. Symp. Networked Systems Design and Implementation (NSDI 07), Apr. 2007.

[4]. C.A. Waldspurger, Memory Resource Management in VMware ESX Server, Proc. Symp. Operating Systems Design and Implementation (OSDI 02), Aug. 2002.

[5]. G. Chen, H. Wenbo, J. Liu, S. Nath, L. Rigas, L. Xiao, and F. Zhao, Energy-Aware Server Provisioning and Load Dispatching for Connection-Intensive Internet Services, Proc. USENIX Symp. Networked Systems Design and Implementation (NSDI 08), Apr. 2008.

[6]. Paul, M. and G. Sanyal, 2011. Task-scheduling in cloud computing using credit based assignment problem. Int. J. Comput. Sci. Eng., 3: 3426-3430.

[7]. Sindhu, S. and S. Mukherjee, 2011. Efficient task scheduling algorithms for cloud computing environment. Commun. Comput. Inform. Sci., 169: 79-83. DOI: 10.1007/978-3-642-22577-2_11

[9]. P. Padala, K.-Y. Hou, K.G. Shin, X. Zhu, M. Uysal, Z. Wang, S. Singhal, and A. Merchant, Automated Control of Multiple Virtualized Resources, Proc. ACM European Conf. Computer Systems (EuroSys).

[10]. T. Das, P. Padala, V.N. Padmanabhan, R. Ramjee, and K.G. Shin, LiteGreen: Saving Energy in Networked Desktops Using Virtualization, Proc. USENIX Ann. Technical Conf., 2010.

[11]. Jiyani et al., Adaptive Resource Allocation for Preemptable Jobs in Cloud Systems (IEEE, 2010), pp. 31-36.

[12]. Jose Orlando Melendez and Shikharesh Majumdar, Matchmaking with Limited Knowledge of Resources on Clouds and Grids.

[13]. K.H. Kim et al., Power-Aware Provisioning of Cloud Resources for Real-Time Services, Proc. Int'l Workshop on Middleware for Grids, Clouds and e-Science, pp. 1-6, 2009.

[14]. Karthik Kumar et al., Resource Allocation for Real-Time Tasks Using Cloud Computing (IEEE, 2011).

[15]. Keahey et al., Sky Computing, IEEE Internet Computing, vol. 13, no. 5, pp. 43-51, Sept.-Oct. 2009.
