
Agent Based QoS Provisioning in Cloud Aggregation of Services

Name of the Candidate: Smt. Sreedevi R. Nagarmunoli
Name of the Guide: Dr. Nandini Sidnal

I Introduction and Problem Identification


With the rapid development of processing and storage technologies and the success of the Internet, computing resources have become cheaper, more powerful and more ubiquitously available than ever before. This trend has enabled the realization of a new computing model called cloud computing, in which computing resources such as CPU and storage are provided as general utilities that can be leased and released by users through the Internet in an on-demand fashion. The National Institute of Standards and Technology (NIST) [29-30] defines cloud computing as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. The emergence of cloud computing provides many opportunities for academia, the Information Technology (IT) industry and the global economy, amounting to an information technology revolution. Compared to other distributed computing paradigms such as Grid computing and High Performance Computing (HPC), cloud computing provides broader interoperability over world-wide web networks [6, 9]. A great deal of research is being carried out in universities, software labs and industry [38] on the issues and challenges in the cloud computing environment. Most of it addresses security aspects, bandwidth bottlenecks, data centers or large-scale data analysis. To date, little research has been carried out on monitoring and developing a repository of cloud services, customized aggregation of services, and distribution of services. There are several research issues [1-5, 11-14] in the cloud computing area. They can be classified as technical and non-technical. The technical issues are as follows:

Platform Management: challenges in delivering middleware capabilities for building, deploying, integrating and managing applications in a multi-tenant, elastic and scalable environment.

Cloud-enabled Applications and Platforms: challenges in building cloud-enabled applications and platforms that take advantage of the scalability, agility and reliability of the cloud.

Cloud Management: research challenges in delivering infrastructure resources on demand in a multi-tenant, secure, elastic and scalable environment; scalable management of network, computing and storage capacity; scalable orchestration of virtualized resources and data; placement optimization algorithms for energy efficiency, load balancing, high availability and QoS; accounting, billing, monitoring and pricing models; security, privacy and trust issues in the cloud; and energy efficiency models, metrics and tools at the system and datacenter levels.

Cloud Enablement: research challenges in enhancing platform infrastructure to support cloud management requirements, in terms of technologies for virtualization of infrastructure resources, virtualization of high performance infrastructure components, autonomic and intelligent management of resources, implications of the cloud paradigm for networking and storage systems, support for vertical elasticity, and provision of service-related metrics.

Cloud Interoperability: challenges in ensuring that available cloud services can work together and interoperate successfully, common and standard interfaces for cloud computing, and portability of virtual appliances across diverse cloud providers.

Cloud Aggregation of Services: research challenges in the aggregation of resources from diverse cloud providers, adding additional layers of service management; novel architectural models for aggregation of cloud providers; brokering algorithms for high availability, performance, proximity, legal domains, price or energy efficiency; sharing of resources between cloud providers; networking in the deployment of services across multiple cloud providers; SLA negotiation and management between cloud providers; additional privacy, security and trust management layers atop providers; and support for context-aware applications.

Some of the other issues are elastic scalability, trust, security and privacy, data handling, programming models and resource control, and systems development and management. While it is true that most of these areas have been actively researched for decades, the emergence of the cloud paradigm demands solutions beyond those produced to date. In particular, scalability and heterogeneity pose completely new issues, and the cloud's inherent problems of latency, distribution and segmentation enlarge the problem scope significantly. The networking and storage components that were hitherto often ignored need to become an integral part of the management and design-time stacks. On the other hand, non-technical issues play a major role in realizing these technological aspects and in ensuring the viability of the infrastructures in the first instance. These include (1) economic aspects, which cover knowledge about when, why and how to use which cloud system, and how this impacts the original infrastructure provider; long-term experience is lacking in all these areas; (2) legal issues, which arise as a consequence of the dynamic (location) handling of clouds, their scalability and the partially unclear legislative situation on the Internet, covering in particular intellectual property rights and data protection; and (3) aspects related to green IT, which need further elaboration, as the cloud in principle offers green capabilities by reducing unnecessary power consumption, given that good scaling behavior and good economic models are in place. A few other non-technical issues are legislation, government and policies, extended business knowledge, improved QoS management, energy-proportional computing, etc.

Agent Technology

The field of software agent technology is a rapidly developing area of research which encompasses a diverse range of topics and interests [31-32]. A software agent is a program working on behalf of either a user or another program, autonomously and continuously in a certain environment. It coexists with other processes and agents, and is able to learn from previous experience while functioning in an environment over a period of time [33-35]. A mobile agent is a type of software agent that can migrate from one network computing device to another while executing. Mobile agents carry their program, data and execution state to specific locations to complete a task. A mobile agent can create clones that visit several machines in parallel, in an asynchronous manner, to perform distributed tasks.
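The cloning behaviour described above can be sketched in a few lines of Python. The agent class, the discovery task and the host names below are purely illustrative assumptions, not a real agent-mobility framework; migration is simulated by running the carried task locally against each host.

```python
from concurrent.futures import ThreadPoolExecutor

class MobileAgent:
    """Toy mobile agent: carries its task (program), data and state together."""
    def __init__(self, task, data):
        self.task = task          # the program the agent carries
        self.data = data          # the data it carries along
        self.state = "created"    # execution state travels with the agent

    def clone(self):
        # A clone carries the same task and a copy of the data,
        # but runs independently of its parent.
        return MobileAgent(self.task, dict(self.data))

    def visit(self, host):
        # A real system would migrate the agent to `host`; here we
        # simply execute the carried task against that host.
        self.state = f"running on {host}"
        return self.task(host, self.data)

def discover(host, data):
    # Hypothetical per-host task: report which services a host offers.
    return (host, data["query"], f"services@{host}")

parent = MobileAgent(discover, {"query": "storage"})
hosts = ["node-a", "node-b", "node-c"]

# One clone per host, dispatched asynchronously and run in parallel.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda h: parent.clone().visit(h), hosts))

print(results)
```

Each clone visits one host, so discovery across all hosts proceeds in parallel while the parent agent retains its own state.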

One of the mandatory properties of software agents is decision making, which requires human-like intelligence. A central aim of Artificial Intelligence (AI) and cognitive science is the construction of intelligent agents, which can be defined as software artifacts that exhibit intelligent, human-like behavior in complex domains over an extended period of time. Intelligent agents that provide human-like reasoning for decision making are called cognitive agents [36]. Cognitive agents mimic the human thought process and represent the logical transition of research on human information processing to practical application. Of the available agent architectures, a cognitive architecture [37] may be adopted, since it incorporates human intelligence and a human perspective in terms of beliefs (the agent's knowledge of its environment), desires (the states of the environment the agent prefers) and intentions (the states of the environment the agent is trying to achieve).

II Literature Survey

The literature survey is based on ongoing research in universities, academicians' perspectives on the issues and challenges in the cloud computing environment, and ongoing projects in research laboratories such as HP and IBM. [13] discusses the concept of cloud computing, some of the issues it tries to address, related research topics, and a cloud implementation available today. [14] investigates the challenges of developing a Campus Cloud based on aggregating resources across multiple universities. The requirements model and the architecture model of this cloud environment are presented, and an implementation methodology using open-source cloud middleware is also discussed. [100] presents a policy-centered QoS meta-model which can be used by service providers and consumers alike to express capabilities, requirements, constraints, and general management characteristics relevant for SLA establishment in service aggregations. It also provides a QoS assertion model which is generic, domain-independent and conforms to the WS-Policy syntax and semantics. Some ongoing research projects in cloud computing at various universities are discussed in the following paragraphs. Researchers at Boston University are exploring the merits of "Colocation Games" (CGs) as a novel, economically sound framework upon which emerging cloud architectures could be implemented. Carnegie Mellon University is actively involved in several cloud computing research programs and is one of the test sites for the Open Cirrus program; its research includes studies on multi-tier indexing for web search engines and an integrated cluster computing architecture, among others. Researchers at Duke University are exploring and testing trustworthy virtual cloud computing. Florida International University (FIU) researchers are leveraging cloud computing to analyze aerial images and objects to help support disaster mitigation and environmental protection. The IBM/Google Academic Cloud Computing Initiative (ACCI) is a joint university initiative to help computer science students gain the skills they need to build cloud infrastructures and applications. It aims to provide computer science students with a complete suite of open-source development tools so they can gain the advanced programming skills necessary to innovate and address the challenges of the cloud computing model, which uses many computers networked together through open standards, and thereby drive the Internet's next phase of growth.
The researchers at Indiana University are working on several cloud computing projects. Their research includes: large-scale distributed scientific experiments on shared substrate; exploring the use of cloud techniques to overcome current medical computing obstacles such as long computation times and large memory requirements; and the FutureGrid project, which will provide an experimental platform that accommodates batch, grid and cloud computing. The team at MIT is working in collaboration with Yale University and the University of Wisconsin at Madison on a comparative study of approaches to cluster-based, large-scale data analysis; in addition, they are independently studying cloud computing infrastructure and technology for education. The Cloud Computing and Distributed Systems (CLOUDS) Laboratory, formerly the GRIDS Lab, is a software research and development group within the Department of Computer Science and Software Engineering at the University of Melbourne, Australia. The CLOUDS Lab is actively engaged in the design and development of next-generation computing systems and applications that aggregate or lease services of distributed resources depending on their availability, capability, performance, cost, and users' quality-of-service requirements. The lab is working towards realizing this vision through its two flagship projects, Gridbus and Cloudbus. The aim of the project work in [15] is to investigate how underused computing resources within an enterprise may be harvested and harnessed to improve return on IT investment. In particular, the project seeks to increase the efficiency of use of general-purpose computers such as office machines and lab computers. As a motivating example, the (small) University of St Andrews operates ten thousand machines; in aggregate, their unused processing and storage resources represent a major untapped computing resource. The project will make harvested resources available in the form of ad-hoc clouds, the composition of which varies dynamically according to the supply of resources and the demand for cloud services. The work in [16] will investigate how this may be achieved.
Implementers and users of cloud services may wish to consider various high-level emergent properties of those services.

The aim of the project in [17] is to develop and evaluate techniques that allow desired high-level properties to be specified, mapped into appropriate low-level actions, and the results to be measured and reported in terms of the high-level properties. The goal of the research work defined in [18] is to make experiments better by using the cloud in a number of ways. The core idea is that experiments are formed as artifacts, for example as a virtual machine that can be put into the cloud. This enables reproducibility of experiments, a key concept that has too often been ignored. While using the cloud, the project can feed back into research on clouds by investigating how experiments involving the cloud itself can be formulated for use in the new Experimental Laboratory. The MapReduce skeleton, introduced by Google to provide a uniform framework for their massively parallel computations, is proving remarkably flexible and is being proposed as a uniform framework for high performance computing in the cloud. The project in [19] would investigate a range of problems in the established area of computational abstract algebra in order to see whether, or how, they can be effectively parallelized using this framework. The cost and time needed to move data around are currently among the major bottlenecks in the cloud. Users with large volumes of data may therefore wish to specify where that data should be made available, when it may be moved around, and so on. Furthermore, regulations such as data protection regulations may place constraints on the movement of data and on the national jurisdictions where it may be maintained. The aim of the project in [20] is to investigate the practical issues which affect data migration in the cloud, to propose mechanisms for specifying policies on data migration, and to use these as a basis for a data management system.
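As a rough illustration of the MapReduce skeleton mentioned above, the following is a minimal in-memory sketch in Python. The function names and the word-count example are assumptions for illustration; real frameworks additionally shard the input, distribute the phases across machines and handle failures.

```python
from collections import defaultdict
from itertools import chain

def map_reduce(inputs, mapper, reducer):
    """Minimal in-memory MapReduce skeleton (illustrative only)."""
    # Map phase: each input record yields (key, value) pairs.
    mapped = chain.from_iterable(mapper(x) for x in inputs)
    # Shuffle phase: group intermediate values by key.
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    # Reduce phase: fold each key's values into a final result.
    return {key: reducer(key, values) for key, values in groups.items()}

# Classic word count expressed as a mapper/reducer pair.
def tokenize(line):
    return [(word, 1) for word in line.split()]

def total(word, counts):
    return sum(counts)

lines = ["cloud services in the cloud", "services on demand"]
print(map_reduce(lines, tokenize, total))
# {'cloud': 2, 'services': 2, 'in': 1, 'the': 1, 'on': 1, 'demand': 1}
```

The appeal of the skeleton is that only `tokenize` and `total` are problem-specific; the framework owns distribution, which is why it is being proposed as a uniform model for cloud computation.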

The aim of the work defined in [21] is to investigate how a migration of applications may result in changes to the way that work is actually done. We know from many years of ethnography that work practice in a setting evolves to reflect the systems and culture of that setting, and that people develop work-arounds to cope with system problems and failures. How might current work-arounds change when the system is in the cloud rather than locally provided? Do the affordances of systems in the cloud differ from those that are locally provided? What 'cloud-based systems' (e.g. Twitter) might be used to support new kinds of work-arounds and communication? The aim of the work in [22] is to investigate the use of cloud computing for mobile network data archiving, covering a variety of topics in distributed systems including network measurement, privacy, anonymisation/sanitization, data protection and computation caching. A major concern in cloud adoption is security, and the US Government has just announced a Cloud Computing Security Group in acknowledgement of the problems such networking is expected to entail. However, basic network security is flawed at best: even with modern protocols, hackers and worms can attack a system and create havoc within a few hours. Within a cloud, the prospects for incursion are many and the rewards are rich. Architectures and applications must be protected, and security must be appropriate, emergent and adaptive. The Ph.D. work in [23] discusses the following questions: Should security be centralized or decentralized? Should one body manage security services? What security is necessary and sufficient? How do we deal with emergent issues?

Verification, validation and testing (VV&T) are all necessary for basic system evaluation and adoption, but when the system and data sources are distributed, these tasks are invariably done in an ad hoc or random manner. The future of testing will be different under new environments; novel system testing strategies may be required to facilitate verification, and new metrics will be required to describe levels of system competence and satisfaction. The topics of research within cloud VV&T, from formal verification through to empirical research and metric validation of multi-part or parallel analysis, are discussed in [24]. Testing can be applied to systems, security, architecture models and other constructs within the cloud environment. Failure analysis, taxonomies, error handling and recognition are all related areas of potential research. A cloud may be viewed as comprising the union of a dynamically changing set of cloudlets, each of which provides some particular functionality. Each cloudlet runs on a potentially dynamically changing set of physical machines, and a given machine may host parts of multiple cloudlets. The mappings between cloudlets and physical resources must be carefully managed. To be practical, such management must be automatic, but producing timely, high-quality management decisions for a cloud of significant scale is a difficult task. The aim of the project in [25] is to apply constraint programming techniques to solve this problem efficiently. Cloud computing requires the management of distributed resources across a heterogeneous computing environment. From the user viewpoint, these resources are typically "always on". While techniques exist for distributing the compute resources and giving the user a view of "always on", this has the potential to be highly inefficient in terms of energy usage. Over the past few years there has been much activity in building "green" (energy-efficient) equipment (computers, switches, storage) and energy-efficient data centers. The work in [26] will explore the use of virtualization in system and network resources in order to minimize energy usage whilst still meeting the service requirements and operational constraints of a cloud. The research work in [27] discusses the consequences of dynamically provisioned resource allocation under denial-of-service attacks, aiming to build protection against denial of service into the cloud in order to reduce the waste of resources. Technology-enhanced learning environments such as Finesse have always suffered from unpredictable and sporadic peak demands that are several orders of magnitude greater than their normal load. Due to their interactive nature, it is essential that extra resources are quickly allocated, and due to potential issues of cost, it is also essential that such jumps in resource allocation are quickly released when peaks subside. The work in [28] proposes to revise the analytical model to accommodate cloud computing and to carry out experiments and measurements, comparing the responsiveness with earlier work done on web and grid computing.

In HP Labs, the research is focused on delivering the secure application and computing end state of everything-as-a-service. This research envisions billions of users securely accessing millions of services through thousands of service providers, over millions of servers that process vast amounts of data delivered securely through terabytes of network traffic. Foundational technologies are being created to expand the use and relevance of cloud computing in the enterprise, and work is being carried out on an enterprise cloud platform ranging from computing resources to human skills. The work focuses on security analytics that will automate enterprise-grade security and address one of the biggest obstacles to the broad adoption of the cloud in the enterprise. The main goals of research in HP Labs are to collaborate with customers to develop a set of cloud-based applications, to examine data center and application design principles to determine future cloud computing requirements, and to determine what an ideal cloud data center would look like.

IBM researchers' need for faster turnaround times in provisioning resources for specific research projects drove the company's adoption of cloud computing. Typically, it required two weeks for researchers to gain approval for a resource request, get the appropriate infrastructure identified and provisioned, and have usage monitoring in place; valuable time was being lost. IBM has doubled its productivity using the cloud. Google and IBM are jointly working on data centers in the cloud, tackling the major challenges facing today's storage clouds, including cost effectiveness, data mobility across cloud providers, security guarantees and the massive computing power demands that affect QoS. Gartner is working on security risks and cloud-based ERP.

III Objectives

In this work, we plan to study the following issues in cloud computing and try to resolve some of the issues in aggregating services. A cloud aggregator is a platform or service that combines multiple clouds with similar characteristics (geographic area, cost, technology, size, etc.) into a single point of access, format and structure. Value is derived from the cost savings and greater efficiency gained from the ability to easily leverage multiple service providers.

As a cost-effective and time-efficient way to develop new applications and services, service aggregation in cloud computing empowers service providers and consumers alike and creates tremendous opportunities in various industry sectors. However, it also poses various challenges to the privacy of personal information, as well as to the confidentiality of business and governmental information. The full benefits of service aggregation in cloud computing will only be enjoyed if these issues are addressed properly.

Some of the issues that need to be resolved in aggregating cloud services [7-8, 10] are: availability of services that may be hired in real time without conflicts; novel architectural models for aggregation of cloud providers; brokering algorithms for high availability, performance, proximity, legal domains, price or energy efficiency; sharing of resources between cloud providers; networking in the deployment of services across multiple cloud providers; additional privacy, security and trust management layers atop providers; support of context-aware applications; and automatic management of service elasticity. The objective of this research is to design an agent-based QoS provisioning system for cloud clients. The issues to be considered in our research are as follows: to design a novel cognitive-agent-based architecture/scheme for discovering futuristic cloud services (those that may be in demand) and to develop a repository of the same by networking multiple cloud providers; to design a scheme to autonomously and intelligently monitor, negotiate and aggregate the resources from the cloud repository based on the QoS (time, price, availability) defined in the cloud clients' requests, where the scheme shall explore the use of virtualization in system resources in order to minimize energy usage whilst still meeting the service requirements and operational constraints of a cloud; and to design a scheme to dynamically and automatically deliver/distribute/schedule the services to the requesting clients, ensuring high availability of services, and to develop a billing and pricing model for measuring cloud service utility.
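To make the QoS-driven selection concrete, the sketch below filters and ranks candidate service offers against a client's time, price and availability constraints. The `Offer` fields, the scoring weights and the thresholds are illustrative assumptions for this sketch, not the scheme this research will design.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    price: float          # cost per unit-hour
    response_time: float  # seconds
    availability: float   # fraction of uptime, 0..1

def feasible(offer, req):
    """An offer is feasible only if it meets every hard QoS constraint."""
    return (offer.price <= req["max_price"]
            and offer.response_time <= req["max_time"]
            and offer.availability >= req["min_availability"])

def rank(offers, req):
    """Rank feasible offers: lower price and response time are better,
    higher availability is better (a simple normalized score)."""
    def score(o):
        return (o.price / req["max_price"]
                + o.response_time / req["max_time"]
                - o.availability)
    return sorted((o for o in offers if feasible(o, req)), key=score)

offers = [
    Offer("vendor-a", 0.9, 2.0, 0.999),
    Offer("vendor-b", 0.5, 6.0, 0.990),
    Offer("vendor-c", 0.4, 1.5, 0.900),  # cheap but too unreliable
]
req = {"max_price": 1.0, "max_time": 8.0, "min_availability": 0.95}
print([o.provider for o in rank(offers, req)])  # ['vendor-a', 'vendor-b']
```

Returning the whole ranked list, rather than a single winner, matches the objective of presenting clients with multiple aggregated options.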

IV Proposed Methodology:
The cloud providers are networked by segmenting or clustering them based on the type of services provided, geographical location, etc. Cognitive agents crawl blindly through the cloud to discover futuristic cloud services and build the repository of services. Prediction of the next request may be done using log records, click-stream records and user information, or a Markov model may be used to anticipate futuristic requests when discovering cloud services. The discovery process may be carried out in parallel using the concept of agent cloning. The repository shall be updated at regular intervals, using aging techniques to eliminate stale information. A multidimensional data structure shall be deployed to store the cloud services in the repository. Efficient indexing algorithms and meta-services (a service cache) shall be adopted to retrieve service information from the repository, improving the performance of repository access. The repository shall store the services offered, vendor details, pricing, current status, QoS, etc. Based on the service requests from cloud clients, cognitive agents monitor the status of the services, negotiate with the vendors, and aggregate them based on the specified QoS. An unsupervised learning mechanism may help the agents to negotiate intelligently for better prices when aggregating and distributing the cloud services. English auctions may be used to maximize the profits from vending the services. Multiple options of aggregated services are to be given to the clients in order to increase their satisfaction level.
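The Markov-model prediction step mentioned above can be sketched as a first-order transition table built from a request log: the predicted next request is the most frequent successor of the current one. The log contents and service names here are invented for illustration.

```python
from collections import Counter, defaultdict

def build_model(log):
    """Count transitions request[i] -> request[i+1] in the log."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(log, log[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict_next(model, current):
    """Return the most frequent successor of `current`, if any."""
    followers = model.get(current)
    return followers.most_common(1)[0][0] if followers else None

log = ["compute", "storage", "compute", "storage", "analytics",
       "compute", "storage"]
model = build_model(log)
print(predict_next(model, "compute"))  # "compute" is most often followed by "storage"
```

A higher-order model (conditioning on the last k requests) or per-user models built from click-stream records would refine the prediction at the cost of a larger table.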

After the services are aggregated, they are to be distributed in a customized way. Scheduling has to be done optimally so as to maximize the availability and utility of services. Billing and pricing algorithms are to be developed for the delivered services. All of the above objectives will be simulated under various scenarios to assess the performance and effectiveness of the proposed scheme. The simulation shall be carried out on an IBM BladeCenter HS22 using a compatible programming language.
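As a trivial illustration of the kind of usage-based billing computation to be developed, the sketch below charges each service at a flat per-unit-hour rate. The rates, service names and rounding rules are hypothetical assumptions, not the pricing model this work will propose.

```python
# Hypothetical flat per-unit-hour rates (illustrative values only).
RATES = {"compute": 0.08, "storage": 0.02, "network": 0.01}

def bill(usage, rates=RATES):
    """usage: {service: hours used}; returns per-service charges and the total."""
    charges = {svc: round(hours * rates[svc], 2) for svc, hours in usage.items()}
    return charges, round(sum(charges.values()), 2)

charges, total = bill({"compute": 10, "storage": 100, "network": 5})
print(charges, total)
# {'compute': 0.8, 'storage': 2.0, 'network': 0.05} 2.85
```

A realistic model would layer on tiered rates, negotiated discounts from the agents, and SLA penalties, but the metering-times-rate core stays the same.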

V Possible outcome
The possible outcomes are: to develop a framework for agent-based QoS in cloud aggregation of services that integrates the schemes/platforms covering the above-mentioned objectives, and to develop an application (a campus cloud) covering the objectives. In this process, a complete scheme for a campus cloud shall be designed. The proposed work aims to bring out flexible and adaptable services by using agents. Agent technology offers several benefits in aggregating cloud services, such as autonomy in discovering cloud services, developing and updating the repository, embedded intelligence, flexibility in negotiation, adaptability to network environments, and customization of QoS requirements. The research work may be enhanced in future by applying agent-based solutions to other issues such as cloud management, enablement and interoperability, and by developing further applications.

VI References:
1. Ignacio M. Llorente, Key Research Challenges in Cloud Computing, http://opennebula.org/_media/community:open_challenges_in_cloud_computing.pdf
2. B. Rochwerger, J. Caceres, R. S. Montero, D. Breitgand, E. Elmroth, A. Galis, E. Levy, I. M. Llorente, K. Nagin, Y. Wolfsthal, The RESERVOIR Model and Architecture for Open Federated Cloud Computing, IBM Systems Journal, Vol. 53, No. 4, 2009.
3. B. Sotomayor, R. S. Montero, I. M. Llorente and I. Foster, Virtual Infrastructure Management in Private and Hybrid Clouds, IEEE Internet Computing, Vol. 13, No. 5, September/October 2009.
4. Rafael Moreno-Vozmediano, Ruben S. Montero, Ignacio M. Llorente, Multi-Cloud Deployment of Computing Clusters for Loosely-Coupled MTC Applications, IEEE Transactions on Parallel and Distributed Systems, in press.
5. Mark Vanderwiele, The IBM Research Cloud Computing Initiative, keynote talk at ICVCI 2008, RTP, NC, USA, 15-16 May 2008.
6. Wikipedia, Cloud Computing, http://en.wikipedia.org/wiki/Cloud_computing, May 2008.
7. Hany H. Ammar, Alaa Hamouda, Mustafa Gamal, Walid Abdelmoez and Ahmed Moussa, CampusCloud: Aggregating Universities' Computing Resources in Ad-Hoc Clouds.
8. David Bernstein, Erik Ludvigson, Krishna Sankar, Steve Diamond, Monique Morrow, Blueprint for the Intercloud: Protocols and Formats for Cloud Computing Interoperability, IEEE Computer Society, 2009.
9. Rajkumar Buyya, Chee Shin Yeo, Srikumar Venugopal, James Broberg, Ivona Brandic, Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility, Future Generation Computer Systems, December 2008. See also OpenNebula, http://www.opennebula.org/
10. Marty Humphrey and Glenn Wasson, The University of Virginia Campus Grid: Integrating Grid Technologies with the Campus Information Infrastructure, Lecture Notes in Computer Science, Vol. 3470, 2005, pp. 50-58.
11. CamGrid, http://www.escience.cam.ac.uk/projects/camgrid/
12. OxGrid, http://www.oerc.ox.ac.uk/resources/oxgrid/oxgrid-concept
13. Mladen A. Vouk, Cloud Computing: Issues, Research and Implementations, Journal of Computing and Information Technology (CIT), Vol. 16, No. 4, 2008, pp. 235-246.
14. W. M. Bulkeley, IBM, Google, Universities Combine Cloud Forces, Wall Street Journal, October 8, 2007, available at http://online.wsj.com/public/article
15. A. Dearle and G. Kirby, Harvesting Unused Resources, available from http://www.cs.st-andrews.ac.uk/node/1723
16. A. Dearle and G. Kirby, Ad-Hoc Clouds, available from http://www.cs.st-andrews.ac.uk/node/1723
17. G. Kirby and A. Dearle, Specifying, Measuring and Understanding High-Level Cloud Properties, available from http://www.cs.st-andrews.ac.uk/node/1723
18. I. Gent, An Experimental Laboratory in the Cloud, available from http://www.cs.st-andrews.ac.uk/node/1723
19. S. Linton, Computational Group Theory with Map-Reduce, available from http://www.cs.st-andrews.ac.uk/node/1723
20. I. Sommerville, Data Migration in the Cloud, available from http://www.cs.st-andrews.ac.uk/node/1723
21. I. Sommerville, Socio-technical Issues in Cloud Computing, available from http://www.cs.st-andrews.ac.uk/node/1723
22. T. Henderson, Mobile Data Archiving in the Cloud, available from http://www.cs.st-andrews.ac.uk/node/1723
23. I. Duncan, Cloud Security, available from http://www.cs.st-andrews.ac.uk/node/1723
24. I. Duncan, Cloud VV&T and Metrics, available from http://www.cs.st-andrews.ac.uk/node/1723
25. I. Miguel, A. Dearle and G. Kirby, Constraint-Based Cloud Management, available from http://www.cs.st-andrews.ac.uk/node/1723
26. Saleem Bhatti, The Green Cloud, available from http://www.cs.st-andrews.ac.uk/node/1723
27. Colin Allison and Alan Miller, Denial of Service Issues in Cloud Computing, available from http://www.cs.st-andrews.ac.uk/node/1723
28. Mohan Baruwal Chhetri, Bao Quoc Vo and Ryszard Kowalczyk, Policy-based Management of QoS in Service Aggregations, 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, 2010.
29. National Institute of Standards and Technology, definition of cloud computing, available from http://www.nist.gov/itl/cloud/upload/cloud-def-v15.pdf
30. Evelyn Brown, Final Version of NIST Cloud Computing Definition Published, 2011, available from http://www.nist.gov/itl/csd/cloud-102511.cfm
31. G. Weiss, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, MIT Press, USA, 1999.
32. UMBC Agents Web, http://www.cs.umbc.edu/agents, May 2010.
33. S. Franklin and A. Graesser, Is It an Agent, or Just a Program?, Proc. International Workshop on Agent Theories, Architectures and Languages (ATAL-96), 1996, pp. 21-35.
34. N. R. Jennings, Developing Agent-Based Systems, IEE Proceedings: Software Engineering, Vol. 144, pp. 424-430, 1997.
35. J. Bradshaw, Software Agents, AAAI Press, USA.
36. A. S. Rao and M. P. Georgeff, Modeling Rational Agents within a BDI-Architecture, Proc. International Conference on Principles of Knowledge Representation and Reasoning, 1991, pp. 473-484.
37. P. Cohen and H. J. Levesque, Intention Is Choice with Commitment, Artificial Intelligence, Vol. 42, pp. 213-261, 1990.
38. Cloud Computing Research, http://www.cloudbook.net/directories/research-clouds/cloud-computing-research.php
