INTRODUCTION
Cloud computing has recently emerged as a promising hosting platform that allows
multiple cloud users, called tenants, to share a common physical computing infrastructure.
With the rapid adoption of Software as a Service (SaaS) and Service-Oriented
Architecture (SOA), the Internet has evolved into an important service delivery
infrastructure instead of merely providing host connectivity. We present IntTest, a
verification framework that can efficiently verify the integrity of data processing results in
the cloud infrastructure and pinpoint malicious service providers when inconsistent results
are detected. We verify service integrity by analyzing result-consistency information with
graph analysis. We propose a new runtime service integrity verification scheme that
employs a novel attestation graph model to capture attestation results among different cloud
nodes.
Overview:
Some limitations slow the adoption of SaaS and prevent it from being used in some
cases:
Since data are being stored on the vendor’s servers, data security becomes an issue.
SaaS applications are hosted in the cloud, far away from the application users. This
introduces latency into the environment; so, for example, the SaaS model is not suitable for
applications that demand response times in the milliseconds.
Some business applications require access to or integration with customer's current
data. When such data are large in volume or sensitive (e.g., end users' personal information),
integrating them with remotely hosted software can be costly or risky, or can conflict with
data governance regulations.
Constitutional search/seizure warrant laws do not protect all forms of SaaS
dynamically stored data. The end result is that a link is added to the chain of security where
access to the data, and, by extension, misuse of these data, are limited only by the assumed
honesty of 3rd parties or government agencies able to access the data on their own
recognizance.
Switching SaaS vendors may involve the slow and difficult task of transferring very
large data files over the Internet.
Organizations that adopt SaaS may find they are forced into adopting new versions,
which might result in unforeseen training costs or an increase in probability that a user might
make an error.
Relying on an Internet connection means that data are transferred to and from a SaaS
firm at Internet speeds, rather than at the potentially higher speeds of a firm's internal network.
We provide a scalable and efficient distributed service integrity attestation framework
for large-scale cloud computing infrastructures. We present a novel integrated service
integrity attestation scheme that can achieve higher pinpointing accuracy than previous
techniques. We describe a result auto-correction technique that can automatically correct the
corrupted results produced by malicious attackers. We conduct both analytical study and
experimental evaluation to quantify the accuracy and overhead of the integrated service
integrity attestation scheme.
Cloud computing
The term "moving to cloud" also refers to an organization moving away from a
traditional CAPEX model to the OPEX model. Proponents claim that cloud computing allows
companies to avoid upfront infrastructure costs, and focus on projects that differentiate their
businesses instead of on infrastructure. Proponents also claim that cloud computing allows
enterprises to get their applications up and running faster, with improved manageability and
less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and
unpredictable business demand. Cloud providers typically use a "pay as you go" model. This
can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing
model. The present availability of high-capacity networks, low-cost computers and storage
devices as well as the widespread adoption of hardware virtualization, service-oriented
architecture, and autonomic and utility computing have led to a growth in cloud computing.
Agility:
API accessibility to the software enables machines to interact with cloud software
in the same way that a traditional user interface facilitates interaction between humans and
computers. Cloud computing systems typically use Representational State Transfer (REST)-
based APIs.
Cost:
Users can access systems using a web browser regardless of their location. As the
infrastructure is off-site and accessed via the Internet, users can connect from anywhere.
Maintenance
Maintenance of cloud computing applications is easier, because they do not need to be installed
on each user's computer and can be accessed from different places.
Performance
Productivity
It may be increased when multiple users can work on the same data simultaneously,
rather than waiting for it to be saved and emailed. Time may be saved as information does not
need to be re-entered when fields are matched, nor do users need to install application
software upgrades on their computers.
Reliability
It improves with the use of multiple redundant sites, which makes well-designed
cloud computing suitable for business continuity and disaster recovery.
Security
In the most basic cloud-service model, and according to the IETF, providers of IaaS offer
computers (physical or virtual machines) and other resources. IaaS clouds often offer
additional resources such as a virtual-machine disk image library, raw block storage, and file
or object storage, firewalls, load balancers, IP addresses, virtual local area
networks (VLANs), and software bundles. IaaS-cloud providers supply these resources on-
demand from their large pools installed in data centers. For wide-area connectivity, customers
can use either the Internet or carrier clouds.
To deploy their applications, cloud users install operating-system images and their
application software on the cloud infrastructure. In this model, the cloud user patches and
maintains the operating systems and the application software. Cloud providers typically bill
IaaS services on a utility computing basis: cost reflects the amount of resources allocated and
consumed.
In the PaaS models, cloud providers deliver a computing platform, typically including
operating system, programming language execution environment, database, and web server.
Application developers can develop and run their software solutions on a cloud platform
without the cost and complexity of buying and managing the underlying hardware and
software layers. With some PaaS offers like Microsoft Azure and Google App Engine, the
underlying computer and storage resources scale automatically to match application demand
so that the cloud user does not have to allocate resources manually.
Platform as a service (PaaS) provides a computing platform and a solution stack as a
service. Together with software as a service (SaaS) and infrastructure as a service (IaaS), it
is one of the service models of cloud computing. An architecture aiming to facilitate
real-time applications in cloud environments has also been proposed.
Cloud computing layers: cloud clients (web browsers, mobile apps, thin clients, terminal
emulators) access SaaS applications (e.g., games), which are built on PaaS (e.g.,
development tools), which in turn runs on IaaS resources (e.g., load balancers and networks).
Deployment Models
Private cloud
Private cloud is cloud infrastructure operated solely for a single organization, whether
managed internally or by a third-party, and hosted either internally or externally. Undertaking
a private cloud project requires a significant level and degree of engagement to virtualize the
business environment, and requires the organization to reevaluate decisions about existing
resources. When done right, it can improve business, but every step in the project raises
security issues that must be addressed to prevent serious vulnerabilities. Self-run data
centers are generally capital intensive. They have a significant physical footprint, requiring
allocations of space, hardware, and environmental controls. These assets have to be refreshed
periodically, resulting in additional capital expenditures. They have attracted criticism
because users "still have to buy, build, and manage them" and thus do not benefit from less
hands-on management, essentially "the economic model that makes cloud computing such an
intriguing concept".
Public Cloud
A cloud is called a "public cloud" when the services are rendered over a network that
is open for public use. Public cloud services may be free or offered on a pay-per-usage
model. Technically there may be little or no difference between public and private cloud
architecture, however, security consideration may be substantially different for services that
are made available by a service provider for a public audience and when communication is
effected over a non-trusted network. Generally, public cloud service providers like Amazon
AWS, Microsoft and Google own and operate the infrastructure at their data center and
access is generally via the Internet. AWS and Microsoft also offer direct connect services
called "AWS Direct Connect" and "Azure ExpressRoute" respectively, such connections
require customers to purchase or lease a private connection to a peering point offered by the
cloud provider.
Hybrid Cloud
[Figure: hybrid cloud - a cloud OS makes placement decisions for application loads across
public and private clouds.]
One example of a hybrid cloud is where IT organizations use public cloud
computing resources to meet temporary capacity needs that cannot be met by the private
cloud. This capability enables hybrid clouds to employ cloud bursting for scaling across
clouds. Cloud bursting is an application deployment model in which an application runs in a
private cloud or data center and "bursts" to a public cloud when the demand for computing
capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an
organization only pays for extra compute resources when they are needed. Cloud bursting
enables data centers to create an in house IT infrastructure that supports average workloads,
and use cloud resources from public or private clouds, during spikes in processing demands.
Cloud Security:
Cloud computing and storage provides users with capabilities to store and process their data
in third-party data centers. Organizations use the cloud in a variety of different service
models (with acronyms such as SaaS, PaaS, and IaaS) and deployment models
(private, public, hybrid, and community). Security concerns associated with cloud computing
fall into two broad categories: security issues faced by cloud providers (organizations
providing software-, platform-, or infrastructure-as-a-service via the cloud) and security
issues faced by their customers (companies or organizations who host applications or store
data on the cloud). The responsibility is shared, however. The provider must ensure that their
infrastructure is secure and that their clients’ data and applications are protected, while the
user must take measures to fortify their application and use strong passwords and
authentication measures.
When an organization elects to store data or host applications on the public cloud, it loses its
ability to have physical access to the servers hosting its information. As a result, potentially
sensitive data is at risk from insider attacks. According to a recent Cloud Security
Alliance Report, insider attacks are the sixth biggest threat in cloud computing. Therefore,
Cloud Service providers must ensure that thorough background checks are conducted for
employees who have physical access to the servers in the data center. Additionally, data
centers must be frequently monitored for suspicious activity.
In order to conserve resources, cut costs, and maintain efficiency, Cloud Service Providers
often store more than one customer's data on the same server. As a result, there is a chance
that one user's private data can be viewed by other users (possibly even competitors). To
handle such sensitive situations, cloud service providers should ensure proper data
isolation and logical storage segregation. The extensive use of virtualization in implementing
cloud infrastructure brings unique security concerns for customers or tenants of a public
cloud service. Virtualization alters the relationship between the OS and underlying hardware
- be it computing, storage or even networking. This introduces an additional layer -
virtualization - that itself must be properly configured, managed and secured. [6] Specific
concerns include the potential to compromise the virtualization software, or "hypervisor".
While these concerns are largely theoretical, they do exist.
Deterrent controls
These controls are intended to reduce attacks on a cloud system. Much like a warning sign on
a fence or a property, deterrent controls typically reduce the threat level by informing
potential attackers that there will be adverse consequences for them if they proceed.
Preventive controls
Preventive controls strengthen the system against incidents, generally by reducing if not
actually eliminating vulnerabilities. Strong authentication of cloud users, for instance, makes
it less likely that unauthorized users can access cloud systems, and more likely that cloud
users are positively identified.
Detective Controls
Detective controls are intended to detect and react appropriately to any incidents that occur.
In the event of an attack, a detective control will signal the preventative or corrective controls
to address the issue.[8] System and network security monitoring, including intrusion detection
and prevention arrangements, are typically employed to detect attacks on cloud systems and
the supporting communications infrastructure.
Corrective controls
Corrective controls reduce the consequences of an incident, normally by limiting the damage.
They come into effect during or after an incident. Restoring system backups in order to
rebuild a compromised system is an example of a corrective control.
Identity management
Every enterprise will have its own identity management system to control access to
information and computing resources. Cloud providers either integrate the customer’s
identity management system into their own infrastructure,
using federation or SSO technology, or a biometric-based identification system, or provide an
identity management system of their own. CloudID, for instance, provides privacy-preserving
cloud-based and cross-enterprise biometric identification. It links the confidential information
of the users to their biometrics and stores it in an encrypted fashion. Making use of a
searchable encryption technique, biometric identification is performed in encrypted domain
to make sure that the cloud provider or potential attackers do not gain access to any sensitive
data or even the contents of the individual queries.
Physical security
Cloud service providers physically secure the IT hardware (servers, routers, cables etc.)
against unauthorized access, interference, theft, fires, floods etc. and ensure that essential
supplies (such as electricity) are sufficiently robust to minimize the possibility of disruption.
This is normally achieved by serving cloud applications from 'world-class' (i.e. professionally
specified, designed, constructed, managed, monitored and maintained) data centers.
Personnel security
Various information security concerns relating to the IT and other professionals associated
with cloud services are typically handled through pre-, para- and post-employment activities
such as security screening potential recruits, security awareness and training programs,
proactive.
Privacy
Providers ensure that all critical data (credit card numbers, for example) are masked or
encrypted and that only authorized users have access to data in its entirety. Moreover, digital
identities and credentials must be protected as should any data that the provider collects or
produces about customer activity in the cloud.
Data Security:
A number of security threats are associated with cloud data services: not only traditional
security threats, such as network eavesdropping, illegal invasion, and denial of service
attacks, but also specific cloud computing threats, such as side channel attacks, virtualization
vulnerabilities, and abuse of cloud services. The following security requirements limit the
threats.
Confidentiality
Data confidentiality is the property that data contents are not made available or disclosed to
illegal users. Outsourced data is stored in a cloud and out of the owners' direct control. Only
authorized users can access the sensitive data while others, including CSPs, should not gain
any information of the data. Meanwhile, data owners expect to fully utilize cloud data
services, e.g., data search, data computation, and data sharing, without the leakage of the data
contents to CSPs or other adversaries.
Access controllability
Access controllability means that a data owner can perform the selective restriction of access
to his data outsourced to cloud. Legal users can be authorized by the owner to access the data,
while others cannot access it without permissions. Further, it is desirable to enforce fine-
grained access control to the outsourced data, i.e., different users should be granted different
access privileges with regard to different data pieces. The access authorization must be
controlled only by the owner in un-trusted cloud environments.
Integrity
Data integrity demands maintaining and assuring the accuracy and completeness of data. A
data owner always expects that his data in a cloud can be stored correctly and trustworthily. It
means that the data should not be illegally tampered, improperly modified, deliberately
deleted, or maliciously fabricated. If any undesirable operations corrupt or delete the data, the
owner should be able to detect the corruption or loss. Further, when a portion of the
outsourced data is corrupted or lost, it can still be retrieved by the data users.
Effective Encryption:
Some advanced encryption algorithms which have been applied into the cloud computing
increase the protection of privacy.
Ciphertext-policy ABE (CP-ABE)
In CP-ABE, the encryptor controls the access strategy. As the strategy becomes more
complex, the design of the system public key becomes more complex, and proving the
security of the system becomes more difficult. The main research work on CP-ABE is
focused on the design of the access structure.
Key-policy ABE (KP-ABE)
In KP-ABE, attribute sets are used to describe the encrypted texts, and users' private keys
are associated with policies specifying which encrypted texts they are entitled to decrypt.
Searchable Encryption (SE) is a cryptographic primitive which offers secure search functions over
encrypted data. In order to improve search efficiency, SE generally builds keyword indexes
to securely perform user queries. SE schemes can be classified into two categories: SE based
on secret-key cryptography and SE based on public-key cryptography.
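As a rough illustration of the secret-key flavor, the sketch below builds a keyword index whose entries are keyed by an HMAC "trapdoor" of each keyword, so the server can match queries on opaque tokens without learning the keywords. The class and method names are hypothetical, and real SE schemes add protections (for example, hiding access patterns) that this toy omits.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.*;

public class KeywordIndex {
    private final SecretKeySpec key;
    // trapdoor token -> ids of (encrypted) documents containing the keyword
    private final Map<String, List<Integer>> index = new HashMap<>();

    public KeywordIndex(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Deterministic trapdoor: HMAC of the keyword under the owner's secret key.
    // The server only ever sees this token, never the keyword itself.
    private String trapdoor(String keyword) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return Base64.getEncoder()
                     .encodeToString(mac.doFinal(keyword.getBytes(StandardCharsets.UTF_8)));
    }

    public void add(int docId, String... keywords) throws Exception {
        for (String kw : keywords)
            index.computeIfAbsent(trapdoor(kw), t -> new ArrayList<>()).add(docId);
    }

    // Search matches tokens, so it works even though the index holds no plaintext keywords.
    public List<Integer> search(String keyword) throws Exception {
        return index.getOrDefault(trapdoor(keyword), Collections.emptyList());
    }
}
```

Because the trapdoor is deterministic, equal keywords always map to the same token; this is what makes matching possible, and also why production schemes add further defenses.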
Ciphertext:
Types of Ciphers:
Historical ciphers
Historical pen and paper ciphers used in the past are sometimes known as classical ciphers.
They include:
Substitution cipher: the units of plaintext are replaced with cipher text (e.g., Caesar
cipher and one-time pad)
Polyalphabetic substitution cipher: a substitution cipher using multiple
substitution alphabets (e.g., Vigenère cipher and Enigma machine)
Polygraphic substitution cipher: the unit of substitution is a sequence of two or
more letters rather than just one (e.g., Playfair cipher)
Transposition cipher: the cipher text is a permutation of the plaintext (e.g., rail fence
cipher)
Historical ciphers are not generally used as a standalone encryption technique because they
are quite easy to crack. Many of the classical ciphers, with the exception of the one-time pad,
can be cracked using brute force.
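To make the brute-force weakness concrete, here is a minimal Caesar cipher sketch (the class name is ours): with only 26 possible shifts, an attacker can simply try them all.

```java
public class CaesarCipher {
    // Shift each letter by 'key' positions; non-letters pass through unchanged.
    public static String encrypt(String plaintext, int key) {
        StringBuilder out = new StringBuilder();
        for (char c : plaintext.toCharArray()) {
            if (Character.isUpperCase(c)) {
                out.append((char) ('A' + (c - 'A' + key) % 26));
            } else if (Character.isLowerCase(c)) {
                out.append((char) ('a' + (c - 'a' + key) % 26));
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    // Decryption is just encryption with the complementary shift.
    public static String decrypt(String ciphertext, int key) {
        return encrypt(ciphertext, 26 - (key % 26));
    }

    public static void main(String[] args) {
        String ct = encrypt("Attack at dawn", 3);
        System.out.println(ct);              // Dwwdfn dw gdzq
        System.out.println(decrypt(ct, 3));  // Attack at dawn
    }
}
```

An exhaustive attack only needs a loop over keys 1..25, which is why classical substitution ciphers are unsuitable as standalone encryption.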
Modern ciphers
Modern ciphers are more secure than classical ciphers and are designed to withstand a wide
range of attacks. An attacker should not be able to find the key used in a modern cipher, even
if he knows any amount of plaintext and corresponding cipher text. Modern encryption
methods can be divided into the following categories:
In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a
shared key set up in advance and kept secret from all other parties; the sender uses this key
for encryption, and the receiver uses the same key for decryption. In an asymmetric key
algorithm (e.g., RSA), there are two separate keys: a public key is published and enables any
sender to perform encryption, while a private key is kept secret by the receiver and enables
only him to perform correct decryption.
Symmetric key ciphers can be divided into block ciphers and stream ciphers. Block ciphers
operate on fixed-length groups of bits, called blocks, with an unvarying transformation.
Stream ciphers encrypt plaintext digits one at a time on a continuous stream of data and the
transformation of successive digits varies during the encryption process.
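A minimal symmetric-key round trip using the JDK's javax.crypto API might look like the following. ECB mode is chosen only for brevity (an authenticated mode such as GCM is preferable in practice), and the class name is ours.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

public class AesDemo {
    // Run AES in the given mode (ENCRYPT_MODE or DECRYPT_MODE) under a shared key.
    static byte[] crypt(int mode, SecretKey key, byte[] data) throws Exception {
        // ECB with PKCS5 padding is used here only to keep the example short;
        // real deployments should prefer an authenticated mode such as AES/GCM.
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(mode, key);
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        // Sender and receiver agree on this key in advance and keep it secret.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        byte[] ct = crypt(Cipher.ENCRYPT_MODE, key,
                          "sensitive cloud record".getBytes(StandardCharsets.UTF_8));
        byte[] pt = crypt(Cipher.DECRYPT_MODE, key, ct);
        System.out.println(new String(pt, StandardCharsets.UTF_8)); // round-trips to the plaintext
    }
}
```

The same `SecretKey` object serves both directions, which is exactly the shared-key property described above; in the asymmetric (RSA) setting the two operations would use different keys.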
Ranked Search:
A ranking is a relationship between a set of items such that, for any two items, the first is
either 'ranked higher than', 'ranked lower than' or 'ranked equal to' the second.
In mathematics, this is known as a weak order or total preorder of objects. It is not
necessarily a total order of objects because two different objects can have the same ranking.
The rankings themselves are totally ordered. For example, materials are totally preordered
by hardness, while degrees of hardness are totally ordered.
By reducing detailed measures to a sequence of ordinal numbers, rankings make it possible to
evaluate complex information according to certain criteria. Thus, for example, an Internet
search engine may rank the pages it finds according to an estimation of their relevance,
making it possible for the user quickly to select the pages they are likely to want to see.
Multi-Keyword Search:
With the advent of cloud computing, data owners are motivated to outsource their complex
data management systems from local sites to the commercial public cloud for great flexibility
and economic savings. But for protecting data privacy, sensitive data have to be encrypted
before outsourcing, which obsoletes traditional data utilization based on plaintext keyword
search. Thus, enabling an encrypted cloud data search service is of paramount importance.
Considering the large number of data users and documents in the cloud, it is necessary to
allow multiple keywords in the search request and return documents in the order of their
relevance to these keywords.
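A toy plaintext sketch of multi-keyword ranked retrieval: score each document by how many of the query keywords it contains and return document identifiers in descending relevance. Real schemes compute such scores over encrypted indexes; all names here are illustrative.

```java
import java.util.*;

public class RankedSearch {
    // Relevance score = number of query keywords present in the document.
    static int score(Set<String> docKeywords, List<String> query) {
        int s = 0;
        for (String q : query) if (docKeywords.contains(q)) s++;
        return s;
    }

    // Return document ids ordered by descending relevance to the multi-keyword query.
    static List<Integer> rank(Map<Integer, Set<String>> docs, List<String> query) {
        List<Integer> ids = new ArrayList<>(docs.keySet());
        ids.sort((a, b) -> score(docs.get(b), query) - score(docs.get(a), query));
        return ids;
    }

    public static void main(String[] args) {
        Map<Integer, Set<String>> docs = new HashMap<>();
        docs.put(1, new HashSet<>(Arrays.asList("cloud", "storage")));
        docs.put(2, new HashSet<>(Arrays.asList("cloud", "security", "storage")));
        docs.put(3, new HashSet<>(Arrays.asList("network")));
        // Document 2 matches all three query keywords, document 1 two, document 3 none.
        System.out.println(rank(docs, Arrays.asList("cloud", "storage", "security"))); // [2, 1, 3]
    }
}
```

In an encrypted setting the keyword sets would be replaced by a secure index and the scores computed without revealing the keywords, but the ranking step itself is the same.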
Hierarchical Clustering:
Agglomerative: This is a "bottom up" approach: each observation starts in its own
cluster, and pairs of clusters are merged as one moves up the hierarchy.
Divisive: This is a "top down" approach: all observations start in one cluster, and
splits are performed recursively as one moves down the hierarchy.
In hierarchical clustering, the data is not partitioned into a particular cluster in a single step. Instead, a
series of partitions takes place, which may run from a single cluster containing all objects to n
clusters that each contains a single object. Hierarchical Clustering is subdivided into agglomerative
methods, which proceed by a series of fusions of the n objects into groups, and divisive methods,
which separate n objects successively into finer groupings. Agglomerative techniques are more
commonly used, and this is the method implemented in XLMiner.
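The agglomerative ("bottom up") procedure can be sketched as follows on one-dimensional points, using single linkage (distance between clusters is the minimum distance between any two of their members) and merging the closest pair until k clusters remain. The class and helper names are ours.

```java
import java.util.*;

public class Agglomerative {
    // Bottom-up single-linkage clustering of 1-D points down to k clusters.
    static List<List<Double>> cluster(double[] points, int k) {
        List<List<Double>> clusters = new ArrayList<>();
        for (double p : points) clusters.add(new ArrayList<>(List.of(p))); // each point starts alone
        while (clusters.size() > k) {
            int bi = 0, bj = 1;
            double best = Double.MAX_VALUE;
            // Find the closest pair of clusters.
            for (int i = 0; i < clusters.size(); i++)
                for (int j = i + 1; j < clusters.size(); j++) {
                    double d = minDist(clusters.get(i), clusters.get(j));
                    if (d < best) { best = d; bi = i; bj = j; }
                }
            clusters.get(bi).addAll(clusters.remove(bj)); // merge the pair, one fusion per step
        }
        return clusters;
    }

    // Single linkage: minimum pairwise distance between members of two clusters.
    static double minDist(List<Double> a, List<Double> b) {
        double d = Double.MAX_VALUE;
        for (double x : a) for (double y : b) d = Math.min(d, Math.abs(x - y));
        return d;
    }

    public static void main(String[] args) {
        System.out.println(cluster(new double[]{1.0, 1.2, 5.0, 5.1, 9.0}, 2));
        // [[1.0, 1.2, 5.0, 5.1], [9.0]]
    }
}
```

A divisive method would run the same hierarchy in the opposite direction, starting from one cluster of all n points and splitting recursively.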
CHAPTER 2
PROBLEM DEFINITION:
The workload size of a sub-application is the quantity that indicates the number of
computation tasks or the volume of data to be processed. Scheduling an appropriate set of
VMs to serve customer requests is a challenging problem, typically termed resource
provisioning in the cloud. The computational complexity of finding the solution is
much lower than that of a typical combinatorial optimization problem.
CHAPTER 3
SYSTEM DESCRIPTION
OBJECTIVE:
The objective of this project is to allocate the chunks of files as uniformly as
possible among the nodes, such that no node manages an excessive number of chunks, and
also to reduce network traffic and maximize the network bandwidth available to normal
applications. Further goals are to estimate the accuracy of the optimization, to increase
cloud service performance, and to balance the workload allocation in the cloud.
EXISTING SYSTEM:
The files in a cloud can be arbitrarily created, deleted, and appended, and nodes can
be upgraded, replaced, and added in the file system; as a result, the file chunks are not
distributed as uniformly as possible among the nodes.
Distributed file systems in clouds rely on central nodes to manage the metadata
information of the file systems and to balance the loads of the storage nodes based on that
metadata.
When the number of storage nodes, the number of files, and the number of accesses to
files increase linearly, the central nodes become a performance bottleneck, as they
are unable to accommodate a large number of file accesses from client applications.
Relying on the central nodes to tackle the load-imbalance problem exacerbates their
heavy loads.
PROPOSED SYSTEM:
Jobs are classified into three types: advance reservation, immediate, and best
effort, where advance-reservation and immediate jobs can preempt the best-effort jobs
but are themselves not preemptible.
The best-effort jobs are backfilled and maintained by the Control Management
System (CMS) to be scheduled when the resources are free.
Although the computing resources comprise cores, memory, and bandwidth, we
consider cores and memory as the resource capacity, under the assumption that the
bandwidth is more or less the same in a private cloud.
In a cloud, the end users' service requests are considered as jobs, and each job is
assigned to a virtual machine (VM). In the proposed model, the hosts are assumed to be
homogeneous physical machines (or servers) that contain the computational power on which
the VMs are deployed. Since the proposed algorithm focuses on the available cores and
memory, adapting the proposed model to a heterogeneous environment is a straightforward
task. The architecture of the proposed cloud model is shown in Figure 1, and the
notations used in the system model are described in the Notations section. The proposed model
pivots around a central mechanism named the CMS (Control Management System); a
datacenter that consists of m homogeneous hosts (servers) is interconnected with the CMS
and there may be a total of
ADVANTAGES:
• High speed in customer service.
• Increased cloud service performance.
• Low cost.
• Accurate performance estimation.
• A cloud service based upon reserved VMs typically runs for long terms, i.e.,
several years.
• Addresses the workload-allocation problem based upon the multi-tenancy principle in the cloud.
SYSTEM REQUIREMENTS:
Software requirements
Hardware requirements
Processor : Pentium dual core
RAM : 1 GB
Hard Disk Drive : 80 GB
Monitor : 17” Colour Monitor
SYSTEM ARCHITECTURE:
[Figure: system architecture - the CMS, comprising an information database, a service
dispatcher, a scheduler, and a resource-monitoring component, schedules jobs onto, and
receives updates from, the datacenter hosts/servers (Host 1, Host 2, Host 3).]
CHAPTER 4
MODULE DESCRIPTION
DB_w = b'_i <= b_i        (2)

L_s^fin(α) = L_s(α)       (3)
The system considers situations in which resource-intensive tasks are still
running on their allocated virtual machines while other VMs are waiting for tasks at
the same time. Therefore, the tasks must be redistributed among the free VMs.
However, we must first consider how and when to preempt tasks.
Given the ranking of the tasks, they are allocated to the VMs using a
bipartite graph, but the issue is now one of time.
Suppose task P1 on VM1 has completed at time t1, or VMn-1 has completed at time
Σ Pn-1. However, one VM is still running high-priority tasks, and its
processing time is also greater.
Therefore, we preempt the task per the following methodology. Before
preempting, we should check the status of the VM (i.e., whether it is free
or busy).
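The preemption rule described above, where advance-reservation and immediate jobs may displace best-effort jobs and a free VM needs no preemption at all, can be sketched as follows. The job types and the VM model are simplified assumptions for illustration.

```java
public class Preemption {
    // Simplified job classification from the proposed model: advance-reservation
    // and immediate jobs may preempt best-effort jobs; best effort never preempts.
    enum Type { ADVANCE_RESERVATION, IMMEDIATE, BEST_EFFORT }

    // Minimal hypothetical VM state: whether it is busy and what job type it runs.
    static class Vm {
        boolean busy;
        Type runningType;
    }

    // Decide whether an incoming job may take over this VM right now.
    static boolean canPreempt(Vm vm, Type incoming) {
        if (!vm.busy) return true;                       // free VM: no preemption needed
        if (incoming == Type.BEST_EFFORT) return false;  // best effort never preempts
        return vm.runningType == Type.BEST_EFFORT;       // only best-effort jobs are preemptible
    }
}
```

A preempted best-effort job would then be handed back to the CMS for backfilling, to be rescheduled once resources are free again.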
CHAPTER 6
TESTING
Functional tests provide systematic demonstrations that functions tested are available
as specified by the business and technical requirements, system documentation and user
manuals.
System testing ensures that the entire integrated software system meets requirements. It
tests a configuration to ensure known and predictable results. An example of system testing is
the configuration oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and integration points.
Black Box Testing is testing the software without any knowledge of the inner
workings, structure, or language of the module being tested. Black box tests, like most other
kinds of tests, must be written from a definitive source document, such as a specification or
requirements document. It is a type of testing in which the software under test is treated as a
black box: you cannot "see" into it. The test provides inputs and responds to outputs without
considering how the software works.
Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
Features to be tested
Integration Testing:
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.
The task of the integration test is to check that components or software applications,
e.g., components in a software system or, one step up, software applications at the company
level, interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing:
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
CHAPTER 7
SYSTEM ORGANIZATION
SOFTWARE AND TECHNOLOGIES DESCRIPTION
The Java Server Pages technology enables you to generate dynamic web content, such as HTML,
DHTML, XHTML, and XML files, to include in a Web application. JSP files are one way to
implement server-side dynamic page content. JSP files allow a Web server, such as Apache
Tomcat, to add content dynamically to your HTML pages before they are sent to a requesting
browser.
When you deploy a JSP file to a Web server that provides a servlet engine, it is preprocessed into
a servlet that runs on the Web server. This is in contrast with client-side JavaScript™
(within <SCRIPT> tags), which is run in a browser. A JSP page is ideal for tasks that are
better suited to execution on the server, such as accessing databases or calling Enterprise
Java™ beans.
You can create and edit a JSP file in the HTML editor by adding your own text and images using
HTML, JSP tagging, or JavaScript, including Java source code inside scriptlet tags.
Typically, JSP files have the file extension .jsp. Additionally, the JSP specification suggests
that JSP fragment files should have the file extension .jspf. If this convention is not followed,
the JSP validator will treat JSP fragments as regular standalone JSP files, and compilation
errors might be reported.
The Sun Microsystems JSP 1.2 Specification provides the ability to create custom JSP tags.
Custom tags simplify complex actions and provide developers with greater control over page
content. Custom tags are collected into a library (taglib). A tag library descriptor file
(taglib.tld) is an XML document that provides information about the tag library, including the
taglib short name, library description, and tag descriptions. To use JSP 1.2 custom taglibs,
you can import the tag library .tld and .jar files into your project, or associate them as
Web Library projects. You can also reference a TLD file by using a URI.
Java Server Page (JSP) is a technology for controlling the content or appearance of Web
pages through the use of servlets, small programs that are specified in the Web page and run
on the Web server to modify the Web page before it is sent to the user who requested it. Sun
Microsystems, the developer of Java, also refers to the JSP technology as the Servlet
application program interface (API). JSP is comparable to Microsoft's Active Server Page
(ASP) technology. Whereas a Java Server Page calls a Java program that is executed by the
Web server, an Active Server Page contains a script that is interpreted by a script interpreter
(such as VBScript or JScript) before the page is sent to the user.
An HTML page that contains a link to a Java servlet is sometimes given the file name suffix
of .JSP.
Overview of JSP:
Java Server Pages (JSP) is a technology for developing web pages that support
dynamic content. It helps developers insert Java code in HTML pages by making
use of special JSP tags, most of which start with <% and end with %>.
A Java Server Pages component is a type of Java servlet that is designed to fulfill the
role of a user interface for a Java web application. Web developers write JSPs as text
files that combine HTML or XHTML code, XML elements, and embedded JSP
actions and commands.
Using JSP, we can collect input from users through web page forms, present records
from a database or another source, and create web pages dynamically.
JSP tags can be used for a variety of purposes, such as retrieving information from a
database or registering user preferences, accessing JavaBeans components, passing
control between pages and sharing information between requests, pages etc.
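Putting these pieces together, a minimal JSP page mixing static HTML with a scriptlet and expression tags might look like the following sketch (the file name, parameter, and variable are illustrative):

```jsp
<%-- hello.jsp: a hypothetical page combining HTML with embedded Java --%>
<html>
  <body>
    <% String visitor = request.getParameter("name"); %>
    <h1>Hello, <%= (visitor != null) ? visitor : "guest" %>!</h1>
    <p>Server time: <%= new java.util.Date() %></p>
  </body>
</html>
```

Here request is one of the implicit objects the container makes available to every JSP page; the scriptlet reads a form parameter and the expressions write dynamic values into the response.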
Applications of JSP:
Java Server Pages often serve the same purpose as programs implemented using the Common
Gateway Interface (CGI), but JSP offers several advantages over CGI.
JSP pages are always compiled before they are processed by the server, unlike CGI/Perl, which
requires the server to load an interpreter and the target script each time the page is requested.
Java Server Pages are built on top of the Java Servlets API, so like Servlets, JSP also
has access to all the powerful Enterprise Java APIs, including JDBC, JNDI, EJB, JAXP etc.
JSP pages can be used in combination with servlets that handle the business logic, the
model supported by Java servlet template engines.
Finally, JSP is an integral part of Java EE, a complete platform for enterprise class
applications. This means that JSP can play a part in the simplest applications to the most
complex and demanding.
Advantages of JSP:
Following is the list of other advantages of using JSP over other technologies:
vs. Active Server Pages (ASP): The advantages of JSP are twofold. First, the
dynamic part is written in Java, not Visual Basic or other MS specific language, so it is more
powerful and easier to use. Second, it is portable to other operating systems and non-
Microsoft Web servers.
vs. Pure Servlets: It is more convenient to write (and to modify!) regular HTML than
to have plenty of println statements that generate the HTML.
vs. Server-Side Includes (SSI): SSI is really only intended for simple inclusions, not
for "real" programs that use form data, make database connections, and the like.
vs. JavaScript: JavaScript can generate HTML dynamically on the client, but it cannot
easily interact with the web server to perform complex tasks such as database access and image
processing.
vs. Static HTML: Regular HTML, of course, cannot contain dynamic information.
JSP Architecture:
The web server needs a JSP engine, i.e., a container, to process JSP pages. The JSP container is
responsible for intercepting requests for JSP pages. This tutorial makes use of Apache Tomcat, which
has a built-in JSP container to support JSP page development.
A JSP container works with the Web server to provide the runtime environment and other
services a JSP needs. It knows how to interpret the special elements that are part of JSPs.
The following diagram shows the position of the JSP container and JSP files in a Web application.
JSP Processing:
As with a normal page, the browser sends an HTTP request to the web server.
The web server recognizes that the HTTP request is for a JSP page and forwards it to
a JSP engine. It does this by means of the URL, which ends with .jsp instead of
.html.
The JSP engine loads the JSP page from disk and converts it into servlet source code.
This conversion is straightforward: all template text is converted to println( ) statements,
and all JSP elements are converted to Java code that implements the corresponding dynamic
behavior of the page.
The JSP engine compiles the servlet into an executable class and forwards the original
request to a servlet engine.
A part of the web server called the servlet engine loads the servlet class and executes
it. During execution, the servlet produces output in HTML format, which the servlet
engine passes to the web server inside an HTTP response.
The web server forwards the HTTP response to your browser as static HTML
content.
Finally, the web browser handles the dynamically generated HTML page inside the HTTP
response exactly as if it were a static page.
Typically, the JSP engine checks to see whether a servlet for a JSP file already exists
and whether the modification date on the JSP is older than the servlet. If the JSP is older than
its generated servlet, the JSP container assumes that the JSP hasn't changed and that the
generated servlet still matches the JSP's contents. This makes the process more efficient than
with other scripting languages (such as PHP) and therefore faster. So in a way, a JSP page is
really just another way to write a servlet without having to be a Java programming wiz.
Except for the translation phase, a JSP page is handled exactly like a regular servlet.
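The translation step described above can be sketched in plain Java. The class below is a hypothetical stand-in for a container-generated servlet (the real servlet API is left out so the sketch runs standalone): template text becomes a print statement, and a JSP expression such as <%= name %> becomes ordinary Java code spliced into it.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Sketch of the code a JSP engine might generate for a page such as:
//   <h1>Hello, <%= name %>!</h1>
public class GeneratedPageSketch {

    // In a real container this would be the servlet's service method,
    // writing to the HTTP response; here it writes to any PrintWriter.
    public static void render(PrintWriter out, String name) {
        out.println("<h1>Hello, " + name + "!</h1>"); // template text + expression
    }

    public static void main(String[] args) {
        StringWriter buffer = new StringWriter();
        render(new PrintWriter(buffer, true), "world");
        System.out.print(buffer); // the HTML the "servlet" produced
    }
}
```

The container compiles code like this once, then reuses the resulting class for every request until the JSP source changes, which is why the recompilation check described next matters.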
URL:
The Web is a loose collection of higher-level protocols and file formats, all unified in the
web browser. One of the most important aspects of the Web is that Tim Berners-Lee devised
a scalable way to locate all of the resources of the Net. The Uniform Resource Locator (URL)
is used to name anything and everything on the Web reliably.
Format:
A URL specification is based on four components. The first is the protocol to use,
separated from the rest of the locator by a colon (:). Common protocols are http, ftp, gopher,
and file, although these days almost everything is done via HTTP. The second
component is the host name or IP address of the host to use; this is delimited on the left by
double slashes (//) and on the right by a slash (/) or optionally a colon (:). The third
component, the port number, is an optional parameter, delimited on the left from the host
name by a colon (:) and on the right by a slash (/). The fourth part is the actual file path. Most
HTTP servers will append a file named index.html or index.htm to URLs that refer directly to
a directory resource.
Java’s URL class has several constructors, and each can throw a
MalformedURLException. One commonly used form specifies the URL with a string that is
identical to what is displayed in a browser:
URL(String urlSpecifier)
The next form of the constructor breaks the URL up into its component parts:
URL(String protocolName, String hostName, int port, String path)
Another frequently used constructor uses an existing URL as a reference context and
then creates a new URL from that context.
The openConnection( ) method returns a URLConnection object associated with the invoking
URL object. It may throw an IOException.
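As a small illustration of the component breakdown above (the address itself is arbitrary), java.net.URL parses a specifier string into protocol, host, port, and path, and the multi-argument constructor rebuilds the same URL from those parts:

```java
import java.net.MalformedURLException;
import java.net.URL;

public class UrlParts {
    public static void main(String[] args) throws MalformedURLException {
        // Commonly used constructor: the full specifier as one string
        URL url = new URL("http://www.example.com:8080/docs/index.html");

        System.out.println(url.getProtocol()); // http
        System.out.println(url.getHost());     // www.example.com
        System.out.println(url.getPort());     // 8080
        System.out.println(url.getPath());     // /docs/index.html

        // The multi-argument form builds the same URL from its parts
        URL same = new URL("http", "www.example.com", 8080, "/docs/index.html");
        System.out.println(same.toString().equals(url.toString())); // true
    }
}
```

Both constructors throw the checked MalformedURLException, which is why main declares it.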
2. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In an effort
to support a wide variety of vendors, JDBC will allow any query statement to be passed
through it to the underlying database driver. This allows the connectivity module to handle
non-standard functionality in a manner that is suitable for its users.
5. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no exception.
Sun felt that the design of JDBC should be very simple, allowing for only one method of
completing a task per mechanism. Allowing duplicate functionality only serves to confuse the
users of the API.
6. Use strong, static typing wherever possible
Strong typing allows more error checking to be done at compile time; as a result, fewer errors
appear at runtime.
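As a small illustration of this goal (the class and method here are illustrative, not part of any JDBC API), Java's static typing rejects ill-typed calls before the program ever runs:

```java
import java.util.ArrayList;
import java.util.List;

public class StaticTypingDemo {
    // The parameter type is checked at compile time: callers cannot
    // pass anything but a List of Strings.
    static int totalLength(List<String> words) {
        int sum = 0;
        for (String w : words) {
            sum += w.length(); // safe: every element is known to be a String
        }
        return sum;
    }

    public static void main(String[] args) {
        List<String> words = new ArrayList<>();
        words.add("compile");
        words.add("time");
        // words.add(42);   // rejected by the compiler, not discovered at runtime
        System.out.println(totalLength(words)); // 11
    }
}
```

A dynamically typed script would only discover the bad element when the failing line executed; here the compiler refuses the program outright.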
Tool Description:
ECLIPSE:
In computer programming, Eclipse is an integrated development environment (IDE).
It contains a base workspace and an extensible plug-in system for customizing the environment.
Written mostly in Java, Eclipse can be used to develop applications. By means of various
plug-ins, Eclipse may also be used to develop applications in other programming languages: Ada,
ABAP, C, C++, COBOL, Fortran, Haskell, JavaScript, Lasso, Lua, Natural, Perl, PHP,
Prolog, Python, R, Ruby (including the Ruby on Rails framework), Scala, Clojure, Groovy,
Scheme, and Erlang. It can also be used to develop packages for the software Mathematica.
Development environments include the Eclipse Java development tools (JDT) for Java and
Scala, Eclipse CDT for C/C++, and Eclipse PDT for PHP, among others. The
initial codebase originated from IBM VisualAge. The Eclipse software development kit (SDK), which
includes the Java development tools, is meant for Java developers. Users can extend its
abilities by installing plug-ins written for the Eclipse Platform, such as development toolkits
for other programming languages, and can write and contribute their own plug-in modules.
Released under the terms of the Eclipse Public License, Eclipse SDK is free (although it is
incompatible with the GNU General Public License). It was one of the first IDEs to run under
GNU Classpath, and it runs without problems under IcedTea.
Release:
As of 2014, each Simultaneous Release has occurred on the fourth Wednesday of June.

Version Name   Date                     Platform Version   Main Changes
Callisto       30 June 2006             3.2
Europa         29 June 2007             3.3
Ganymede       25 June 2008             3.4
Galileo        24 June 2009             3.5
Helios         23 June 2010             3.6
Indigo         22 June 2011             3.7
Kepler         26 June 2013             4.3
Luna           25 June 2014             4.4                Integrated Java 8 support (in the previous version this was possible via a "Java 8 patch" plugin)
Mars           24 June 2015 (planned)   4.5
CONCLUSION
We proposed a heuristic algorithm that performs task scheduling and
allocates resources efficiently in cloud computing environments. We use the
real CyberShake and Epigenomics scientific workflows as input tasks for the
system. When we compare our proposed heuristic approach with the
existing BATS and IDEA frameworks with respect to turnaround time and
response time, we find that our approach gives improved results. From the
viewpoint of resource utilization, the proposed heuristic approach
efficiently allocates resources with high utility. We obtained the maximum
utilization result for computing resources such as CPU, memory, and
bandwidth. Most existing systems consider only two resources, CPU and
memory, in evaluating their performance; the proposed system adds
bandwidth as a resource. Future work will focus on more effective
scheduling algorithms in which turnaround time and response time will be
improved.
REFERENCES
[1] C.-W. Tsai, W.-C. Huang, M.-H. Chiang, M.-C. Chiang, and C.-S. Yang, “A
hyper-heuristic scheduling algorithm for cloud,” IEEE Transactions on Cloud
Computing, vol. 2, no. 2, pp. 236–250, 2014.
[2] Y. Wang and W. Shi, “Budget-driven scheduling algorithms for batches of
MapReduce jobs in heterogeneous clouds,” IEEE Transactions on Cloud
Computing, vol. 2, no. 3, pp. 306–319, 2014.
[3] N. G. Duffield, P. Goyal, A. Greenberg, P. Mishra, K. Ramakrishnan, and J.
E. Van der Merwe, “Resource management with hoses: point-to-cloud services
for virtual private networks,” IEEE/ACM Transactions on Networking, vol. 10,
no. 5, pp. 679–692, 2002.
[4] S. Yu, C. Wang, K. Ren, and W. Lou, “Achieving secure, scalable, and
fine-grained data access control in cloud computing,” in INFOCOM, 2010
Proceedings IEEE. IEEE, 2010, pp. 1–9.
[5] Z. Xiao, W. Song, and Q. Chen, “Dynamic resource allocation using virtual
machines for cloud computing environment,” IEEE Transactions on Parallel and
Distributed Systems, vol. 24, no. 6, pp. 1107–1117, 2013.
[6] B. Guan, J. Wu, Y. Wang, and S. U. Khan, “CIVSched: a
communication-aware inter-VM scheduling technique for decreased network
latency between co-located VMs,” IEEE Transactions on Cloud Computing,
vol. 2, no. 3, pp. 320–332, 2014.
[7] Q. Zhang, M. F. Zhani, R. Boutaba, and J. L. Hellerstein, “Dynamic
heterogeneity-aware resource provisioning in the cloud,” IEEE Transactions on
Cloud Computing, vol. 2, no. 1, pp. 14–28, 2014.
[8] A. J. Younge, G. von Laszewski, L. Wang, S. Lopez-Alarcon, and W.
Carithers, “Efficient resource management for cloud computing environments,”
in Green Computing Conference, 2010 International. IEEE, 2010, pp. 357–364.
[9] M. Polverini, A. Cianfrani, S. Ren, and A. V. Vasilakos, “Thermal-aware
scheduling of batch jobs in geographically distributed data centers,” IEEE
Transactions on Cloud Computing, vol. 2, no. 1, pp. 71–84, 2014.
[10] K. Al Nuaimi, N. Mohamed, M. Al Nuaimi, and J. Al-Jaroodi, “A survey
of load balancing in cloud computing: challenges and algorithms,” in Network
Cloud Computing and Applications (NCCA), 2012 Second Symposium on.
IEEE, 2012, pp. 137–142.