ABSTRACT
With the advent of cloud computing, data owners are motivated to outsource their complex
data management systems from local sites to the commercial public cloud for great flexibility and
economic savings. For protecting data privacy, however, sensitive data has to be encrypted before
outsourcing, which obsoletes traditional data utilization based on plaintext keyword search.
Thus, enabling an encrypted cloud data search service is of paramount importance. Considering
the large number of data users and documents in cloud, it is crucial for the search service to
allow multi-keyword query and provide result similarity ranking to meet the effective data
retrieval need. Related works on searchable encryption focus on single keyword search or
Boolean keyword search, and rarely differentiate the search results. We define and solve the
challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud
data (MRSE), and establish a set of strict privacy requirements for such a secure cloud data
utilization system to become a reality. Among various multi-keyword semantics, we choose the
efficient principle of “coordinate matching”, i.e., as many matches as possible, to capture the
similarity between the search query and data documents, and further use “inner product similarity”
to quantitatively formalize this principle for similarity measurement. We first propose a basic
MRSE scheme using secure inner product computation, and then significantly improve it to meet
different privacy requirements in two levels of threat models. Thorough analysis investigating
the privacy and efficiency guarantees of the proposed schemes is given, and experiments on a real-
world dataset further show that the proposed schemes indeed introduce low overhead on computation
and communication.
CHAPTER 1
INTRODUCTION
Cloud computing is a type of computing that relies on sharing computing resources rather
than having local servers or personal devices to handle applications.
In cloud computing, the word cloud (also phrased as "the cloud") is used as a metaphor
for "the Internet," so the phrase cloud computing means "a type of Internet-based computing,"
where different services -- such as servers, storage and applications -- are delivered to an
organization's computers and devices through the Internet.
Cloud computing is an on-demand service that is obtaining mass appeal in corporate data
centers. The cloud enables the data center to operate like the Internet and computing resources to
be accessed and shared as virtual resources in a secure and scalable manner. Like most
technologies, trends start in the enterprise and shift to adoption by small business owners.
In its simplest description, cloud computing is taking services ("cloud services") and
moving them outside an organization's firewall onto shared systems. Applications and services are
accessed via the Web, instead of your hard drive. In cloud computing, the services are delivered
and used over the Internet and are paid for by the cloud customer (your business) -- typically on an
"as-needed, pay-per-use" business model. The cloud infrastructure is maintained by the cloud
provider, not the individual cloud customer.
Cloud computing networks are large groups of servers and cloud service providers that
usually take advantage of low-cost computing technology, with specialized connections to spread
data-processing chores across them. This shared IT infrastructure contains large pools of systems
that are linked together. Virtualization techniques are often used to maximize the power of cloud
computing. Currently, the standards for connecting the computer systems and the software
needed to make cloud computing work are not fully defined, leaving many
companies to define their own cloud computing technologies.
1.2 HISTORY
The origin of the term cloud computing is unclear. The expression cloud is commonly
used in science to describe a large agglomeration of objects that visually appear from a distance
as a cloud and describes any set of things whose details are not inspected further in a given
context.
In analogy to above usage the word cloud was used as a metaphor for the Internet and a
standardized cloud-like shape was used to denote a network on telephony schematics and later to
depict the Internet in computer network diagrams. With this simplification, the implication is that
the specifics of how the end points of a network are connected are not relevant for the purposes
of understanding the diagram. The cloud symbol was used to represent the Internet as early as
1994, in which servers were shown connected to, but external to, the cloud. References to
cloud computing in its modern sense appeared as early as 1996, with the earliest known mention in a
Compaq internal document. The popularization of the term can be traced to 2006 when
Amazon.com introduced the Elastic Compute Cloud.
The 1950s
The underlying concept of cloud computing dates to the 1950s, when large-scale
mainframe computers were seen as the future of computing, and became available in academia
and corporations, accessible via thin clients/terminal computers, often referred to as "static
terminals", because they were used for communications but had no internal processing
capacities. To make more efficient use of costly mainframes, a practice evolved that allowed
multiple users to share both the physical access to the computer from multiple terminals as well
as the CPU time. This eliminated periods of inactivity on the mainframe and allowed for a
greater return on the investment. The practice of sharing CPU time on a mainframe became
known in the industry as time-sharing. During the mid-1970s, time-sharing was popularly known as
RJE (Remote Job Entry); this nomenclature was mostly associated with large vendors such as
IBM and DEC.
The 1990s
In the 1990s, telecommunications companies, which had previously offered primarily dedicated
point-to-point data circuits, began offering virtual private network (VPN) services with
comparable quality of service at lower cost. They began to use the cloud symbol to denote the
demarcation point between what the provider was responsible for and what users were responsible
for. Cloud computing extends this boundary to cover all servers as well as the network
infrastructure.
As computers became more prevalent, scientists and technologists explored ways to make
large-scale computing power available to more users through time-sharing. They experimented
with algorithms to optimize the infrastructure, platform, and applications to prioritize CPUs and
increase efficiency for end users.
Since 2000
In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform
for deploying private clouds. In early 2008, OpenNebula, enhanced in the RESERVOIR
European Commission-funded project, became the first open-source software for deploying
private and hybrid clouds, and for the federation of clouds. In the same year, efforts were focused
on providing quality of service guarantees (as required by real-time interactive applications) to
cloud-based infrastructures, in the framework of the IRMOS European Commission-funded
project, resulting in a real-time cloud environment. By mid-2008, Gartner saw an opportunity for
cloud computing "to shape the relationship among consumers of IT services, those who use IT
services and those who sell them" and observed that "organizations are switching from company-
owned hardware and software assets to per-use service-based models" so that the "projected shift
to computing ... will result in dramatic growth in IT products in some areas and significant
reductions in other areas."
In July 2010, Rackspace Hosting and NASA jointly launched an open-source cloud-
software initiative known as OpenStack. The OpenStack project intended to help organizations
offer cloud-computing services running on standard hardware. The early code came from
NASA's Nebula platform as well as from Rackspace's Cloud Files platform.
Cloud computing systems offered by companies, like IBM's "Blue Cloud" technologies for
example, are based on open standards and open source software which link together computers
that are used to deliver Web 2.0 capabilities like mash-ups or mobile commerce.
Cloud computing has started to obtain mass appeal in corporate data centers as it enables
the data center to operate like the Internet through the process of enabling computing resources
to be accessed and shared as virtual resources in a secure and scalable manner. For a small and
medium size business (SMB), the benefits of cloud computing is currently driving adoption. In
the SMB sector there is often a lack of time and financial resources to purchase, deploy and
maintain an infrastructure (e.g. the software, server and storage).
With cloud computing, small businesses can access these resources and expand or shrink
services as business needs change. The common pay-as-you-go subscription model is designed to
let SMBs easily add or remove services, and they typically pay only for what they use.
A private cloud is designed to offer the same features and benefits of public cloud
systems, but removes a number of objections to the cloud computing model including control
over corporate and customer data, worries about security and issues connected to regulatory
compliance.
Application programming interface (API) accessibility to software enables machines to
interact with cloud software in the same way that a traditional user interface (e.g., a
computer desktop) facilitates interaction between humans and computers. Cloud
computing systems typically use Representational State Transfer (REST)-based APIs.
Virtualization technology allows sharing of servers and storage devices and increased
utilization. Applications can be easily migrated from one physical server to another.
Multitenancy enables sharing of resources and costs across a large pool of users thus
allowing for:
o peak-load capacity increases (users need not engineer for highest possible load-
levels)
o utilization and efficiency improvements for systems that are often only 10–20%
utilized.
Reliability improves with the use of multiple redundant sites, which makes well-designed
cloud computing suitable for business continuity and disaster recovery.
The National Institute of Standards and Technology's definition of cloud computing identifies
"five essential characteristics":
On-demand self-service: A consumer can unilaterally provision computing capabilities, such as
server time and network storage, as needed automatically without requiring human interaction
with each service provider.
Broad network access: Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g.,
mobile phones, tablets, laptops, and workstations).
Resource pooling: The provider's computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources dynamically assigned
and reassigned according to consumer demand.
Rapid elasticity: Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the
consumer, the capabilities available for provisioning often appear unlimited and can be
appropriated in any quantity at any time.
Measured service: Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of service
(e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be
monitored, controlled, and reported, providing transparency for both the provider and consumer
of the utilized service.
On-demand self-service
On-demand self-service allows users to obtain, configure and deploy cloud services themselves
using cloud service catalogues, without requiring the assistance of IT. This feature is listed by the
National Institute of Standards and Technology (NIST) as a characteristic of cloud computing.
Cloud computing consumers use cloud templates to move applications between clouds
through a self-service portal. The predefined blueprints define all that an application requires to
run in different environments. For example, a template could define how the same application
could be deployed in cloud platforms based on Amazon Web Service, VMware or Red Hat. The
user organization benefits from cloud templates because the technical aspects of cloud
configurations reside in the templates, letting users deploy cloud services with the push of a
button. Developers can use cloud templates to create a catalogue of cloud services.
Cloud computing providers offer their services according to several fundamental models:
infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS)
where IaaS is the most basic and each higher model abstracts from the details of the lower
models. Other key components in anything as a service (XaaS) are described in a comprehensive
taxonomy model published in 2009, such as Strategy-as-a-Service, Collaboration-as-a-Service,
Business Process-as-a-Service, Database-as-a-Service, etc. In 2012, network as a service (NaaS)
and communication as a service (CaaS) were officially included by ITU (International
Telecommunication Union) as part of the basic cloud computing models, recognized service
categories of a telecommunication-centric cloud ecosystem.
In the most basic cloud-service model, providers of IaaS offer computers – physical or
(more often) virtual machines – and other resources. (A hypervisor, such as Hyper-V or Xen or
KVM or VMware ESX/ESXi, runs the virtual machines as guests. Pools of hypervisors within
the cloud operational support-system can support large numbers of virtual machines and the
ability to scale services up and down according to customers' varying requirements.) IaaS clouds
often offer additional resources such as a virtual-machine disk image library, raw (block) and
file-based storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs),
and software bundles. IaaS-cloud providers supply these resources on demand from their large
pools installed in data centers. For wide-area connectivity, customers can use either the Internet
or carrier clouds (dedicated virtual private networks).
To deploy their applications, cloud users install operating-system images and their
application software on the cloud infrastructure. In this model, the cloud user patches and
maintains the operating systems and the application software. Cloud providers typically bill IaaS
services on a utility computing basis: cost reflects the amount of resources allocated and
consumed.
Cloud communications and cloud telephony, rather than replacing local computing
infrastructure, replace local telecommunications infrastructure with Voice over IP and other off-
site Internet services.
In the business model using software as a service (SaaS), users are provided access to
application software and databases. Cloud providers manage the infrastructure and platforms that
run the applications. SaaS is sometimes referred to as "on-demand software" and is usually
priced on a pay-per-use basis. SaaS providers generally price applications using a subscription
fee.
In the SaaS model, cloud providers install and operate application software in the cloud
and cloud users access the software from cloud clients. Cloud users do not manage the cloud
infrastructure and platform where the application runs. This eliminates the need to install and run
the application on the cloud user's own computers, which simplifies maintenance and support.
Cloud applications are different from other applications in their scalability—which can be
achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work
demand. Load balancers distribute the work over the set of virtual machines. This process is
transparent to the cloud user, who sees only a single access point. To accommodate a large
number of cloud users, cloud applications can be multitenant, that is, any machine serves more
than one cloud user organization. It is common to refer to special types of cloud based
application software with a similar naming convention: desktop as a service, business process as
a service, test environment as a service, communication as a service.
Network as a service
A category of cloud services in which the capability provided to the cloud service user is to
use network/transport connectivity services and/or inter-cloud network connectivity services.
NaaS involves the optimization of resource allocations by considering network and computing
resources as a unified whole. Traditional NaaS services include flexible and extended VPN, and
bandwidth on demand. NaaS concept materialization also includes the provision of a virtual
network service by the owners of the network infrastructure to a third party (VNP – VNO).
Private cloud
Private cloud is cloud infrastructure operated solely for a single organization, whether
managed internally or by a third party and hosted internally or externally. Undertaking a private
cloud project requires a significant degree of engagement to virtualize the business
environment, and requires the organization to re-evaluate decisions about existing resources.
When done right, it can improve business.
Public cloud
A cloud is called a "public cloud" when the services are rendered over a network that is
open for public use. Technically there may be little or no difference between public and private
cloud architecture, however, security consideration may be substantially different for services
(applications, storage, and other resources) that are made available by a service provider for a
public audience and when communication is effected over a non-trusted network. Generally,
public cloud service providers like Amazon AWS, Microsoft and Google own and operate the
infrastructure and offer access only via the Internet (direct connectivity is not offered).
Community cloud
Community cloud shares infrastructure between several organizations from a specific
community with common concerns (security, compliance, jurisdiction, etc.), whether managed
internally or by a third party and hosted internally or externally. The costs are spread over
fewer users than a public cloud (but more than a private cloud), so only some of the cost-saving
potential of cloud computing is realized.
Hybrid cloud
Hybrid cloud is a composition of two or more clouds (private, community or public) that
remain unique entities but are bound together, offering the benefits of multiple deployment
models. Gartner, Inc. defines a hybrid cloud service as a cloud computing service that is
composed of some combination of private, public and community cloud services, from different
service providers. A hybrid cloud service crosses isolation and provider boundaries so that it
can’t be simply put in one category of private, public, or community cloud service. It allows one
to extend either the capacity or the capability of a cloud service, by aggregation, integration or
customization with another cloud service.
Varied use cases for hybrid cloud composition exist. For example, an organization may
store sensitive client data in house on a private cloud application, but interconnect that
application to a billing application provided on a public cloud as a software service. This
example of hybrid cloud extends the capabilities of the enterprise to deliver a specific business
service through the addition of externally available public cloud services.
Another example of hybrid cloud is one where IT organizations use public cloud
computing resources to meet temporary capacity needs that cannot be met by the private cloud.
This capability enables hybrid clouds to employ cloud bursting for scaling across clouds.
Cloud bursting enables data centres to create an in-house IT infrastructure that supports
average workloads, and use cloud resources from public or private clouds during spikes in
processing demands. By utilizing "hybrid cloud" architecture, companies and individuals are able
to obtain degrees of fault tolerance combined with locally immediate usability without
dependency on internet connectivity. Hybrid cloud architecture requires both on-premises
resources and off-site (remote) server-based cloud infrastructure.
1.5 PROJECT OVERVIEW
Cloud computing is the long dreamed vision of computing as a utility, where cloud
customers can remotely store their data into the cloud so as to enjoy the on-demand high quality
applications and services from a shared pool of configurable computing resources [1]. Its great
flexibility and economic savings are motivating both individuals and enterprises to outsource
their local complex data management system into the cloud. To protect data privacy and combat
unsolicited accesses in the cloud and beyond, sensitive data, e.g., emails, personal health records,
photo albums, tax documents, financial transactions, etc., may have to be encrypted by data
owners before outsourcing to the commercial public cloud [2]; this, however, obsoletes the
traditional data utilization service based on plaintext keyword search. The trivial solution of
downloading all the data and decrypting locally is clearly impractical, due to the huge amount of
bandwidth cost in cloud scale systems. Moreover, aside from eliminating the local storage
management, storing data into the cloud serves no purpose unless they can be easily searched
and utilized. Thus, exploring privacy-preserving and effective search service over encrypted
cloud data is of paramount importance. Considering the potentially large number of on-demand
data users and huge amount of outsourced data documents in the cloud, this problem is
particularly challenging as it is extremely difficult to also meet the requirements of performance,
system usability and scalability.

On the one hand, to meet the effective data retrieval need, the
large amount of documents demand the cloud server to perform result relevance ranking, instead
of returning undifferentiated results. Such ranked search system enables data users to find the
most relevant information quickly, rather than burdensomely sorting through every match in the
content collection [3]. Ranked search can also elegantly eliminate unnecessary network traffic by
sending back only the most relevant data, which is highly desirable in the “pay-as-you-use” cloud
paradigm. For privacy protection, such ranking operation, however, should not leak any keyword
related information.

On the other hand, to improve the search result accuracy as well as to
enhance the user searching experience, it is also necessary for such ranking system to support
multiple keywords search, as single keyword search often yields far too coarse results. As a
common practice indicated by today’s web search engines (e.g., Google search), data users may
tend to provide a set of keywords instead of only one as the indicator of their search interest to
retrieve the most relevant data. And each keyword in the search request is able to help narrow
down the search result further. “Coordinate matching” [4], i.e., as many matches as possible, is
an efficient similarity measure among such multi-keyword semantics to refine the result
relevance, and has been widely used in the plaintext information retrieval (IR) community.
However, how to apply it in the encrypted cloud data search system remains a very challenging
task because of inherent security and privacy obstacles, including various strict requirements like
the data privacy, the index privacy, the keyword privacy, and many others.
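The "coordinate matching" principle above is easy to make concrete: represent each document and each query as a binary vector over the keyword dictionary, and rank documents by the inner product of the two vectors, which simply counts matched keywords. The sketch below shows the plaintext version of this idea in Python (the project itself targets .NET; the dictionary, names and data here are hypothetical, and in the real MRSE scheme the vectors would be encrypted so the cloud server computes the scores without learning the keywords).

```python
# Coordinate matching scored by inner product (plaintext illustration).
# A hypothetical keyword dictionary; each document/query becomes a
# binary indicator vector over it.
DICTIONARY = ["cloud", "privacy", "encryption", "search", "ranking"]

def to_vector(keywords):
    """Binary indicator vector over the keyword dictionary."""
    kw = set(keywords)
    return [1 if w in kw else 0 for w in DICTIONARY]

def inner_product(a, b):
    # Counts positions where both vectors are 1, i.e. matched keywords.
    return sum(x * y for x, y in zip(a, b))

def ranked_search(query_keywords, documents, top_k=2):
    """Rank documents by number of matched query keywords."""
    q = to_vector(query_keywords)
    scored = [(inner_product(q, to_vector(doc_kws)), doc_id)
              for doc_id, doc_kws in documents.items()]
    scored.sort(reverse=True)
    return [doc_id for score, doc_id in scored[:top_k]]

documents = {
    "doc1": ["cloud", "privacy", "encryption"],
    "doc2": ["cloud", "ranking"],
    "doc3": ["search"],
}
print(ranked_search(["cloud", "privacy", "search"], documents))
```

Here "doc1" ranks first because it matches two of the three query keywords, illustrating how each extra keyword narrows and reorders the result.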
CHAPTER 2
SYSTEM REQUIREMENTS:
2.3 SOFTWARE ENVIRONMENT
“.NET” is also the collective name given to various software components built
upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET
Server, for instance) and services (like Passport, .NET My Services, and so on).
The CLR is described as the “execution engine” of .NET. It provides the environment within
which programs run. Its most important features are:
Loading and executing programs, with version control and other such features.
The following features of the .NET framework are also worth description:
Managed Code
Managed code is code that targets .NET and contains certain extra
information - “metadata” - to describe itself. Whilst both managed and unmanaged code can run
in the runtime, only managed code contains the information that allows the CLR to guarantee,
for instance, safe execution and interoperability.
Managed Data
With Managed Code comes Managed Data. The CLR provides memory allocation
and deallocation facilities, and garbage collection. Some .NET languages use Managed Data by
default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not.
Targeting CLR can, depending on the language you’re using, impose certain constraints on the
features available. As with managed and unmanaged code, one can have both managed and
unmanaged data in .NET applications - data that doesn’t get garbage collected but instead is
looked after by unmanaged code.
The CLR uses something called the Common Type System (CTS) to strictly enforce
type-safety. This ensures that all classes are compatible with each other, by describing types in a
common way. The CTS defines how types work within the runtime, which enables types in one
language to interoperate with types in another language, including cross-language exception
handling. As well as ensuring that types are only used in appropriate ways, the runtime also
ensures that code doesn’t attempt to access memory that hasn’t been allocated to it.
The CLR provides built-in support for language interoperability. To ensure that you can
develop managed code that can be fully used by developers using any programming language, a
set of language features and rules for using them called the Common Language Specification
(CLS) has been defined. Components that follow these rules and expose only CLS features are
considered CLS-compliant.
The set of classes is pretty comprehensive, providing collections, file, screen, and
network I/O, threading, and so on, as well as XML and database connectivity.
The class library is subdivided into a number of sets (or namespaces), each
providing distinct areas of functionality, with dependencies between the namespaces kept to a
minimum.
The multi-language capability of the .NET Framework and Visual Studio .NET
enables developers to use their existing programming skills to build all types of applications and
XML Web services. The .NET framework supports new versions of Microsoft’s old favorites
Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new
additions to the family.
Visual Basic .NET has been updated to include many new and improved language
features that make it a powerful object-oriented programming language. These features include
inheritance, interfaces, and overloading, among others. Visual Basic also now supports structured
exception handling, custom attributes and also supports multi-threading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant
language can use the classes, objects, and components you create in Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just some of the
enhancements made to the C++ language. Managed Extensions simplify the task of migrating
existing C++ applications to the new .NET Framework.
ActiveState has created Visual Perl and Visual Python, which enable .NET-aware
applications to be built in either Perl or Python. Both products can be integrated into the Visual
Studio .NET environment. Visual Perl includes support for ActiveState's Perl Dev Kit.
Other languages for which .NET-targeted compilers exist include:
FORTRAN
COBOL
Eiffel
Fig 1. .NET Framework
Operating System
C#.NET is also compliant with CLS (Common Language Specification) and supports
structured exception handling. CLS is set of rules and constructs that are supported by the
CLR (Common Language Runtime). CLR is the runtime environment provided by the .NET
Framework; it manages the execution of the code and also makes the development process
easier by providing services.
Constructors are used to initialize objects, whereas destructors are used to destroy them.
In other words, destructors are used to release the resources allocated to the object. In
C#.NET the Finalize procedure is available. The Finalize procedure is used to
complete the tasks that must be performed when an object is destroyed, and it is called
automatically when an object is destroyed. In addition, the Finalize
procedure can be called only from the class it belongs to or from derived classes.
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The .NET Framework monitors
allocated resources, such as objects and variables. In addition, the .NET Framework
automatically releases memory for reuse by destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that are not currently in use by
applications. When the garbage collector comes across an object that is marked for garbage
collection, it releases the memory occupied by the object.
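The reclamation behaviour described above can be demonstrated with a small sketch. The project's environment is C#.NET, but the same idea is easy to observe in Python, whose runtime is also garbage-collected: a weak reference tracks an object without keeping it alive, so once the last strong reference is dropped, the collector reclaims the object and the weak reference goes dead.

```python
# Observing garbage collection via a weak reference (illustrative).
import gc
import weakref

class Document:
    pass

doc = Document()
probe = weakref.ref(doc)   # tracks the object without keeping it alive
assert probe() is doc      # object is still reachable here

del doc                    # drop the last strong reference
gc.collect()               # force a collection pass
print(probe() is None)     # True: the object has been reclaimed
```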
OVERLOADING
MULTITHREADING:
C#.NET also supports multithreading. An application that supports multithreading can handle
multiple tasks simultaneously; we can use multithreading to decrease the time taken by an
application to respond to user interaction.
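As a minimal illustration of why multithreading improves responsiveness, the sketch below (written in Python purely for illustration; the project itself uses C#.NET) runs a slow task on a worker thread so the main thread can carry on immediately instead of blocking.

```python
# Running slow work on a background thread keeps the main thread free.
import threading
import time

results = []

def long_running_task():
    time.sleep(0.2)           # simulate slow work
    results.append("task done")

worker = threading.Thread(target=long_running_task)
worker.start()

# The main thread continues immediately while the worker is busy.
results.append("ui still responsive")

worker.join()                 # wait for the background work to finish
print(results)
```

Without the worker thread, the main thread would sit idle for the full duration of the task before it could record anything else.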
1. To provide a consistent object-oriented programming environment whether object code is
stored and executed locally, executed locally but Internet-distributed, or executed remotely.
There are different types of application, such as Windows-based applications and Web-based
applications.
The OLAP Services feature available in SQL Server version 7.0 is now called
SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term
Analysis Services. Analysis Services also includes a new data mining component. The
Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server
2000 Meta Data Services. References to the component now use the term Meta Data Services.
The term repository is used only in reference to the repository engine within Meta Data Services
The basic database objects are:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
TABLE:
A database is a collection of data about a specific topic.
VIEWS OF TABLE:
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view.
We can specify what kind of data each field will hold.
Datasheet View
To add, edit or analyze the data itself, we work in the table's datasheet view
mode.
QUERY:
A query is a question that is asked of the data. Access gathers the data that answers the
question from one or more tables. The data that makes up the answer is either a dynaset (if you
edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest
information in the dynaset. Access either displays the dynaset or snapshot for us to view, or
performs an action on it, such as deleting or updating.
CHAPTER 3
SYSTEM ANALYSIS
3.1 EXISTING SYSTEM
Searchable encryption is a helpful technique that treats encrypted data as documents and allows
a user to securely search over it through a single keyword and retrieve documents of interest. The
direct application of these approaches to deploy a secure large-scale cloud data utilization system
would not necessarily be suitable, as they are developed as crypto primitives and cannot
accommodate high service-level requirements like system usability, user searching
experience, and easy information discovery.
3.2 DISADVANTAGE:
CHAPTER 4
PROJECT DESCRIPTION
4.1 MODULES
Data Owner Module
Encryption Module
Encryption Module:
This module helps the server encrypt the document using the RSA algorithm, convert the
encrypted document to a Zip file with an activation code, and then send the activation code
to the user for download.
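For illustration, the encrypt/decrypt round trip of RSA can be sketched with textbook parameters. This is not the module's actual code: it uses tiny, insecure primes and no padding, and a real implementation would rely on a vetted cryptographic library (for example, the RSA classes in the .NET Framework). It only shows the modular arithmetic behind the algorithm named above.

```python
# Textbook RSA with tiny demonstration primes (NOT secure; illustrative only).
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e

def encrypt(m):
    # Ciphertext c = m^e mod n (m must be an integer < n)
    return pow(m, e, n)

def decrypt(c):
    # Plaintext m = c^d mod n
    return pow(c, d, n)

message = 65
cipher = encrypt(message)
print(cipher, decrypt(cipher))
```

Decrypting the ciphertext recovers the original message, which is the property the module relies on: only the holder of the private exponent can read the outsourced document.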
This module enables the user to search for files that are searched frequently, using ranked
search. It allows the user to download the file, using his secret key to decrypt the
downloaded data, and allows the owner to view the uploaded files and downloaded files.
CHAPTER 5
SYSTEM DIAGRAM
5.1 SYSTEM FLOW DIAGRAM:
The input design is the link between the information system and the user. It comprises the specifications and procedures for data preparation, that is, the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:
Methods for preparing input validations, and the steps to follow when errors occur.
OBJECTIVES
1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for getting correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry that can handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all data manipulations can be performed. It also provides record viewing facilities.
3. When data is entered, it is checked for validity. Data can be entered with the help of screens, and appropriate messages are provided as and when needed so that the user is never left in confusion. Thus the objective of input design is to create an input layout that is easy to follow.
A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need, and also the hard copy output. Output is the most important and direct source of information to the user. Efficient and intelligent output design improves the system's relationship with the user and helps user decision-making.
1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy and effective to use. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.
3. Create documents, reports, or other formats that contain information produced by the system.
The output form of an information system should accomplish one or more of the following objectives:
Convey information about past activities, current status, or projections of the future.
Trigger an action.
Confirm an action.
CHAPTER 6
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.
System testing ensures that the entire integrated software system meets requirements. It
tests a configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.
White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.
Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it.
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
Features to be tested
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects. The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.
Test Results:
All the test cases mentioned above passed successfully. No defects encountered.
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
CHAPTER 7
CONCLUSION AND FUTURE WORK
We define and solve the problem of multi-keyword ranked search over encrypted cloud data, and establish a variety of privacy requirements. Among different multi-keyword semantics, we choose the efficient principle of “coordinate matching”, i.e., as many matches as possible, to effectively capture the similarity between query keywords and outsourced documents, and use “inner product similarity” to quantitatively formalize such a principle for similarity measurement. To meet the challenge of supporting multi-keyword semantics without privacy breaches, we propose a basic MRSE scheme using secure inner product computation, and significantly improve it to achieve privacy requirements in two levels of threat models. Thorough analysis investigating the privacy and efficiency guarantees of the proposed schemes is given, and experiments on the real-world dataset show our proposed schemes introduce low overhead on both computation and communication.
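The “coordinate matching” principle can be illustrated, outside the secure scheme itself, as follows: each document and each query is represented as a 0/1 vector over the keyword dictionary, and the similarity score is their inner product, i.e. the number of matched keywords. This is a plaintext sketch only; the MRSE scheme computes this inner product securely over encrypted index vectors.

```csharp
public static class CoordinateMatching
{
    // Inner product of two 0/1 keyword vectors: counts how many
    // query keywords appear in the document.
    public static int Score(int[] docVector, int[] queryVector)
    {
        int score = 0;
        for (int i = 0; i < docVector.Length; i++)
            score += docVector[i] * queryVector[i];
        return score;
    }
}
```

For example, a document vector {1, 0, 1, 1} scored against a query vector {1, 1, 1, 0} yields 2, since two of the queried keywords appear in the document; documents are then ranked by this score.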
CHAPTER 8
REFERENCES
[2] S. Kamara and K. Lauter, “Cryptographic cloud storage,” in RLCPS, January 2010, LNCS.
Springer, Heidelberg.
[3] A. Singhal, “Modern information retrieval: A brief overview,” IEEE Data Engineering
Bulletin, vol. 24, no. 4, pp. 35–43, 2001.
[4] I. H. Witten, A. Moffat, and T. C. Bell, “Managing gigabytes: Compressing and indexing
documents and images,” Morgan Kaufmann Publishing, San Francisco, May 1999.
[5] D. Song, D. Wagner, and A. Perrig, “Practical techniques for searches on encrypted data,” in
Proc. of S&P, 2000.
[6] E.-J. Goh, “Secure indexes,” Cryptology ePrint Archive, 2003, http://eprint.iacr.org/2003/216.
[7] Y.-C. Chang and M. Mitzenmacher, “Privacy preserving keyword searches on remote
encrypted data,” in Proc. of ACNS, 2005.
[9] D. Boneh, G. D. Crescenzo, R. Ostrovsky, and G. Persiano, “Public key encryption with
keyword search,” in Proc. of EUROCRYPT, 2004.
[12] J. Li, Q. Wang, C. Wang, N. Cao, K. Ren, and W. Lou, “Fuzzy keyword search over
encrypted data in cloud computing,” in Proc. of IEEE INFOCOM’10 Mini-Conference, San
Diego, CA, USA, March 2010.
CHAPTER 9
APPENDIX
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Xml;
using OWC;
using System.IO;
using System.IO.Compression;
using System.Text;
Panel2.Visible = false;
Panel3.Visible = false;
Panel4.Visible = true;
Panel5.Visible = false;
Panel6.Visible = false;
u = 0;
}
}
protected void ImageButton1_Click(object sender, ImageClickEventArgs e)
{
Panel2.Visible = false;
Panel3.Visible = false;
Panel4.Visible = true;
Panel5.Visible = false;
Panel6.Visible = false;
}
protected void ImageButton2_Click(object sender, ImageClickEventArgs e)
{
Panel2.Visible = true;
Panel3.Visible = false;
Panel4.Visible = false;
Panel5.Visible = false;
Panel6.Visible = false;
}
protected void ImageButton3_Click(object sender, ImageClickEventArgs e)
{
Panel2.Visible = false;
Panel3.Visible = true;
Panel4.Visible = false;
Panel5.Visible = false;
Panel6.Visible = false;
}
protected void ImageButton4_Click(object sender, ImageClickEventArgs e)
{
Panel2.Visible = false;
Panel3.Visible = false;
Panel4.Visible = false;
Panel5.Visible = false;
Panel6.Visible = true;
Image4.Visible = false;
GridView2.Visible = false;
}
protected void ImageButton5_Click(object sender, ImageClickEventArgs e)
{
Panel2.Visible = false;
Panel3.Visible = false;
Panel4.Visible = false;
Panel5.Visible = true;
Panel6.Visible = false;
}
protected void ImageButton7_Click(object sender, ImageClickEventArgs e)
{
con.Open();
string clearText = TextBox4.Text.Trim();
string cipherText = encryption.Encrypt(clearText, true);
byte[] filebytes = new byte[FileUpload1.PostedFile.InputStream.Length + 1];
a= System.IO.Path.GetExtension(FileUpload1.PostedFile.FileName);
b = FileUpload1.PostedFile.FileName;
c = FileUpload1.PostedFile.ContentType;
FileUpload1.PostedFile.InputStream.Read(filebytes, 0, filebytes.Length);
string paths = Request.PhysicalApplicationPath + "Files\\" +
System.IO.Path.GetFileName(FileUpload1.FileName);
FileUpload1.SaveAs(Request.PhysicalApplicationPath + "Files\\" +
System.IO.Path.GetFileName(FileUpload1.FileName));
/////////// ~ Zip File ~ ///////////////
string s1 = FileUpload1.FileName;
//string s2 = "C:\\Documents and Settings\\Administrator\\My Documents\\Visual Studio 2005\\WebSites\\files\\";
string s2 = Request.PhysicalApplicationPath + "Files" + "\\";
string srcFile = s2 + s1;
//string d1 = "C:\\Documents and Settings\\Administrator\\My Documents\\Visual Studio 2005\\WebSites\\files\\";
string d1 = Request.PhysicalApplicationPath + "ZipFiles" + "\\";
string d2 = FileUpload1.FileName;
string dstFile = d1 + d2 + ".zip";
Session["filen"] = dstFile;
//Important wordfile and compress files are saved on the same folder.
FileStream fsIn = null; // will open and read the srcFile
FileStream fsOut = null; // will be used by the GZipStream for output to the dstFile
GZipStream gzip = null;
byte[] buffer;
int count = 0;
try
{
fsOut = new FileStream(dstFile, FileMode.Create, FileAccess.Write, FileShare.None);
gzip = new GZipStream(fsOut, CompressionMode.Compress, true);
GridView2.Visible = false;
Image4.Visible = true;
////////////////////////////////////////////////////////////////
con.Open();
DataSet ds1 = new DataSet();
SqlDataAdapter da = new SqlDataAdapter("select fname,counts from fileusage", con);
da.Fill(ds1);
OWC.ChartSpaceClass oChartSpace = new OWC.ChartSpaceClass();
oChartSpace.Charts[0].SeriesCollection[0].SetData(OWC.ChartDimensionsEnum.chDimCategories,
Convert.ToInt32(OWC.ChartSpecialDataSourcesEnum.chDataLiteral), names);
oChartSpace.Charts[0].SeriesCollection[0].SetData(OWC.ChartDimensionsEnum.chDimValues,
Convert.ToInt32(OWC.ChartSpecialDataSourcesEnum.chDataLiteral), totals);
string strFullPathAndName = Server.MapPath("~/graphs/" +
System.DateTime.Now.Ticks.ToString() + ".gif");
oChartSpace.ExportPicture(strFullPathAndName, "gif", 800, 600);
string[] arr = new string[] { };
arr = strFullPathAndName.Split('\\');
Image4.ImageUrl = "~/" + arr[arr.Length - 2] + "/" + arr[arr.Length - 1];
}
protected void ImageButton10_Click(object sender, ImageClickEventArgs e)
{
GridView2.Visible = true;
Image4.Visible = false;
con.Open();
SqlDataAdapter sda = new SqlDataAdapter("select * from downloads", con);
DataSet ds = new DataSet();
sda.Fill(ds);
GridView2.DataSource = ds;
GridView2.DataBind();
con.Close();
}
protected void TextBox7_TextChanged(object sender, EventArgs e)
{
string clearText = TextBox7.Text.Trim();
string cipherText = encryption.Encrypt(clearText, true);
Label14.Text = cipherText;
}
protected void ImageButton6_Click(object sender, ImageClickEventArgs e)
{
con.Open();
SqlCommand cmd11 = new SqlCommand("update Admins set pass='" + TextBox2.Text +
"'", con);
cmd11.ExecuteNonQuery();
con.Close();
RegisterStartupScript("msg", "<script>alert('Password Successfully Changed...')</script>");
TextBox1.Text = "";
TextBox2.Text = "";
TextBox3.Text = "";
Panel2.Visible = false;
Panel3.Visible = false;
Panel4.Visible = true;
Panel5.Visible = false;
Panel6.Visible = false;
}
}
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Xml;
using OWC;
using System.IO;
using System.IO.Compression;
using System.Text;
string a1=(string)Session["fnam"];
sda.Fill(ds);
try
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.ContentType = "application/zip";
if (Response.IsClientConnected)
// Read the data into the buffer and write into the
// output stream.
Response.OutputStream.Write(buffer, 0, length);
Response.Flush();
else
bytesToRead = -1;
Response.Write(ex.Message);
// An error occurred..
finally
if (stream != null)
stream.Close();
}
}
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Xml;
using OWC;
using System.IO;
using System.IO.Compression;
using System.Text;
using System.Net;
using System.Net.Mail;
string counts;
int lo;
string z1,z2,x1,x2,x3,x4;
string to;
string message;
//string Securitykey;
//string ranno;
Label3.Text=Request.Params["id"];
Session["tex"] = Label3.Text;
Label8.Text = Convert.ToString(en.userid1());
Response.Redirect("Search.aspx");
{
string yz=Convert.ToString(en.userid2());
con.Open();
cmd.ExecuteNonQuery();
sda.Fill(ds);
if (ds.Tables[0].Rows.Count > 0)
counts = ds.Tables[0].Rows[za]["fname"].ToString();
if (Label3.Text == counts)
lo = Convert.ToInt32(ds.Tables[0].Rows[za]["counts"].ToString()) + 1;
cmd3.ExecuteNonQuery();
break;
else
cmd4.ExecuteNonQuery();
break;
else
cmd1.ExecuteNonQuery();
con.Close();
x1 = TextBox1.Text;
x2 = TextBox2.Text;
x3 = TextBox3.Text;
x4 = TextBox4.Text;
TextBox1.Text = "";
TextBox2.Text = "";
TextBox3.Text = "";
TextBox4.Text = "";
con.Open();
sda10.Fill(ds10);
z1 = ds10.Tables[0].Rows[0]["efname"].ToString();
message = "<hr><br>Hai " + "<b>" + x1 + " ! </b><br><br>" + "Your Activation Code is : " + "<b>" + z1 + "</b>";
to = x2;
msg.To.Add(new MailAddress(to));
msg.Subject = subject;
msg.Body = message;
msg.IsBodyHtml = true;
try
client.EnableSsl = true;
client.UseDefaultCredentials = false;
client.Credentials = loginInfo;
client.Send(msg);
Console.WriteLine(ex);
Label9.Visible = true;
Label10.Visible = true;
TextBox5.Visible = true;
ImageButton3.Visible = true;
sda10.Fill(ds10);
z2 = ds10.Tables[0].Rows[0]["efname"].ToString();
if (z2 == TextBox5.Text)
TextBox5.Text = "";
Response.Redirect("Downloading.aspx");
else
9.2 SCREENSHOTS