
A SECURE AND DYNAMIC MULTI-KEYWORD RANKED SEARCH

OVER ENCRYPTED CLOUD DATA

ABSTRACT
With the advent of cloud computing, data owners are motivated to outsource their complex
data management systems from local sites to the commercial public cloud for greater flexibility
and economic savings. To protect data privacy, however, sensitive data has to be encrypted before
outsourcing, which obsoletes traditional data utilization based on plaintext keyword search.
Thus, enabling an encrypted cloud data search service is of paramount importance. Considering
the large number of data users and documents in cloud, it is crucial for the search service to
allow multi-keyword query and provide result similarity ranking to meet the effective data
retrieval need. Related works on searchable encryption focus on single keyword search or
Boolean keyword search, and rarely differentiate the search results. We define and solve the
challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud
data (MRSE), and establish a set of strict privacy requirements for such a secure cloud data
utilization system to become a reality. Among various multi-keyword semantics, we choose the
efficient principle of “coordinate matching”, i.e., as many matches as possible, to capture the
similarity between search query and data documents, and further use “inner product similarity”
to quantitatively formalize such principle for similarity measurement. We first propose a basic
MRSE scheme using secure inner product computation, and then significantly improve it to meet
different privacy requirements in two levels of threat models. Thorough analysis investigating
the privacy and efficiency guarantees of the proposed schemes is given, and experiments on a
real-world dataset further show that the proposed schemes indeed introduce low overhead on
computation and communication.

CHAPTER 1

INTRODUCTION

1.1 CLOUD COMPUTING

Cloud computing is a type of computing that relies on sharing computing resources rather
than having local servers or personal devices to handle applications.

In cloud computing, the word cloud (also phrased as "the cloud") is used as a metaphor
for "the Internet," so the phrase cloud computing means "a type of Internet-based computing,"
where different services -- such as servers, storage and applications -- are delivered to an
organization's computers and devices through the Internet.

Cloud computing is comparable to grid computing, a type of computing where the unused
processing cycles of all computers in a network are harnessed to solve problems too intensive for
any stand-alone machine.

Cloud computing is an on-demand service that is obtaining mass appeal in corporate data
centers. The cloud enables the data center to operate like the Internet and computing resources to
be accessed and shared as virtual resources in a secure and scalable manner. Like most
technologies, trends start in the enterprise and shift to adoption by small business owners.

In its most simple description, cloud computing is taking services ("cloud services") and
moving them outside an organization's firewall onto shared systems. Applications and services are
accessed via the Web, instead of from your hard drive. In cloud computing, the services are delivered
and used over the Internet and are paid for by the cloud customer (your business), typically on an
"as-needed, pay-per-use" business model. The cloud infrastructure is maintained by the cloud
provider, not the individual cloud customer.

Cloud computing networks are large groups of servers and cloud service providers that
usually take advantage of low-cost computing technology, with specialized connections to spread
data-processing chores across them. This shared IT infrastructure contains large pools of systems
that are linked together. Virtualization techniques are often used to maximize the power of cloud
computing. The standards for connecting the computer systems and the software needed to make
cloud computing work are not yet fully defined, leaving many companies to define their own
cloud computing technologies.

Fig: 1.1 Cloud computing

1.2 HISTORY

Origin of the term

The origin of the term cloud computing is unclear. The expression cloud is commonly
used in science to describe a large agglomeration of objects that visually appear from a distance
as a cloud and describes any set of things whose details are not inspected further in a given
context.

By analogy with this usage, the word cloud was adopted as a metaphor for the Internet, and a
standardized cloud-like shape was used to denote a network on telephony schematics and later to
depict the Internet in computer network diagrams. With this simplification, the implication is that
the specifics of how the end points of a network are connected are not relevant for the purposes
of understanding the diagram. The cloud symbol was used to represent the Internet as early as
1994, in which servers were shown connected to, but external to, the cloud. References to
cloud computing in its modern sense appeared as early as 1996, with the earliest known mention in a
Compaq internal document. The popularization of the term can be traced to 2006, when
Amazon.com introduced the Elastic Compute Cloud.

The 1950s

The underlying concept of cloud computing dates to the 1950s, when large-scale
mainframe computers were seen as the future of computing, and became available in academia
and corporations, accessible via thin clients/terminal computers, often referred to as "static
terminals", because they were used for communications but had no internal processing
capacities. To make more efficient use of costly mainframes, a practice evolved that allowed
multiple users to share both the physical access to the computer from multiple terminals as well
as the CPU time. This eliminated periods of inactivity on the mainframe and allowed for a
greater return on the investment. The practice of sharing CPU time on a mainframe became
known in the industry as time-sharing. During the mid-1970s, time-sharing was popularly known as
RJE (Remote Job Entry); this nomenclature was mostly associated with large vendors such as
IBM and DEC.

The 1990s

In the 1990s, telecommunications companies, which previously offered primarily dedicated


point-to-point data circuits, began offering virtual private network (VPN) services with
comparable quality of service, but at a lower cost. By switching traffic as they saw fit to balance
server use, they could use overall network bandwidth more effectively. They began to use the
cloud symbol to denote the demarcation point between what the provider was responsible for and

what users were responsible for. Cloud computing extends this boundary to cover all servers as
well as the network infrastructure.

As computers became more prevalent, scientists and technologists explored ways to make
large-scale computing power available to more users through time-sharing. They experimented
with algorithms to optimize the infrastructure, platform, and applications to prioritize CPUs and
increase efficiency for end users.

Since 2000

In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform
for deploying private clouds. Also in early 2008, OpenNebula, enhanced in the RESERVOIR
European Commission-funded project, became the first open-source software for deploying
private and hybrid clouds, and for the federation of clouds. In the same year, efforts were focused
on providing quality of service guarantees (as required by real-time interactive applications) to
cloud-based infrastructures, in the framework of the IRMOS European Commission-funded
project, resulting in a real-time cloud environment. By mid-2008, Gartner saw an opportunity for
cloud computing "to shape the relationship among consumers of IT services, those who use IT
services and those who sell them" and observed that "organizations are switching from company-
owned hardware and software assets to per-use service-based models" so that the "projected shift
to computing ... will result in dramatic growth in IT products in some areas and significant
reductions in other areas."

In July 2010, Rackspace Hosting and NASA jointly launched an open-source cloud-
software initiative known as OpenStack. The OpenStack project was intended to help organizations
offer cloud-computing services running on standard hardware. The early code came from
NASA's Nebula platform as well as from Rackspace's Cloud Files platform.

Cloud Computing Standards

The standards for connecting the computer systems and the software needed to make
cloud computing work are not yet fully defined, leaving many companies to define
their own cloud computing technologies. Cloud computing systems offered by companies, such as

IBM's "Blue Cloud" technologies, are based on open standards and open source
software which link together computers that are used to deliver Web 2.0 capabilities like
mash-ups or mobile commerce.

Cloud Computing in the Data Center and for Small Business:

Cloud computing has started to obtain mass appeal in corporate data centers as it enables
the data center to operate like the Internet through the process of enabling computing resources
to be accessed and shared as virtual resources in a secure and scalable manner. For a small and
medium size business (SMB), the benefits of cloud computing is currently driving adoption. In
the SMB sector there is often a lack of time and financial resources to purchase, deploy and
maintain an infrastructure (e.g. the software, server and storage).

In cloud computing, small businesses can access these resources and expand or shrink
services as business needs change. The common pay-as-you-go subscription model is designed to
let SMBs easily add or remove services, and they typically pay only for what they use.

Public Cloud versus Private Cloud Explained

A public cloud denotes a cloud computing platform that sits outside an organization's
firewall on shared systems. In this scenario, the cloud service provider is in control of the
infrastructure. In contrast, a private cloud is the same platform implemented within
the corporate firewall, under the control of the organization's IT department.

A private cloud is designed to offer the same features and benefits of public cloud
systems, but removes a number of objections to the cloud computing model including control
over corporate and customer data, worries about security and issues connected to regulatory
compliance.

Cloud computing exhibits the following key characteristics:

 Agility improves with users' ability to re-provision technological infrastructure resources.


 Application programming interface (API) accessibility to software that enables machines
to interact with cloud software in the same way that a traditional user interface (e.g., a

computer desktop) facilitates interaction between humans and computers. Cloud
computing systems typically use Representational State Transfer (REST)-based APIs.

 Virtualization technology allows sharing of servers and storage devices and increased
utilization. Applications can be easily migrated from one physical server to another.

 Multitenancy enables sharing of resources and costs across a large pool of users thus
allowing for:

o centralization of infrastructure in locations with lower costs (such as real estate,


electricity, etc.)

o peak-load capacity increases (users need not engineer for highest possible load-
levels)

o utilization and efficiency improvements for systems that are often only 10–20%
utilized.

 Reliability improves with the use of multiple redundant sites, which makes well-designed
cloud computing suitable for business continuity and disaster recovery.

 Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a


fine-grained, self-service basis in near real-time (note that VM startup time varies by VM
type, location, OS and cloud provider), without users having to engineer for peak loads.

 Performance is monitored, and consistent and loosely coupled architectures are


constructed using web services as the system interface.

 Maintenance of cloud computing applications is easier, because they do not need to be


installed on each user's computer and can be accessed from different places.

The National Institute of Standards and Technology's definition of cloud computing identifies
"five essential characteristics":

On-demand self-service: A consumer can unilaterally provision computing capabilities, such as
server time and network storage, as needed automatically without requiring human interaction
with each service provider.

Broad network access: Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g.,
mobile phones, tablets, laptops, and workstations).

Resource pooling: The provider's computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources dynamically assigned
and reassigned according to consumer demand.

Rapid elasticity: Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the
consumer, the capabilities available for provisioning often appear unlimited and can be
appropriated in any quantity at any time.

Measured service: Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of service
(e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be
monitored, controlled, and reported, providing transparency for both the provider and consumer
of the utilized service.
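The "measured service" characteristic above amounts to pay-per-use billing: each metered quantity is multiplied by a unit rate and summed. The sketch below illustrates this; all resource names and rates are invented for illustration and are not real provider prices.

```python
# Pay-per-use billing from metered usage: each measured quantity is
# multiplied by a unit rate and summed. All rates below are invented
# for illustration and are not real provider prices.
RATES = {
    "storage_gb_month": 0.02,  # per GB stored per month
    "cpu_hours": 0.05,         # per CPU-hour consumed
    "bandwidth_gb": 0.09,      # per GB transferred
}

def monthly_bill(usage):
    """Sum rate * quantity over every metered resource in `usage`."""
    return sum(RATES[resource] * qty for resource, qty in usage.items())

bill = monthly_bill({"storage_gb_month": 500, "cpu_hours": 200, "bandwidth_gb": 100})
```

Because usage is metered per resource, the same computation gives the provider and the consumer a transparent view of what was consumed and what it cost.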

On-demand self-service
See also: Self-service provisioning for cloud computing services and Service catalogues for
cloud computing services

On-demand self-service allows users to obtain, configure and deploy cloud services themselves
using cloud service catalogues, without requiring the assistance of IT. This feature is listed by the
National Institute of Standards and Technology (NIST) as a characteristic of cloud computing.

Cloud computing consumers use cloud templates to move applications between clouds
through a self-service portal. The predefined blueprints define all that an application requires to
run in different environments. For example, a template could define how the same application

could be deployed in cloud platforms based on Amazon Web Services, VMware or Red Hat. The
user organization benefits from cloud templates because the technical aspects of cloud
configurations reside in the templates, letting users deploy cloud services at the push of a
button. Developers can use cloud templates to create a catalogue of cloud services.
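A cloud template of this kind can be sketched as a mapping from target platform to deployment parameters, so the user only chooses a target. Every application name, platform key and field below is a hypothetical assumption, not part of the report.

```python
# Hypothetical cloud template: the application is described once and the
# per-platform technical details live in the template, so a user only
# chooses a target platform. All names and fields here are assumptions.
TEMPLATE = {
    "app": "billing-service",
    "targets": {
        "aws": {"instance_type": "m1.small", "region": "us-east-1"},
        "vmware": {"vm_profile": "standard-2cpu", "cluster": "dc1"},
        "redhat": {"flavor": "small", "zone": "default"},
    },
}

def deploy_plan(platform):
    """Combine the application name with the chosen platform's settings."""
    return {"app": TEMPLATE["app"], **TEMPLATE["targets"][platform]}
```

The point of the design is that the technical configuration lives in the template, not in the user's head: deploying to a different platform is just a different lookup.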

1.3 SERVICE MODELS

Cloud computing providers offer their services according to several fundamental models:
infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS)
where IaaS is the most basic and each higher model abstracts from the details of the lower
models. Other key components in anything as a service (XaaS) are described in a comprehensive
taxonomy model published in 2009, such as Strategy-as-a-Service, Collaboration-as-a-Service,
Business-Process-as-a-Service, Database-as-a-Service, etc. In 2012, network as a service (NaaS)
and communication as a service (CaaS) were officially included by the ITU (International
Telecommunication Union) as part of the basic cloud computing models, as recognized service
categories of a telecommunication-centric cloud ecosystem.

Infrastructure as a service (IaaS)

In the most basic cloud-service model, providers of IaaS offer computers – physical or
(more often) virtual machines – and other resources. (A hypervisor, such as Hyper-V or Xen or
KVM or VMware ESX/ESXi, runs the virtual machines as guests. Pools of hypervisors within
the cloud operational support-system can support large numbers of virtual machines and the
ability to scale services up and down according to customers' varying requirements.) IaaS clouds
often offer additional resources such as a virtual-machine disk image library, raw (block) and
file-based storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs),
and software bundles. IaaS-cloud providers supply these resources on-demand from their large
pools installed in data centers. For wide-area connectivity, customers can use either the Internet
or carrier clouds (dedicated virtual private networks).

To deploy their applications, cloud users install operating-system images and their
application software on the cloud infrastructure. In this model, the cloud user patches and
maintains the operating systems and the application software. Cloud providers typically bill IaaS
services on a utility-computing basis: cost reflects the amount of resources allocated and
consumed.

Cloud communications and cloud telephony, rather than replacing local computing
infrastructure, replace local telecommunications infrastructure with Voice over IP and other off-
site Internet services.

Platform as a service (PaaS)


In the PaaS model, cloud providers deliver a computing platform, typically including
operating system, programming language execution environment, database, and web server.
Application developers can develop and run their software solutions on a cloud platform without
the cost and complexity of buying and managing the underlying hardware and software layers.
With some PaaS offerings, such as Windows Azure, the underlying compute and storage resources scale
automatically to match application demand, so that the cloud user does not have to allocate
resources manually. Architectures have also been proposed to facilitate real-time applications
in such cloud environments.

Software as a service (SaaS)

In the business model using software as a service (SaaS), users are provided access to
application software and databases. Cloud providers manage the infrastructure and platforms that
run the applications. SaaS is sometimes referred to as "on-demand software" and is usually
priced on a pay-per-use basis. SaaS providers generally price applications using a subscription
fee.

In the SaaS model, cloud providers install and operate application software in the cloud
and cloud users access the software from cloud clients. Cloud users do not manage the cloud
infrastructure and platform where the application runs. This eliminates the need to install and run
the application on the cloud user's own computers, which simplifies maintenance and support.
Cloud applications are different from other applications in their scalability—which can be
achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work
demand. Load balancers distribute the work over the set of virtual machines. This process is
transparent to the cloud user, who sees only a single access point. To accommodate a large

number of cloud users, cloud applications can be multitenant, that is, any machine serves more
than one cloud user organization. It is common to refer to special types of cloud based
application software with a similar naming convention: desktop as a service, business process as
a service, test environment as a service, communication as a service.

Network as a service (NaaS)

A category of cloud services where the capability provided to the cloud service user is to
use network/transport connectivity services and/or inter-cloud network connectivity services.
NaaS involves the optimization of resource allocations by considering network and computing
resources as a unified whole. Traditional NaaS services include flexible and extended VPN, and
bandwidth on demand. NaaS concept materialization also includes the provision of a virtual
network service by the owners of the network infrastructure to a third party (VNP – VNO).

Fig: 1.2 Cloud clients

1.4 DEPLOYMENT MODELS

Private cloud

Private cloud is cloud infrastructure operated solely for a single organization, whether
managed internally or by a third-party and hosted internally or externally. Undertaking a private

cloud project requires a significant level and degree of engagement to virtualize the business
environment, and requires the organization to re-evaluate decisions about existing resources.
When done right, it can improve business.

Public cloud

A cloud is called a "public cloud" when the services are rendered over a network that is
open for public use. Technically there may be little or no difference between public and private
cloud architecture; however, security considerations may be substantially different for services
(applications, storage, and other resources) that are made available by a service provider for a
public audience and when communication is effected over a non-trusted network. Generally,
public cloud service providers like Amazon AWS, Microsoft and Google own and operate the
infrastructure and offer access only via the Internet (direct connectivity is not offered).

Community cloud

Community cloud shares infrastructure between several organizations from a specific


community with common concerns (security, compliance, jurisdiction, etc.), whether managed
internally or by a third-party and hosted internally or externally. The costs are spread over fewer
users than a public cloud (but more than a private cloud), so only some of the cost savings
potential of cloud computing are realized.

Fig: 1.3 Types of cloud computing

Hybrid cloud

Hybrid cloud is a composition of two or more clouds (private, community or public) that
remain unique entities but are bound together, offering the benefits of multiple deployment
models. Gartner, Inc. defines a hybrid cloud service as a cloud computing service that is
composed of some combination of private, public and community cloud services, from different
service providers. A hybrid cloud service crosses isolation and provider boundaries so that it
can’t be simply put in one category of private, public, or community cloud service. It allows one
to extend either the capacity or the capability of a cloud service, by aggregation, integration or
customization with another cloud service.

Varied use cases for hybrid cloud composition exist. For example, an organization may
store sensitive client data in house on a private cloud application, but interconnect that
application to a billing application provided on a public cloud as a software service. This
example of hybrid cloud extends the capabilities of the enterprise to deliver a specific business
service through the addition of externally available public cloud services.

Another example of hybrid cloud is one where IT organizations use public cloud
computing resources to meet temporary capacity needs that cannot be met by the private cloud.
This capability enables hybrid clouds to employ cloud bursting for scaling across clouds.

Cloud bursting is an application deployment model in which an application runs in a


private cloud or data centre and "bursts" to a public cloud when the demand for computing
capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an
organization only pays for extra compute resources when they are needed.

Cloud bursting enables data centres to create an in-house IT infrastructure that supports
average workloads, and to use cloud resources from public or private clouds during spikes in
processing demand. By utilizing a hybrid cloud architecture, companies and individuals are able
to obtain degrees of fault tolerance combined with locally immediate usability without
dependency on internet connectivity. Hybrid cloud architecture requires both on-premises
resources and off-site (remote) server-based cloud infrastructure.

1.5 PROJECT OVERVIEW

Cloud computing is the long dreamed vision of computing as a utility, where cloud
customers can remotely store their data into the cloud so as to enjoy the on-demand high quality
applications and services from a shared pool of configurable computing resources [1]. Its great
flexibility and economic savings are motivating both individuals and enterprises to outsource
their local complex data management system into the cloud. To protect data privacy and combat
unsolicited accesses in the cloud and beyond, sensitive data, e.g., emails, personal health records,
photo albums, tax documents, financial transactions, etc., may have to be encrypted by data
owners before outsourcing to the commercial public cloud [2]; this, however, obsoletes the
traditional data utilization service based on plaintext keyword search. The trivial solution of
downloading all the data and decrypting locally is clearly impractical, due to the huge amount of
bandwidth cost in cloud scale systems. Moreover, aside from eliminating the local storage
management, storing data into the cloud serves no purpose unless they can be easily searched
and utilized. Thus, exploring privacy-preserving and effective search service over encrypted
cloud data is of paramount importance. Considering the potentially large number of on-demand
data users and huge amount of outsourced data documents in the cloud, this problem is
particularly challenging as it is extremely difficult to meet also the requirements of performance,
system usability and scalability. On the one hand, to meet the effective data retrieval need, the
large amount of documents demand the cloud server to perform result relevance ranking, instead
of returning undifferentiated results. Such ranked search system enables data users to find the
most relevant information quickly, rather than burdensomely sorting through every match in the
content collection [3]. Ranked search can also elegantly eliminate unnecessary network traffic by
sending back only the most relevant data, which is highly desirable in the “pay-as-youuse” cloud
paradigm. For privacy protection, such ranking operation, however, should not leak any keyword
related information. On the other hand, to improve the search result accuracy as well as to
enhance the user searching experience, it is also necessary for such ranking system to support
multiple keywords search, as single keyword search often yields far too coarse results. As a
common practice indicated by today’s web search engines (e.g., Google search), data users may
tend to provide a set of keywords instead of only one as the indicator of their search interest to
retrieve the most relevant data. And each keyword in the search request is able to help narrow
down the search result further. “Coordinate matching” [4], i.e., as many matches as possible, is

an efficient similarity measure among such multi-keyword semantics to refine the result
relevance, and has been widely used in the plaintext information retrieval (IR) community.
However, how to apply it in the encrypted cloud data search system remains a very challenging
task because of inherent security and privacy obstacles, including various strict requirements
such as data privacy, index privacy, keyword privacy, and many others.
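The "coordinate matching" principle above can be made concrete with a small plaintext sketch: documents and the query are binary vectors over a keyword dictionary, and the inner product counts how many query keywords a document contains. This illustrates only the ranking principle; in the MRSE scheme the same inner product is evaluated over encrypted vectors, which is not shown here. The dictionary and documents are invented for illustration.

```python
# "Coordinate matching" via inner product similarity: documents and the
# query are binary vectors over a keyword dictionary; the inner product
# counts how many query keywords a document contains. Plaintext sketch
# only -- MRSE computes this score over *encrypted* vectors.
KEYWORDS = ["cloud", "encryption", "search", "ranking", "privacy"]

def to_vector(terms):
    """Binary vector: 1 where a dictionary keyword appears in `terms`."""
    return [1 if kw in terms else 0 for kw in KEYWORDS]

def score(query_vec, doc_vec):
    """Inner product = number of matched keyword coordinates."""
    return sum(q * d for q, d in zip(query_vec, doc_vec))

docs = {
    "doc1": {"cloud", "search", "privacy"},
    "doc2": {"encryption"},
    "doc3": {"cloud", "search", "ranking"},
}

query = to_vector({"cloud", "search"})
# Rank documents by how many query keywords they match.
ranked = sorted(docs, key=lambda d: score(query, to_vector(docs[d])), reverse=True)
```

In the paper's scheme, both index and query vectors are protected so that the cloud server obtains only these similarity scores via secure inner product computation, never the underlying keywords.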

CHAPTER 2

SYSTEM REQUIREMENTS:

2.1 HARDWARE REQUIREMENTS:

• System : Pentium IV 2.4 GHz.


• Hard Disk : 40 GB.
• Floppy Drive : 1.44 MB.
• Monitor : 15'' VGA Colour.
• Mouse : Logitech.
• RAM : 512 MB.

2.2 SOFTWARE REQUIREMENTS:

• Operating system : - Windows XP.


• Coding Language : C#.NET
• Data Base : SQL SERVER 2005

2.3 SOFTWARE ENVIRONMENT

Features of .NET

Microsoft .NET is a set of Microsoft software technologies for rapidly building


and integrating XML Web services, Microsoft Windows-based applications, and Web solutions.
The .NET Framework is a language-neutral platform for writing programs that can easily and
securely interoperate. There’s no language barrier with .NET: there are numerous languages
available to the developer, including Managed C++, C#, Visual Basic and JScript. The .NET
framework provides the foundation for components to interact seamlessly, whether locally or
remotely on different platforms. It standardizes common data types and communications
protocols so that components created in different languages can easily interoperate.

“.NET” is also the collective name given to various software components built
upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET
Server, for instance) and services (like Passport, .NET My Services, and so on).

THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the environment within
which programs run. The most important features are

 Conversion from a low-level assembler-style language, called Intermediate


Language (IL), into code native to the platform being executed on.

 Memory management, notably including garbage collection.

 Checking and enforcing security restrictions on the running code.

 Loading and executing programs, with version control and other such features.

The following features of the .NET framework are also worth describing:

Managed Code

Managed code is code that targets .NET and contains certain extra
information - “metadata” - to describe itself. Whilst both managed and unmanaged code can run
in the runtime, only managed code contains the information that allows the CLR to guarantee,
for instance, safe execution and interoperability.

Managed Data

With managed code comes managed data. The CLR provides memory allocation
and deallocation facilities, and garbage collection. Some .NET languages use managed data by
default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not.
Targeting CLR can, depending on the language you’re using, impose certain constraints on the
features available. As with managed and unmanaged code, one can have both managed and
unmanaged data in .NET applications - data that doesn’t get garbage collected but instead is
looked after by unmanaged code.

Common Type System

The CLR uses something called the Common Type System (CTS) to strictly enforce
type-safety. This ensures that all classes are compatible with each other, by describing types in a
common way. The CTS defines how types work within the runtime, which enables types in one
language to interoperate with types in another language, including cross-language exception
handling. As well as ensuring that types are only used in appropriate ways, the runtime also
ensures that code doesn’t attempt to access memory that hasn’t been allocated to it.

Common Language Specification

The CLR provides built-in support for language interoperability. To ensure that you can
develop managed code that can be fully used by developers using any programming language, a

set of language features and rules for using them called the Common Language Specification
(CLS) has been defined. Components that follow these rules and expose only CLS features are
considered CLS-compliant.

THE CLASS LIBRARY

.NET provides a single-rooted hierarchy of classes, containing over 7000 types.


The root of the namespace is called System; this contains basic types like Byte, Double, Boolean,
and String, as well as Object. All objects derive from System.Object. As well as objects, there
are value types. Value types can be allocated on the stack, which can provide useful flexibility.
There are also efficient means of converting value types to object types if and when necessary.

The set of classes is pretty comprehensive, providing collections, file, screen, and
network I/O, threading, and so on, as well as XML and database connectivity.

The class library is subdivided into a number of sets (or namespaces), each
providing distinct areas of functionality, with dependencies between the namespaces kept to a
minimum.

LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual Studio .NET
enables developers to use their existing programming skills to build all types of applications and
XML Web services. The .NET framework supports new versions of Microsoft’s old favorites
Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new
additions to the family.

Visual Basic .NET has been updated to include many new and improved language
features that make it a powerful object-oriented programming language. These features include
inheritance, interfaces, and overloading, among others. Visual Basic now also supports structured
exception handling, custom attributes, and multithreading.

Visual Basic .NET is also CLS compliant, which means that any CLS-compliant
language can use the classes, objects, and components you create in Visual Basic .NET.

Managed Extensions for C++ and attributed programming are just some of the
enhancements made to the C++ language. Managed Extensions simplify the task of migrating
existing C++ applications to the new .NET Framework.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++
for Rapid Application Development”. Unlike other languages, its specification is just the
grammar of the language. It has no standard library of its own, and instead has been designed
with the intention of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-language
developers into the world of XML Web Services and dramatically improves the interoperability
of Java-language programs with existing software written in a variety of other programming
languages.

ActiveState has created Visual Perl and Visual Python, which enable .NET-aware
applications to be built in either Perl or Python. Both products can be integrated into the Visual
Studio .NET environment. Visual Perl includes support for ActiveState’s Perl Dev Kit.

Other languages for which .NET compilers are available include

 FORTRAN

 COBOL

 Eiffel

Fig 1. The .NET Framework stack: ASP.NET and Windows Forms, XML Web Services, the Base Class Libraries, the Common Language Runtime, and the Operating System.

C#.NET is also compliant with the CLS (Common Language Specification) and supports
structured exception handling. The CLS is a set of rules and constructs that are supported by the
CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET
Framework; it manages the execution of the code and also makes the development process
easier by providing services.

C#.NET is a CLS-compliant language. Any objects, classes, or components created in
C#.NET can be used in any other CLS-compliant language. In addition, we can use objects,
classes, and components created in other CLS-compliant languages in C#.NET. The use of the
CLS ensures complete interoperability among applications, regardless of the languages used
to create them.

CONSTRUCTORS AND DESTRUCTORS:

Constructors are used to initialize objects, whereas destructors are used to destroy them.
In other words, destructors are used to release the resources allocated to the object. In
C#.NET the sub finalize procedure is available. The sub finalize procedure is used to
complete the tasks that must be performed when an object is destroyed. The sub finalize
procedure is called automatically when an object is destroyed. In addition, the sub finalize
procedure can be called only from the class it belongs to or from derived classes.
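A minimal sketch of a constructor and a destructor in C# (the Connection class here is hypothetical):

```csharp
class Connection
{
    public bool IsOpen;

    public Connection()   // constructor: initializes the new object
    {
        IsOpen = true;
    }

    ~Connection()         // destructor (finalizer): runs when the object is
    {                     // destroyed, releasing the resources it holds
        IsOpen = false;
    }
}
```

The runtime invokes the destructor automatically during garbage collection; code never calls it directly.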

GARBAGE COLLECTION

Garbage Collection is another new feature in C#.NET. The .NET Framework monitors
allocated resources, such as objects and variables. In addition, the .NET Framework
automatically releases memory for reuse by destroying objects that are no longer in use.

In C#.NET, the garbage collector checks for the objects that are not currently in use by
applications. When the garbage collector comes across an object that is marked for garbage
collection, it releases the memory occupied by the object.

OVERLOADING

Overloading is another feature in C#. Overloading enables us to define multiple procedures
with the same name, where each procedure has a different set of arguments. Besides using
overloading for procedures, we can use it for constructors and properties in a class.
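A minimal sketch of overloading, with one method name and different argument types (the Printer class is hypothetical):

```csharp
static class Printer
{
    // Three methods share one name; the compiler picks one by argument type.
    public static string Format(int value)    { return "int: " + value; }
    public static string Format(bool value)   { return "bool: " + value; }
    public static string Format(string value) { return "string: " + value; }
}
```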

MULTITHREADING:

C#.NET also supports multithreading. An application that supports multithreading can handle
multiple tasks simultaneously; we can use multithreading to decrease the time taken by an
application to respond to user interaction.
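A minimal sketch of handling two tasks simultaneously with System.Threading (the work split is illustrative):

```csharp
using System.Threading;

int[] sums = new int[2];
// Each thread sums half of the range 1..100 at the same time.
Thread t1 = new Thread(() => { for (int i = 1; i <= 50; i++) sums[0] += i; });
Thread t2 = new Thread(() => { for (int i = 51; i <= 100; i++) sums[1] += i; });
t1.Start(); t2.Start();
t1.Join(); t2.Join();           // wait until both threads finish
int total = sums[0] + sums[1];  // 1 + 2 + ... + 100 = 5050
```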

STRUCTURED EXCEPTION HANDLING

C#.NET supports structured exception handling, which enables us to detect and remove
errors at runtime. In C#.NET, we use try…catch…finally statements to create
exception handlers. Using try…catch…finally statements, we can create robust and
effective exception handlers to improve the performance of our application.
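A minimal try…catch…finally sketch that detects and handles a runtime error:

```csharp
string outcome;
try
{
    int parsed = int.Parse("not a number");  // throws FormatException at runtime
    outcome = "parsed " + parsed;
}
catch (System.FormatException)
{
    outcome = "handled";                     // the error is detected and handled
}
finally
{
    // cleanup here runs whether or not an exception was thrown
}
```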

THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies application
development in the highly distributed environment of the Internet.

OBJECTIVES OF .NET FRAMEWORK

1. To provide a consistent object-oriented programming environment whether object code is
stored and executed locally, executed locally but Internet-distributed, or executed remotely.

2. To provide a code-execution environment that minimizes software deployment conflicts and
guarantees the safe execution of code.

3. To eliminate the performance problems of scripted or interpreted environments.

There are different types of application, such as Windows-based applications and Web-based
applications.

4.3 Features of SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called
SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term
Analysis Services. Analysis Services also includes a new data mining component. The
Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server
2000 Meta Data Services. References to the component now use the term Meta Data Services.
The term repository is used only in reference to the repository engine within Meta Data Services.

A SQL Server database consists of the following types of objects:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

TABLE:

A table is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two views:

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table's Design view.
We can specify what kind of data the table will hold.

Datasheet View

To add, edit, or analyze the data itself, we work in the table's Datasheet view.

QUERY:

A query is a question that has to be asked of the data. Access gathers the data that answers the
question from one or more tables. The data that makes up the answer is either a dynaset (if you
can edit it) or a snapshot (which cannot be edited). Each time we run the query, we get the latest
information in the dynaset. Access either displays the dynaset or snapshot for us to view, or
performs an action on it, such as deleting or updating.

CHAPTER 3

SYSTEM ANALYSIS
3.1 EXISTING SYSTEM
Searchable encryption is a helpful technique that treats encrypted data as documents and allows
a user to securely search over them with a single keyword and retrieve the documents of interest.
The direct application of these approaches to deploy a secure large-scale cloud data utilization
system would not necessarily be suitable, as they were developed as crypto primitives and cannot
accommodate high service-level requirements such as system usability, user searching
experience, and easy information discovery.

3.2 DISADVANTAGE:

 Large-scale cloud data utilization gets less security

 The service level is not good for users

3.3 PROPOSED SYSTEM


In this project, we define and solve the problem of multi-keyword ranked search over
encrypted cloud data (MRSE) while preserving strict system-wise privacy in the cloud computing
paradigm. Among various multi-keyword semantics, we choose the efficient principle of
“coordinate matching”, i.e., as many matches as possible, to capture the similarity between the search
query and data documents. Specifically, we use “inner product similarity”, the number of query
keywords appearing in a document, to quantitatively evaluate the similarity of that document to
the search query under the “coordinate matching” principle.
We improve various privacy requirements in two levels of threat models. For the first time,
we explore the problem of multi-keyword ranked search over encrypted cloud data, and establish
a set of strict privacy requirements for such a secure cloud data utilization system to become a
reality. We propose two MRSE schemes following the principle of “coordinate matching” while
meeting different privacy requirements in two levels of threat models. Thorough analysis
investigating the privacy and efficiency guarantees of the proposed schemes is given, and experiments
on a real-world dataset further show that the proposed schemes indeed introduce low overhead on
computation and communication.
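The “coordinate matching” score can be sketched as an inner product of binary keyword vectors (the dictionary and class name here are illustrative; the actual MRSE schemes compute this product over encrypted vectors):

```csharp
static class CoordinateMatching
{
    // Inner product of binary keyword vectors: counts the query keywords
    // that appear in the document, i.e., "as many matches as possible".
    public static int Score(int[] docVector, int[] queryVector)
    {
        int score = 0;
        for (int i = 0; i < docVector.Length; i++)
            score += docVector[i] * queryVector[i];  // 1 only when both have keyword i
        return score;
    }
}
```

Over a dictionary {cloud, secure, rank, search}, a document containing {cloud, secure, search} is encoded as {1, 1, 0, 1} and the query {cloud, search} as {1, 0, 0, 1}; the score is 2, the number of matched keywords.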
3.4 ADVANTAGE:

 Multi-keyword ranking to secure the cloud data

 Searching on the encrypted data gives the expected results

CHAPTER 4
PROJECT DESCRIPTION
4.1 MODULES
 Data Owner Module

 Data User Module

 Encryption Module

 Rank Search Module

4.2 MODULES DESCRIPTION

Data Owner Module


This module helps the owner to register his details, including login details. It also
helps the owner to upload his file with encryption using the RSA algorithm, which ensures the
files are protected from unauthorized users.

Data User Module


This module includes the user registration and login details. It helps the
client to search for files using the multiple-keywords concept and get an accurate result list
based on the user query. The user selects the required file, registers the user details,
and receives an activation code by email. After entering the activation code, the user can download
the Zip file and extract it.

Encryption Module:

This module helps the server to encrypt the document using the RSA algorithm,
convert the encrypted document to a Zip file with an activation code, and then send the activation
code to the user for download.
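A hedged sketch of the RSA encrypt/decrypt round trip this module performs, using the System.Security.Cryptography API (the key size and payload are illustrative; RSA can only encrypt short payloads, so real systems encrypt a symmetric key with RSA and the document with that key):

```csharp
using System.Security.Cryptography;
using System.Text;

byte[] plain = Encoding.UTF8.GetBytes("document contents");
string roundTrip;
using (RSA rsa = RSA.Create(2048))  // generate a 2048-bit key pair
{
    byte[] cipher = rsa.Encrypt(plain, RSAEncryptionPadding.OaepSHA256);
    byte[] back = rsa.Decrypt(cipher, RSAEncryptionPadding.OaepSHA256);
    roundTrip = Encoding.UTF8.GetString(back);
}
```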

Rank Search Module

This module enables the user to search for the files that are searched frequently using ranked
search. It allows the user to download the file using his secret key to decrypt the
downloaded data, and it allows the owner to view the uploaded and downloaded files.

CHAPTER 5
SYSTEM DIAGRAM

5.1 SYSTEM FLOW DIAGRAM:

Fig 5.1 System Flow Diagram


5.2 INPUT DESIGN

The input design is the link between the information system and the user. It comprises
developing specifications and procedures for data preparation, the steps necessary to put
transaction data into a usable form for processing. This can be achieved by having the computer
read data from a written or printed document, or by having people key the data
directly into the system. The design of input focuses on controlling the amount of input required,
controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The
input is designed in such a way that it provides security and ease of use while retaining
privacy. Input design considered the following things:

 What data should be given as input?

 How the data should be arranged or coded?

 The dialog to guide the operating personnel in providing input.

 Methods for preparing input validations and steps to follow when errors occur.

OBJECTIVES

1. Input design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process and
to show the correct direction to the management for getting correct information from the
computerized system.

2. It is achieved by creating user-friendly screens for data entry to handle large volumes of
data. The goal of designing input is to make data entry easier and free from errors. The data
entry screen is designed in such a way that all data manipulations can be performed. It also
provides record-viewing facilities.

3. When the data is entered, it is checked for validity. Data can be entered with the help of
screens. Appropriate messages are provided as and when needed, so that the user is never left
confused. Thus the objective of input design is to create an input layout that is easy to follow.

5.3 OUTPUT DESIGN

A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the users and to
other systems through outputs. In output design, it is determined how the information is to be
displayed for immediate need, as well as the hard-copy output. It is the most important and direct
source of information to the user. Efficient and intelligent output design improves the system’s
relationship with the user and helps decision-making.

1. Designing computer output should proceed in an organized, well-thought-out manner; the right
output must be developed while ensuring that each output element is designed so that people will
find the system easy to use effectively. When analysts design computer output, they
should identify the specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create documents, reports, or other formats that contain information produced by the system.

The output form of an information system should accomplish one or more of the following
objectives.

 Convey information about past activities, current status, or projections of the future.

 Signal important events, opportunities, problems, or warnings.

 Trigger an action.

 Confirm an action.

CHAPTER 6

SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, subassemblies, assemblies, and/or a finished product. It is the
process of exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of tests; each test type addresses a specific testing requirement.

TYPES OF TESTS

6.1 Unit testing

Unit testing involves the design of test cases that validate that the internal program logic
is functioning properly, and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application, and it is done after the completion of an individual unit before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform
basic tests at the component level and test a specific business process, application, and/or system
configuration. Unit tests ensure that each unique path of a business process performs accurately
to the documented specifications and contains clearly defined inputs and expected results.

6.2 Integration testing

Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components is
correct and consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.

6.3 Functional testing

Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage pertaining to identified business
process flows, data fields, predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are identified and the effective
value of current tests is determined.

6.4 System Testing

System testing ensures that the entire integrated software system meets requirements. It
tests a configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.

6.5 White Box Testing

White box testing is testing in which the software tester has knowledge of the
inner workings, structure, and language of the software, or at least its purpose. It is
used to test areas that cannot be reached from a black-box level.

6.6 Black Box Testing

Black box testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black box tests, like most other kinds of tests,
must be written from a definitive source document, such as a specification or requirements
document. It is testing in which the software under test is treated as a black box: you cannot
“see” into it.

6.7 Unit Testing:

Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.

Test strategy and approach

Field testing will be performed manually and functional tests will be written in detail.

Test objectives

All field entries must work properly.

Pages must be activated from the identified link.

The entry screen, messages and responses must not be delayed.

Features to be tested

Verify that the entries are of the correct format

No duplicate entries should be allowed

All links should take the user to the correct page.

6.8 Integration Testing

Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects. The task of the integration test is to check that components or software applications,
e.g. components in a software system or, one step up, software applications at the company level,
interact without error.

Test Results:

All the test cases mentioned above passed successfully. No defects encountered.

6.9 Acceptance Testing

User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

CHAPTER 7
CONCLUSION AND FUTURE WORK
We define and solve the problem of multi-keyword ranked search over encrypted cloud
data, and establish a variety of privacy requirements. Among different multi-keyword semantics, we
choose the efficient principle of “coordinate matching”, i.e., as many matches as possible, to
effectively capture the similarity between query keywords and outsourced documents, and use “inner
product similarity” to quantitatively formalize such a principle for similarity measurement. To
meet the challenge of supporting multi-keyword semantics without privacy breaches, we
propose a basic MRSE scheme using secure inner product computation, and significantly
improve it to achieve privacy requirements in two levels of threat models. Thorough analysis
investigating the privacy and efficiency guarantees of the proposed schemes is given, and experiments
on a real-world dataset show that our proposed schemes introduce low overhead on both
computation and communication.

CHAPTER 8

REFERENCES

[1] L. M. Vaquero, L. Rodero-Merino, J. Caceres, and M. Lindner, “A break in the clouds:
towards a cloud definition,” ACM SIGCOMM Comput. Commun. Rev., vol. 39, no. 1, pp. 50–
55, 2009.

[2] S. Kamara and K. Lauter, “Cryptographic cloud storage,” in RLCPS, January 2010, LNCS.
Springer, Heidelberg.

[3] A. Singhal, “Modern information retrieval: A brief overview,” IEEE Data Engineering
Bulletin, vol. 24, no. 4, pp. 35–43, 2001.

[4] I. H. Witten, A. Moffat, and T. C. Bell, “Managing gigabytes: Compressing and indexing
documents and images,” Morgan Kaufmann Publishing, San Francisco, May 1999.

[5] D. Song, D. Wagner, and A. Perrig, “Practical techniques for searches on encrypted data,” in
Proc. of S&P, 2000.

[6] E.-J. Goh, “Secure indexes,” Cryptology ePrint Archive, 2003, http://
eprint.iacr.org/2003/216.

[7] Y.-C. Chang and M. Mitzenmacher, “Privacy preserving keyword searches on remote
encrypted data,” in Proc. of ACNS, 2005.

[8] R. Curtmola, J. A. Garay, S. Kamara, and R. Ostrovsky, “Searchable symmetric encryption:
improved definitions and efficient constructions,” in Proc. of ACM CCS, 2006.

[9] D. Boneh, G. D. Crescenzo, R. Ostrovsky, and G. Persiano, “Public key encryption with
keyword search,” in Proc. of EUROCRYPT, 2004.

[10] M. Bellare, A. Boldyreva, and A. O’Neill, “Deterministic and efficiently searchable
encryption,” in Proc. of CRYPTO, 2007.

[11] M. Abdalla, M. Bellare, D. Catalano, E. Kiltz, T. Kohno, T. Lange, J. Malone-Lee, G.
Neven, P. Paillier, and H. Shi, “Searchable encryption revisited: Consistency properties, relation
to anonymous IBE, and extensions,” J. Cryptol., vol. 21, no. 3, pp. 350–391, 2008.

[12] J. Li, Q. Wang, C. Wang, N. Cao, K. Ren, and W. Lou, “Fuzzy keyword search over
encrypted data in cloud computing,” in Proc. of IEEE INFOCOM’10 Mini-Conference, San
Diego, CA, USA, March 2010.

CHAPTER 9

APPENDIX

9.1 source code

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Xml;
using OWC;
using System.IO;
using System.IO.Compression;
using System.Text;

public partial class Working : System.Web.UI.Page
{
SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["connection"]);
id id1 = new id();
string a, b, c;
int u;
protected void Page_Load(object sender, EventArgs e)
{
con.Open();
SqlDataAdapter sda = new SqlDataAdapter("select sno,fname,catagorys from uploads",
con);
DataSet ds = new DataSet();
sda.Fill(ds);
if (ds.Tables[0].Rows.Count > 0)
{
GridView1.DataSource = ds;
GridView1.DataBind();
}
con.Close();
if (!IsPostBack)
{
Label8.Text = Convert.ToString(id1.idgeneration());

Panel2.Visible = false;
Panel3.Visible = false;
Panel4.Visible = true;
Panel5.Visible = false;
Panel6.Visible = false;
u = 0;
}
}
protected void ImageButton1_Click(object sender, ImageClickEventArgs e)
{
Panel2.Visible = false;
Panel3.Visible = false;
Panel4.Visible = true;
Panel5.Visible = false;
Panel6.Visible = false;
}
protected void ImageButton2_Click(object sender, ImageClickEventArgs e)
{
Panel2.Visible = true;
Panel3.Visible = false;
Panel4.Visible = false;
Panel5.Visible = false;
Panel6.Visible = false;
}
protected void ImageButton3_Click(object sender, ImageClickEventArgs e)
{
Panel2.Visible = false;
Panel3.Visible = true;
Panel4.Visible = false;
Panel5.Visible = false;
Panel6.Visible = false;
}
protected void ImageButton4_Click(object sender, ImageClickEventArgs e)
{
Panel2.Visible = false;
Panel3.Visible = false;
Panel4.Visible = false;
Panel5.Visible = false;
Panel6.Visible = true;
Image4.Visible = false;
GridView2.Visible = false;
}
protected void ImageButton5_Click(object sender, ImageClickEventArgs e)
{
Panel2.Visible = false;
Panel3.Visible = false;

Panel4.Visible = false;
Panel5.Visible = true;
Panel6.Visible = false;
}
protected void ImageButton7_Click(object sender, ImageClickEventArgs e)
{
con.Open();
string clearText = TextBox4.Text.Trim();
string cipherText = encryption.Encrypt(clearText, true);
byte[] filebytes = new byte[FileUpload1.PostedFile.InputStream.Length + 1];
a= System.IO.Path.GetExtension(FileUpload1.PostedFile.FileName);
b = FileUpload1.PostedFile.FileName;
c = FileUpload1.PostedFile.ContentType;
FileUpload1.PostedFile.InputStream.Read(filebytes, 0, filebytes.Length);
string paths = Request.PhysicalApplicationPath + "Files\\" +
System.IO.Path.GetFileName(FileUpload1.FileName);
FileUpload1.SaveAs(Request.PhysicalApplicationPath + "Files\\" +
System.IO.Path.GetFileName(FileUpload1.FileName));
/////////// ~ Zip File ~ ///////////////

string s1 = FileUpload1.FileName;
//string s2 = "C:\\Documents and Settings\\Administrator\\My Documents\\Visual Studio 2005\\WebSites\\files\\";
string s2 = Request.PhysicalApplicationPath + "Files" + "\\";
string srcFile = s2 + s1;
//string d1 = "C:\\Documents and Settings\\Administrator\\My Documents\\Visual Studio 2005\\WebSites\\files\\";
string d1 = Request.PhysicalApplicationPath + "ZipFiles" + "\\";
string d2 = FileUpload1.FileName;
string dstFile = d1 + d2 + ".zip";
Session["filen"] = dstFile;
//Important wordfile and compress files are saved on the same folder.
FileStream fsIn = null; // will open and read the srcFile
FileStream fsOut = null; // will be used by the GZipStream for output to the dstFile
GZipStream gzip = null;
byte[] buffer;
int count = 0;

try
{
fsOut = new FileStream(dstFile, FileMode.Create, FileAccess.Write, FileShare.None);
gzip = new GZipStream(fsOut, CompressionMode.Compress, true);

GridView2.Visible = false;
Image4.Visible = true;

////////////////////////////////////////////////////////////////

con.Open();
DataSet ds1 = new DataSet();
SqlDataAdapter da = new SqlDataAdapter("select fname,counts from fileusage", con);
da.Fill(ds1);
OWC.ChartSpaceClass oChartSpace = new OWC.ChartSpaceClass();

System.IO.StringWriter sw = new System.IO.StringWriter();


XmlDocument xDoc = new XmlDocument();
ds1.WriteXml(sw);
// clean up
con.Close();
da.Dispose();
ds1.Dispose();
xDoc.LoadXml(sw.ToString());
sw.Close();
System.Xml.XmlNodeList nodes;
nodes = xDoc.ChildNodes.Item(0).ChildNodes;
int nCount = nodes.Count;
string[] aNames = new string[nCount];
string[] aTotals = new string[nCount];
string names = String.Empty;
string totals = String.Empty;
int i = 0;
for (i = 0; i < nCount; i++)
{
aNames[i] = nodes.Item(i).ChildNodes.Item(0).InnerText;
aTotals[i] = nodes.Item(i).ChildNodes.Item(1).InnerText;
}

names = String.Join("\t", aNames);


totals = String.Join("\t", aTotals);
oChartSpace.Charts.Add(0);
oChartSpace.Charts[0].SeriesCollection.Add(0);
oChartSpace.Charts[0].SeriesCollection[0].SetData(OWC.ChartDimensionsEnum.chDimCategories,
Convert.ToInt32(OWC.ChartSpecialDataSourcesEnum.chDataLiteral), names);
oChartSpace.Charts[0].SeriesCollection[0].SetData(OWC.ChartDimensionsEnum.chDimValues,
Convert.ToInt32(OWC.ChartSpecialDataSourcesEnum.chDataLiteral), totals);
string strFullPathAndName = Server.MapPath("~/graphs/" +
System.DateTime.Now.Ticks.ToString() + ".gif");
oChartSpace.ExportPicture(strFullPathAndName, "gif", 800, 600);
string[] arr = new string[] { };
arr = strFullPathAndName.Split('\\');
string[] arr = new string[] { };
arr = strFullPathAndName.Split('\\');
Image4.ImageUrl = "~/" + arr[arr.Length - 2] + "/" + arr[arr.Length - 1];
}
catch (Exception ex)
{
Response.Write(ex.Message);
}
finally
{
if (gzip != null) gzip.Dispose();
if (fsOut != null) fsOut.Dispose();
if (fsIn != null) fsIn.Dispose();
}
}
protected void ImageButton10_Click(object sender, ImageClickEventArgs e)
{
GridView2.Visible = true;
Image4.Visible = false;
con.Open();
SqlDataAdapter sda = new SqlDataAdapter("select * from downloads", con);
DataSet ds = new DataSet();
sda.Fill(ds);
GridView2.DataSource = ds;
GridView2.DataBind();
con.Close();
}
protected void TextBox7_TextChanged(object sender, EventArgs e)
{
string clearText = TextBox7.Text.Trim();
string cipherText = encryption.Encrypt(clearText, true);
Label14.Text = cipherText;
}
protected void ImageButton6_Click(object sender, ImageClickEventArgs e)
{
con.Open();
SqlCommand cmd11 = new SqlCommand("update Admins set pass='" + TextBox2.Text +
"'", con);
cmd11.ExecuteNonQuery();
con.Close();
RegisterStartupScript("msg", "<script>alert('Password Successfully Changed...')</script>");
TextBox1.Text = "";
TextBox2.Text = "";
TextBox3.Text = "";
Panel2.Visible = false;
Panel3.Visible = false;
Panel4.Visible = true;
Panel5.Visible = false;
Panel6.Visible = false;
}
}

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Xml;
using OWC;
using System.IO;
using System.IO.Compression;
using System.Text;

public partial class Downloading : System.Web.UI.Page
{
    SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["connection"]);

    protected void Page_Load(object sender, EventArgs e)
    {
        string a1 = (string)Session["fnam"];
        SqlDataAdapter sda = new SqlDataAdapter("select * from uploads where fname='" + a1 + "'", con);
        DataSet ds = new DataSet();
        sda.Fill(ds);
        string filepath = ds.Tables[0].Rows[0]["zips"].ToString();
        string filename = Path.GetFileName(filepath);
        System.IO.Stream stream = null;
        try
        {
            stream = new FileStream(filepath, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read);
            // Total bytes to read:
            long bytesToRead = stream.Length;
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.ContentType = "application/zip";
            Response.AddHeader("Content-Disposition", "attachment; filename=" + filename);
            // Read the bytes from the stream in small portions.
            while (bytesToRead > 0)
            {
                // Make sure the client is still connected.
                if (Response.IsClientConnected)
                {
                    // Read the data into the buffer and write it into the output stream.
                    byte[] buffer = new Byte[1000];
                    int length = stream.Read(buffer, 0, 1000);
                    Response.OutputStream.Write(buffer, 0, length);
                    Response.Flush();
                    // We have already read some bytes; only the remaining need to be read.
                    bytesToRead = bytesToRead - length;
                }
                else
                {
                    // Get out of the loop if the user is not connected anymore.
                    bytesToRead = -1;
                }
            }
        }
        catch (Exception ex)
        {
            // An error occurred.
            Response.Write(ex.Message);
        }
        finally
        {
            if (stream != null)
            {
                stream.Close();
            }
        }
    }
}

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Xml;
using OWC;
using System.IO;
using System.IO.Compression;
using System.Text;
using System.Net;
using System.Net.Mail;

public partial class registration : System.Web.UI.Page

string counts;

int lo;

string z1,z2,x1,x2,x3,x4;

//string owrid, owrpwd, yes;

string gMailAccount = "customerservice404@gmail.com";

string password = "custom404@service";

string to;

string subject = "Activation Code For Download";

string message;

//string Securitykey;

//string ranno;

encryption en=new encryption();

SqlConnection con = new


SqlConnection(ConfigurationManager.AppSettings["connection"]);

protected void Page_Load(object sender, EventArgs e)

Label3.Text=Request.Params["id"];

Session["tex"] = Label3.Text;

Label8.Text = Convert.ToString(en.userid1());

    protected void ImageButton2_Click(object sender, ImageClickEventArgs e)
    {
        Response.Redirect("Search.aspx");
    }

    protected void ImageButton1_Click(object sender, ImageClickEventArgs e)
    {
        string yz = Convert.ToString(en.userid2());
        con.Open();
        SqlCommand cmd = new SqlCommand("insert into downloads values('" + Label8.Text +
            "','" + Label3.Text + "','" + TextBox1.Text + "','" + TextBox2.Text + "','" +
            TextBox4.Text + "')", con);
        cmd.ExecuteNonQuery();
        SqlDataAdapter sda = new SqlDataAdapter("select * from fileusage", con);
        DataSet ds = new DataSet();
        sda.Fill(ds);
        if (ds.Tables[0].Rows.Count > 0)
        {
            for (int za = 0; za < ds.Tables[0].Rows.Count; za++)
            {
                counts = ds.Tables[0].Rows[za]["fname"].ToString();
                if (Label3.Text == counts)
                {
                    lo = Convert.ToInt32(ds.Tables[0].Rows[za]["counts"].ToString()) + 1;
                    SqlCommand cmd3 = new SqlCommand("update fileusage set counts='" + lo +
                        "' where fname='" + Label3.Text + "'", con);
                    cmd3.ExecuteNonQuery();
                    break;
                }
                else
                {
                    SqlCommand cmd4 = new SqlCommand("insert into fileusage values('" + yz +
                        "','" + Label3.Text + "','1')", con);
                    cmd4.ExecuteNonQuery();
                    break;
                }
            }
        }
        else
        {
            SqlCommand cmd1 = new SqlCommand("insert into fileusage values('" + yz + "','"
                + Label3.Text + "','1')", con);
            cmd1.ExecuteNonQuery();
        }
        con.Close();
        x1 = TextBox1.Text;
        x2 = TextBox2.Text;
        x3 = TextBox3.Text;
        x4 = TextBox4.Text;
        TextBox1.Text = "";
        TextBox2.Text = "";
        TextBox3.Text = "";
        TextBox4.Text = "";
        con.Open();
        SqlDataAdapter sda10 = new SqlDataAdapter("select efname from uploads where
            fname='" + Label3.Text + "'", con);
        DataSet ds10 = new DataSet();
        sda10.Fill(ds10);
        z1 = ds10.Tables[0].Rows[0]["efname"].ToString();
        message = "<hr><br>Hai " + "<b>" + x1 + " ! </b><br><br>" + "Your Activation
            Code is : " + "<b>" + z1 + "</b>";
        to = x2;
        NetworkCredential loginInfo = new NetworkCredential(gMailAccount, password);
        MailMessage msg = new MailMessage();
        msg.From = new MailAddress(gMailAccount);
        msg.To.Add(new MailAddress(to));
        msg.Subject = subject;
        msg.Body = message;
        msg.IsBodyHtml = true;
        try
        {
            SmtpClient client = new SmtpClient("smtp.gmail.com");
            client.EnableSsl = true;
            client.UseDefaultCredentials = false;
            client.Credentials = loginInfo;
            client.Send(msg);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
        Label9.Visible = true;
        Label10.Visible = true;
        TextBox5.Visible = true;
        ImageButton3.Visible = true;
    }

    protected void ImageButton3_Click(object sender, ImageClickEventArgs e)
    {
        SqlDataAdapter sda10 = new SqlDataAdapter("select efname from uploads where
            fname='" + Label3.Text + "'", con);
        DataSet ds10 = new DataSet();
        sda10.Fill(ds10);
        z2 = ds10.Tables[0].Rows[0]["efname"].ToString();
        if (z2 == TextBox5.Text)
        {
            TextBox5.Text = "";
            Response.Redirect("Downloading.aspx");
        }
        else
        {
            RegisterStartupScript("msg", "<script>alert('Enter Correct Activation
                Code..')</script>");
        }
    }
}

9.2 SCREENSHOTS

