
SCHOOL OF TECHNOLOGY AND APPLIED SCIENCE

EDAPPALLY

SOFTWARE PIRACY DETECTION SYSTEM: SNITCH

GUIDANCE AND SUPERVISION:

Ms. Anju Sebastian

BY:
JINTO T.K
SETHU RAMAN. O
SIDHARTH M SURENDRAN

DECLARATION

We hereby declare that the project report Snitch is submitted in partial fulfilment of
the requirements for the 5th semester of BSc Cyber Forensics, and that it is a report of
original work done by ourselves.

PLACE : Edappally
DATE :

Jinto T.K
Sethu Raman O
Sidharth M.S

ACKNOWLEDGEMENTS

It is our privilege to extend sincere gratitude to all those who helped us in the planning
and development of this project. First of all, we would like to thank God Almighty for
showering his blessings upon us.

We extend our sincere and heartfelt thanks to Ms. Anju Sebastian, Project Guide for the
5th semester, for providing us an opportunity to present the project at its best.

We also express our deep gratitude to faculty member Mrs. Suma for her valuable
guidance, timely suggestions and help in the completion of this project.

We extend our sincere thanks to all the teaching and non-teaching staff for providing
the necessary facilities and help.

Last, but not least, we would like to thank all our friends and well-wishers for
their support and prayers.

Jinto T.K

Sethu Raman O

Sidharth M.S

SYNOPSIS

Software piracy is a term frequently used to describe the copying or use of computer
software in violation of its license. With the help of the internet, it is quite easy to
share and use pirated software. Given the vastness of the internet, it is hard to keep
track of your software once it is released. Snitch is a program that is used to track and
identify pirated software on the internet.

Here the user is required to provide his/her software name and the IP addresses of the
authorised servers from which he/she wishes to distribute the software. The Snitch bot
scans the internet, checks all the available links, compares the IP of each link with the
given IPs, and authenticates the link. The information about the links can also be sent
to the software designer via email.

Thus, by implementing this project, we can reduce or prevent the distribution of pirated
software across the internet.

CONTENTS

1. Introduction
   1.1 Project Purpose
2. Problem Identification
   2.1 Existing System
   2.2 Proposed System
   2.3 Module Description
3. System Study
   3.1 Hardware Specification
   3.2 Software Specification
   3.3 Feasibility Study
      3.3.1 Technical Feasibility
      3.3.2 Operational Feasibility
      3.3.3 Economic Feasibility
      3.3.4 Behavioural Feasibility
4. System Requirements
   4.1 Requirement Specification
5. System Design
   5.1 Input Design
   5.2 Output Design
   5.3 Dataflow Diagram
   5.4 Database Structure
6. System Testing
   6.1 Unit Testing
   6.2 Integration Testing
   6.3 Validation Testing
   6.4 Output Testing
   6.5 User Acceptance Testing
   6.6 System Testing
7. Conclusion
8. Forms
9. Bibliography

INTRODUCTION

With the increased usage of the internet, there is a rapid increase in the distribution of
pirated software via the internet. A user downloads software from an authorised site and
distributes it via file sharing or peer-to-peer networks. Snitch helps in tracking and
preventing the illegal distribution of software.

If a user tries to distribute the software via an unauthorised channel, the Snitch bot can
track the link, validate the source IP, and alert the authority about the pirating of the
software.

1.1 PROJECT PURPOSE


The main purpose of the software is to prevent software piracy via the internet.

2. PROBLEM IDENTIFICATION

2.1 EXISTING SYSTEM

Today the reach of the internet has increased exponentially; this makes it easier to distribute
software and much harder to keep track of the software once it is distributed. Current
measures against piracy include direct installation of the software from the site to the
user's computer. But even this system is flawed.

2.2 PROPOSED SYSTEM

Due to the above-mentioned flaws in the existing system, a new system has been proposed
which will overcome these difficulties. The proposed system, called Snitch, is developed
using the .NET framework and uses some Java to provide flexibility while searching the
internet. This project provides a method to track the links and IPs of the software using
its name, and validates them against the given authorised IP. If a link exists on an
unauthorised server, the system will alert the authority.

Advantages

An effective method to keep track of your software on the web.

Helps prevent unauthorised users from distributing pirated versions of the software.

2.3 MODULE DESCRIPTION

a) USER INPUT

This module collects the details of the software from the user.

The user is asked to give the name of the software and the IP of the authorised
server

b) AUTHORISATION AND ALERT

Searches the internet for all the available links to the software.

Tracks the IP of each link and verifies it against the user's authorised IP.

Shows the result and alerts the user if verification fails.
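The authorisation-and-alert step above can be sketched as follows. This is a minimal illustration only: the class and method names (`LinkChecker`, `isAuthorised`, `unauthorisedHosts`) are hypothetical and not taken from the actual Snitch code, which the report describes as a .NET front end with a Java bot.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Minimal sketch of the authorisation-and-alert module; all names here
// are illustrative, not from the actual Snitch implementation.
class LinkChecker {
    private final Set<String> authorisedIps;

    LinkChecker(Set<String> authorisedIps) {
        this.authorisedIps = authorisedIps;
    }

    // A host is authorised only if its IP was supplied by the owner.
    boolean isAuthorised(String hostIp) {
        return authorisedIps.contains(hostIp);
    }

    // Collect the host IPs that fail verification; in the real system
    // these would trigger an alert to the software owner.
    List<String> unauthorisedHosts(List<String> discoveredIps) {
        List<String> invalid = new ArrayList<>();
        for (String ip : discoveredIps) {
            if (!isAuthorised(ip)) {
                invalid.add(ip);
            }
        }
        return invalid;
    }
}
```

The essential design point is that the bot never needs to inspect the software itself; it only compares the host IP of each discovered link against the owner-supplied set.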

3. SYSTEM STUDY

3.1 HARDWARE SPECIFICATION

Processor : Intel Pentium

RAM : 64 MB RAM

Hard Disk Drive : 4 GB

Key Board : Standard 101/102 keys or Microsoft Natural Keyboard

CD Drive : Optional

Monitor : 14VI5 colour

Mouse : Microsoft Standard Mouse

3.2 SOFTWARE SPECIFICATION

Operating System : Windows 8

Language : C#, Java

Front End : ASP.NET

Bot : Java

Browser : Chrome

Here the user can also track pirated movies by providing the name of the movie, and
Snitch will find all the links that host the movie.

12
Windows 8

Windows 8 is a personal computer operating system developed by Microsoft as part
of the Windows NT family of operating systems. Development of Windows 8 started before
the release of its predecessor, Windows 7, in 2009. It was announced at CES 2011, and
followed by the release of three pre-release versions from September 2011 to May 2012.
The operating system was released to manufacturing on August 1, 2012, and was released
for general availability on October 26, 2012.

Windows 8 introduced major changes to the operating system's platform and user interface
to improve its user experience on tablets, where Windows was now competing with mobile
operating systems, including Android and iOS. In particular, these changes included a
touch-optimized Windows shell based on Microsoft's "Metro" design language, the Start screen
(which displays programs and dynamically updated content on a grid of tiles), a new
platform for developing apps with an emphasis on touch screen input, integration with
online services (including the ability to sync apps and settings between devices), and
Windows Store, an online store for downloading and purchasing new software. Windows 8
added support for USB 3.0, Advanced Format hard drives, near-field communications, and
cloud computing. Additional security features were introduced, such as built-in antivirus
software, integration with the Microsoft SmartScreen phishing filtering service, and support
for UEFI Secure Boot on supported devices with UEFI firmware, to prevent malware from
infecting the boot process.

Windows 8 was released to a mixed critical reception. Although reaction towards its
performance improvements, security enhancements, and improved support for touch screen
devices was positive, the new user interface of the operating system was widely criticized
for being potentially confusing and difficult to learn (especially when used with a keyboard
and mouse instead of a touch screen). Despite these shortcomings, 60 million Windows 8
licenses have been sold through January 2013, a number which included both upgrades and
sales to OEMs for new PCs.

On October 17, 2013, Microsoft released Windows 8.1. It addresses some aspects of
Windows 8 that were criticized by reviewers and early adopters and incorporates
additional improvements to various aspects of the operating system. Windows 8 was
ultimately succeeded by Windows 10 in July 2015. Support for Windows 8 RTM ended on
January 12, 2016; per Microsoft lifecycle policies regarding service packs, Windows
8.1 must be installed to maintain support and receive further updates.
3.3 FEASIBILITY STUDY

A feasibility study's main goal is to assess the economic viability of the proposed
system: does the idea make economic sense? The study should provide a thorough analysis
of the system, and its outcome will indicate whether or not to proceed with the proposed
venture. If the results of the feasibility study are positive, then the cooperative can
proceed to develop a business plan.

Feasibility analysis is the procedure for identifying the candidate system, and
evaluating and selecting the most feasible system. Feasibility analysis is initiated
when users within an organization face the need for change or improvement in the current
system, which could be either manual or automated. A feasibility study could be needed
because:

The current system may no longer suit its purpose,

Technological advancement may have rendered the current system obsolete.

When a new project is proposed, it normally goes through feasibility assessment.
A feasibility study is carried out to determine whether the proposed system is possible
to develop with the available resources, and what the cost considerations should be.

Factors considered in the feasibility analysis were:

Technical feasibility

Operational feasibility

Economic feasibility

Behavioural feasibility

3.3.1 Technical Feasibility

Technical feasibility is frequently the most difficult area to assess at the system
development process stage. It centres on the existing computer system (hardware, software
etc) and to what extent it can support the proposed addition. This involves financial
consideration to accommodate technical enhancement. It is essential that the process of
analysis and definition be conducted in parallel with an assessment of technical feasibility.
In this way, concrete specifications may be judged as they are determined. This system is
technically feasible.

The considerations that are normally associated with technical feasibility include:

1. Development risk: Can the system element be designed so that the necessary
functions and performance are achieved within the constraints uncovered during
analysis?

2. Resource availability: Are competent staff available to develop the system
element in question? Are other necessary resources (hardware and software)
available to build the system?

3. Technology: Has the relevant technology progressed to a state that will support
the system?

3.3.2 Operational Feasibility

Proposed projects are beneficial only if they can be turned into information
systems that will meet the organization's operating requirements. Simply, there is no
difficulty in implementing the system if the user has knowledge. The purpose of the
operational feasibility study is to determine whether the new system will be used if it is
developed and implemented, and whether there will be resistance from users that will
undermine the possible application benefits.

There was no difficulty in implementing the system, and the proposed system is
so effective, user friendly and functionally reliable that the users in the company will find
that the system reduces their hardships. If the users of the system are fully aware of the
internal working of the system, then they will not face any problem in running the
system.

3.3.3 Economic Feasibility

Economic analysis is the most important and frequently used method for
evaluating the effectiveness of the proposed system. It is very essential because the main
goal of the proposed system is to have an economically better result along with increased
efficiency. Cost-benefit analysis is the most important assessment of the economic
justification of the project.

Cost-benefit analysis delineates the costs for project development and weighs
them against the tangible and intangible benefits of the system, the relative size of the
project, and the expected return on investment desired as part of the company's strategic
plan. Benefits of a system are always determined relative to the existing mode of operation.

This system is economically feasible since it does not require any initial setup cost. It
does not need additional staffing. Economic feasibility deals with the economic impact
faced by the organization to implement a new system. Not only the cost of hardware,
software etc. is considered, but also the reduced costs. The project, once installed, will
certainly be beneficial since there will be a reduction in cost.

3.3.4 Behavioural Feasibility

The system does not require much maintenance once it is implemented. As the
system is fully GUI based, it would be easy for the user to get familiar with the system.
The user need not be a computer professional. The system is equipped with various design
tools so that the user can make use of these as and when required, thus needing less help
from outside for maintaining the system.

Feasibility Study Process

A feasibility study comprises the following steps:

1. Information assessment: Identifies information about whether the system helps in
achieving the objectives of the organization. It also verifies that the system can be
implemented using new technology and within the budget, and whether the system can
be integrated with the existing system.

2. Information collection: Specifies the sources from where information about the
software can be obtained. Generally, these sources include users and the software
development team.

3. Report writing: Uses a feasibility report, which is the conclusion of the feasibility
study by the software development team. It includes the recommendation whether the
software development should continue or not.

4. SYSTEM REQUIREMENTS

Requirements analysis, also called requirements engineering, is the process of
determining user expectations for a new or modified product. These features, called
requirements, must be quantifiable, relevant and detailed. In software engineering, such
requirements are often called functional specifications. Requirements analysis is an
important aspect of project management.

Requirements analysis involves frequent communication with system users to
determine specific feature expectations, resolution of conflict or ambiguity in requirements
as demanded by the various users or groups of users, avoidance of feature creep, and
documentation of all aspects of the project development process from start to finish. Energy
should be directed towards ensuring that the final system or product conforms to client
needs rather than attempting to mould user expectations to fit the requirements.

Requirements analysis is a team effort that demands a combination of hardware,
software and human factors engineering expertise, as well as skills in dealing with people.

Conceptually, requirements analysis includes three types of activity:

Eliciting requirements: The task of identifying the various types of requirements from
various sources, including project documentation (e.g. the project charter or
definition), business process documentation, and stakeholder interviews. This is
sometimes also called requirements gathering.

Analysing requirements: Determining whether the stated requirements are clear,
complete, consistent and unambiguous, and resolving any apparent conflicts.

Recording requirements: Requirements may be documented in various forms, usually
including a summary list, and may include natural-language documents, use cases, user
stories, or process specifications.

Requirements analysis can be a long and arduous process during which many
delicate psychological skills are involved. New systems change the environment and
relationships between people, so it is important to identify all the stakeholders, take into
account all their needs and ensure they understand the implications of the new systems.
Analysts can employ several techniques to elicit the requirements from the customer.
These may include the development of scenarios (represented as user stories in agile
methods), the identification of use cases, the use of workplace observation or through
holding interviews, or focus groups (more aptly named in this context as requirements
workshops, or requirements review sessions) and creating requirements lists. Prototyping
may be used to develop an example system that can be demonstrated to stakeholders.
Where necessary, the analyst will employ a combination of these methods to establish the
exact requirements of the stakeholders, so that a system that meets the business needs is
produced.

4.1 REQUIREMENT SPECIFICATION:


The data is produced during the fact-finding techniques. The data obtained is
collected and analysed to determine the output. This data mainly provides information
regarding the organizational demands and needs. Also, the users' requirements can be known.

IDENTIFICATION OF ESSENTIAL REQUIREMENTS:

Once requirements are clearly identified, greater accuracy can be achieved in the output.
All the facts should be given due importance, and from them the analysis should take the
accurate facts.

SOFTWARE/SYSTEM REQUIREMENT SPECIFICATION (SRS)

No. | Requirement | Essential or Desirable | Description of the Requirement | Remarks

RS1 | The system should have a good internet connection | Essential | The system needs to access the internet to search and obtain links | The internet is fundamental to the functioning of the system

RS2 | The system should have various fields to retrieve data from the user | Essential | Helps the user to provide data | Serves as the user interface

RS3 | The system should show the result of the verifications | Essential | This feature will help the user to assess and analyse the current situation | To avail more data

5. SYSTEM DESIGN

5.1 INPUT DESIGN


The user interface design is very important for any application. The interface design
describes how the software communicates within itself, with systems that interoperate with
it, and with the humans who use it. The interface is the packaging for computer software;
it should be easy to learn and simple to use. If the interface design is very good, the user
will be drawn into the interactive software application.

The input design is the process of converting the user-oriented inputs into a
computer-based format. The data is fed into the system using simple interactive forms. The
forms have been supplied with messages so that the user can enter data without facing any
difficulty. The data is validated wherever required in the project. This ensures that only
correct data is incorporated into the system.

The goal of designing input data is to make the automation as easy and error-free
as possible. For providing a good input design for the application, easy data input and
selection features are adopted. Input design requirements such as user friendliness, a
consistent format, and interactive dialogue for giving the right message and help to the user
at the right time are also considered in the development of this project. Input design involves
determining the record media, method of input, speed of capture and entry to the system.
This software has the following inputs:

Only authorized users can access the system.

Guarantee that transactions are acceptable.

Validates data for accuracy.

5.2 OUTPUT DESIGN


A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the users
and to other systems through outputs. In the output design it is determined how the
information is to be displayed for immediate need, and also the hard copy output. It is the
most important and direct source of information to the user and helps in decision-making.

The objective of the output design is to convey the information of all the past activities
and current status, and to emphasize important events. The output generally refers to the
results and information that are generated from the system. Outputs from computers are
required primarily to communicate the results of processing to the users. They are also used
to provide a permanent copy of the results for later consultation.

5.3 DATA FLOW DIAGRAM

A data-flow diagram (DFD) is a graphical representation of the "flow" of data through an
information system. DFDs can also be used for the visualization of data processing
(structured design).

On a DFD, data items flow from an external data source or an internal data
store to an internal data store or an external data sink, via an internal process.

A DFD provides no information about the timing or ordering of processes, or about whether
processes will operate in sequence or in parallel. It is therefore quite different
from a flowchart, which shows the flow of control through an algorithm, allowing a reader
to determine what operations will be performed, in what order, and under what
circumstances, but not what kinds of data will be input to and output from the system, nor
where the data will come from and go to, nor where the data will be stored (all of which
are shown on a DFD).

When it comes to conveying how information flows through systems (and how that data is
transformed in the process), data flow diagrams (DFDs) are the method of choice over
technical descriptions, for three principal reasons:

(1) DFDs are easier to understand by technical and nontechnical audiences.

(2) DFDs can provide a high level system overview, complete with boundaries

and connections to other systems.

(3) DFDs can provide a detailed representation of system components.

DFDs help system designers and others during initial analysis stages visualize
a current system or one that may be necessary to meet new requirements. Systems analysts

prefer working with DFDs, particularly when they require a clear understanding of the
boundary between existing systems and postulated systems. DFDs represent the following:
1. External devices sending and receiving data

2. Processes that change that data

3. Data flows themselves

4. Data storage locations

The hierarchical DFD typically consists of a top-level diagram (Level 0) underlain by
cascading lower-level diagrams (Level 1, Level 2, ...) that represent different parts of
the system. A data flow diagram is used to show how data flows through the system and the
processes that transform the input data into output. Data flow diagrams are a way of
expressing system requirements in a graphical manner. The DFD represents one of the most
ingenious tools used for structured analysis.

In the normal convention, a logical DFD can be completed using only four notations.

Function Symbol:

A function is represented using a circle. This symbol is called a process
or a bubble. Bubbles are annotated with the names of the corresponding functions.

External Entity Symbol:

An external entity such as a user, project manager etc. is represented by a
rectangle. The external entities are essentially those physical entities external to the
application system which interact with the system by inputting data to the system or
by consuming the data produced by the system. In addition to human users, the
external entity symbol can be used to represent external hardware and software such
as application software.

Data Flow Symbol:

A directed arc or an arrow is used as a Data Flow Symbol. This represents the data flow
occurring between two processes, or between an external entity and a process, in the
direction of the Data Flow Arrow. Data Flow Symbols are annotated with the corresponding
data names.

Data Store Symbol:

A Data Store represents a logical file; it is represented using two parallel
lines. A logical file can represent either a data structure or a physical file on disk.
Each data store is connected to a process by means of a Data Flow Symbol. The direction
of the Data Flow Arrow shows whether data is being read from or written into a Data
Store. An arrow flowing in or out of a data store implicitly represents the entire area
of the Data Store, and hence arrows connecting to a data store need not be annotated
with the names of the corresponding data items.

Output Symbol:

The output symbol is used when a hardcopy is produced and the user of
the copies cannot be clearly specified or there are several users of the output.

The DFD at the simplest level is referred to as the CONTEXT ANALYSIS DIAGRAM. These are
expanded level by level, each explaining its process in detail.

Level 0:

The owner puts the software into the authorised server for distribution. A user takes the
software from the authorised server and hosts it through an unauthorised server. Snitch
checks the hosted link, verifies its IP, and alerts the owner.

Level 1:

The owner provides the software name and the authorised server IP. When a user pirates
the software, the hosted link gets detected by Snitch.

Level 2:

The owner gives the name of the software and the authorised IP. Snitch uses this
information to check for unauthorised links to the software on the internet: it finds all
the available links and validates each link's host IP against the given IP. If the
validation fails, Snitch alerts the owner by displaying all the unauthorised links or via
email.

DATA TABLES

1. PANE Name: DETAILS

Field Name           | Data Type | Size | Description
Software name        | text      | 20   | The name of the software
Authorised server IP | number    | 12   | To validate other IPs

2. PANE Name: OUTPUT

Field Name   | Data Type | Size | Description
Valid link   | text      | 50   | Gives the list of all the authorised links
Invalid link | text      | 50   | Gives the list of all the unauthorised links
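The two panes can be modelled with simple data holders. The record and method names below (`Details`, `Output`, `Panes.classify`) are illustrative only; the report does not specify the actual class layout, so this is a sketch of how the DETAILS input could be mapped to the two OUTPUT lists.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical data holders mirroring the DETAILS and OUTPUT panes;
// field names follow the tables above, not the real Snitch code.
record Details(String softwareName, String authorisedServerIp) {}

record Output(List<String> validLinks, List<String> invalidLinks) {}

class Panes {
    // Split discovered links into the two OUTPUT fields by comparing
    // each link's host IP with the authorised server IP from DETAILS.
    static Output classify(Details details, Map<String, String> linkToHostIp) {
        List<String> valid = new ArrayList<>();
        List<String> invalid = new ArrayList<>();
        for (Map.Entry<String, String> e : linkToHostIp.entrySet()) {
            if (e.getValue().equals(details.authorisedServerIp())) {
                valid.add(e.getKey());
            } else {
                invalid.add(e.getKey());
            }
        }
        return new Output(valid, invalid);
    }
}
```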

6. SYSTEM TESTING

Testing enhances the integrity of a system by identifying deviations in the design and
development of the expected end product. It should focus more on the error-prone areas of
the application. This helps in the prevention of errors in a system and builds confidence
that the system will work without error after testing. Testing is the process of executing
a program with the intent of finding an error. Testing also adds value to the product by
confirming conformance to the user requirements. Testing verifies that the software
deliverables conform precisely to the requirements and design phases. A good test case is
one that has a high probability of finding an as-yet undiscovered error.

Testing involves a series of operations on a system or application under controlled
conditions and subsequently evaluating the results. The controlled conditions should
include both normal and abnormal conditions. Testing is planned and monitored for each
testing level (e.g., unit, integration, system and acceptance).

Testing is the major quality measure employed during software development.
After the coding phase, computer programs are available that can be executed for testing
purposes. Testing not only has to uncover errors introduced during coding, but also locate
errors committed during the previous phases. Thus the aim of testing is to uncover
requirements, design or coding errors in the programs.

System testing of software is testing conducted on a complete, integrated system
to evaluate the system's compliance with its specified requirements. System testing falls
within the scope of black box testing, and as such should require no knowledge of the inner
design of the code. As a rule, testing takes as its input all of the integrated software
components that have successfully passed integration testing.

Testing is a process of executing a program with the intent of finding an error. A
good test is one that has a high probability of finding a yet undiscovered error. The primary
objective of test case design is to derive a set of tests that has the highest likelihood of
systematically uncovering different classes of errors in the software. Testing begins at the
module level and works outward towards the integration of the entire software. A series of
tests are performed for this project before the system is ready for acceptance. Some of the
testing strategies applied for the system are listed here.
Testing Guidelines:

Some important guidelines for testing the software are given below.

Testers, while testing the product, must have a destructive attitude in order to do
effective testing.

Testing must start the moment the requirement analysis phase starts, in order to avoid
defect migration.

Both functional as well as non-functional requirements of the software product must
be tested. As far as possible, testing must be supported by automated testing tools.

Full testing, i.e., starting from the requirement phase till acceptance testing, must be
used for critical software.

Testing should also be conducted by a third party independently for effective results.

Testware must be properly documented using software test standards and controlled
using a configuration management system.

Quantitative assessment of testing and its results must be done.

Testing is never 100% complete.

TESTING METHODS:

Testing is the phase where the bugs in the programs are found and corrected.
One of the goals during dynamic testing is to produce a test suite where the calculated
results are compared with the desired outputs, such as reports in this case. This is applied
to ensure that modifications to the program do not have any side effects; this type of
testing is called regression testing. Testing generally removes the residual bugs and
improves the reliability of the program. The basic types of testing are:

> Unit Testing


> Integration Testing
> Validation Testing
> Output testing

> User Acceptance Testing

6.1 UNIT TESTING

A unit test comprises the set of tests performed by an individual programmer prior
to integration of the unit into a larger system (coding & debugging → unit testing →
integration testing).

Unit testing is done to test the modules (classes) one by one, in order to make
sure that they work by themselves before they are put together with other modules. The
tests are very simple, at least for small modules with small interfaces to the outside
world. What was done to test the classes is to use the different methods that are defined
and make sure they return the result that should be expected. Each and every screen was
put to the test by giving random values as input. Breakage testing was done with the
boundary conditions too. The modules were checked to see that the methods return the
expected result and that the classes handle wrong input in a correct way, for example by
showing error messages whenever needed, and can handle exceptions effectively.

After coding, each dialogue is tested and run individually. All unnecessary
code was removed, and it was ensured that all the modules worked as the programmer
would expect. Logical errors found were corrected. So, by running all the modules
independently and verifying the output of each module in the presence of staff, it was
concluded that the program was functioning as expected.

This testing focuses on each module and individual software unit ensuring that
they work properly. Unit testing checks for the changes made in the new system or any
program in it. Unit testing includes white box testing.

This is the first level of testing. In it, the different modules are tested against the
specifications produced during the design of the modules. Unit testing is done for the
verification of the code produced during the coding of single program modules in an
isolated environment. Unit testing first focuses on the modules independently of one
another to locate errors.

The goal of unit testing is to isolate each part of the program and show that the
individual parts are correct. A unit test provides a strict, written contract that the piece of
code must satisfy. As a result, it affords several benefits: unit tests find problems early in
the development cycle.
Unit testing also allows the programmer to refactor code at a later date and make sure the
module still works correctly (i.e., regression testing). The procedure is to write test cases
for all functions and methods so that whenever a change causes a fault, it can be quickly
identified and fixed.
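As a minimal regression sketch, the saved test cases are kept and rerun after every change; `normalise_ip` is an illustrative helper (not named in the report) of the kind Snitch might use before comparing IP addresses:

```python
def normalise_ip(raw_ip):
    """Hypothetical helper: strip stray whitespace before IPs are compared."""
    return raw_ip.strip()

# The cases written during development are kept as the regression suite.
regression_cases = [("10.0.0.5 ", "10.0.0.5"), (" 10.0.0.5", "10.0.0.5")]

def run_regression():
    """Rerun every saved case; a change that breaks the helper fails here at once."""
    return all(normalise_ip(raw) == expected for raw, expected in regression_cases)

assert run_regression()
```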

6.2 INTEGRATION TESTING

Integration testing is the phase in software testing in which individual software
modules are combined and tested as a group. It occurs after unit testing and before system
testing. Integration testing takes as its input modules that have been unit tested, groups them
into larger aggregates, applies tests defined in an integration test plan to those aggregates,
and delivers as its output the integrated system, ready for system testing.

The purpose of integration testing is to verify the functional, performance, and
reliability requirements placed on major design items. These "design items", i.e.
assemblages (or groups of units), are exercised through their interfaces using black box
testing, with success and error cases simulated via appropriate parameter and data inputs.
Simulated usage of shared data areas and inter-process communication is tested, and
individual subsystems are exercised through their input interfaces. Test cases are
constructed to check that all components within assemblages interact correctly, for example
across procedure calls or process activations, and this is done after testing individual
modules, i.e. unit testing. The overall idea is a "building block" approach, in which verified
assemblages are added to a verified base which is then used to support the integration
testing of further assemblages.
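An integration test of this kind might look as follows for Snitch. The module names are illustrative assumptions, not the report's actual code: two unit-tested pieces are combined into an assemblage and exercised through its interface, checking the data that flows across the procedure call:

```python
def extract_ip(link_record):
    """Module A (unit tested): pull the host IP out of a scanned link record."""
    return link_record["ip"]

def is_authorised(ip, authorised_ips):
    """Module B (unit tested): compare an IP against the authorised server IPs."""
    return ip in authorised_ips

def classify_link(link_record, authorised_ips):
    """Assemblage under integration test: A feeds B across a procedure call."""
    if is_authorised(extract_ip(link_record), authorised_ips):
        return "authorised"
    return "pirated"

# Integration test cases: verify the data flowing across the A -> B interface.
authorised = {"10.0.0.5", "10.0.0.6"}
assert classify_link({"url": "http://a.example", "ip": "10.0.0.5"}, authorised) == "authorised"
assert classify_link({"url": "http://b.example", "ip": "192.0.2.9"}, authorised) == "pirated"
```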

Data can be lost across an interface; one module can have an adverse effect on
another; and sub-functions, when combined, may not produce the desired major function.
Integration testing is a systematic technique for constructing the program structure while at
the same time conducting tests to uncover errors associated with the interfaces. The
objective is to take the unit-tested modules and test them as a whole. Here correction is
difficult, because the vast expanse of the entire program complicates the isolation of
causes. Thus, in the integration testing step, all the errors uncovered are corrected before
the next testing steps.

Some different types of integration testing are big bang, top-down, and bottom-up.

Big Bang

In this approach, all or most of the developed modules are coupled together to form a
complete software system, or a major part of the system, which is then used for integration
testing. The Big Bang method is very effective for saving time in the integration testing
process. However, if the test cases and their results are not recorded properly, the entire
integration process becomes more complicated and may prevent the testing team from
achieving the goal of integration testing.

A type of Big Bang Integration testing is called Usage Model testing. Usage
Model testing can be used in both software and hardware integration testing. The basis
behind this type of integration testing is to run user-like workloads in integrated user-like
environments. In doing the testing in this manner, the environment is proofed, while the
individual components are proofed indirectly through their use. Usage Model testing takes
an optimistic approach to testing, because it expects to have few problems with the
individual components.

The strategy relies heavily on the component developers to do the isolated unit
testing for their product. The goal of the strategy is to avoid redoing the testing done by
the developers, and instead flesh out problems caused by the interaction of the components
in the environment. For integration testing, Usage Model testing can be more efficient and
provides better test coverage than traditional focused functional integration testing. To be
more efficient and accurate, care must be used in defining the user-like workloads for
creating realistic scenarios in exercising the environment. This gives confidence that the
integrated environment will work as expected for the target customers.

Top-Down Integration

This method is an incremental approach to the construction of the program structure.
Modules are integrated by moving downward through the control hierarchy, beginning with
the main program module. The modules subordinate to the main program module are
incorporated into the structure in either a depth-first or breadth-first manner.

Bottom-Up Integration

This method begins construction and testing with the modules at the lowest level in
the program structure. Since the modules are integrated from the bottom up, the processing
required for modules subordinate to a given level is always available, and the need for
stubs is eliminated.
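In bottom-up integration, the lowest-level module is exercised first by a small throwaway driver. The sketch below is illustrative (the parser name and the `url,ip` line format are assumptions, not taken from the report):

```python
def parse_scan_line(line):
    """Lowest-level module: split one 'url,ip' line of crawler output."""
    url, ip = line.split(",")
    return {"url": url.strip(), "ip": ip.strip()}

def driver():
    """Throwaway driver exercising the lowest module before anything above
    it exists; no stubs are needed because it has no subordinates."""
    return parse_scan_line("http://x.example , 10.0.0.5")

assert driver() == {"url": "http://x.example", "ip": "10.0.0.5"}
```

Once the lowest module passes under its driver, the module above it is added and the driver is replaced by the real caller.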

6.3 VALIDATION TESTING

It provides the final assurance that the software meets all functional,
behavioural and performance requirements. The software is completely assembled as a
package. Validation succeeds when the software functions in the manner the user expects.
Validation refers to the process of using the software in a live environment in order to find
errors.

System validation checks the quality of the software in both simulated and
live environments; the software is put through a lot of validation testing before finally being
implemented. The feedback from the validation phase generally produces changes in the
software. The system objectives and the functional performance requirements were
examined to see whether all these criteria satisfy the system's needs.

The system is then presented before the manager along with the reports generated,
and the system then undergoes a testing phase with sample test data provided by him.
System testing in this manner verifies that all the modules work together and
generate the intended results. All individual modules should work in tandem so that
the overall system function and performance are achieved.

6.4 OUTPUT TESTING

After performing the validation testing, the next step is output testing of the
proposed system, since no system can be useful if it does not produce the required output.
The generated output is considered in two ways: one is on screen and the other is the
printed format.

The output format on the screen is found to be correct, as the format was designed
in the system design phase according to the user's needs. For the hard copy also, the output
comes out as per the requirements specified by the user. Hence output testing does not
result in any correction in the system.

6.5 USER ACCEPTANCE TESTING

User acceptance testing is the key factor for the success of any system. The
system under consideration is tested for user acceptance by constantly keeping in touch
with the prospective system users at the time of development and making changes whenever
required. This is done with regard to the following points: input screen design, output
screen design and the menu-driven system.

It is formal testing conducted to determine whether or not the system satisfies its
acceptance criteria, enabling the customer to decide whether or not to accept the
system. Preparation of data plays a vital role in system testing: after preparing the test
data, the system under study is tested using it. While testing the system with the test data,
errors are again uncovered and corrected, and the corrections are noted for future use.

6.6 SYSTEM TESTING

System testing means testing all the different parts of the program together and
seeing how the system reacts under certain conditions. Some areas that should be tested
are recovery and stress.

When testing the whole system, there are a few questions to be answered:

1. How will you set up the system for testing?

2. How do you perform the testing?

3. What is the expected result?

Black Box Testing


Black box testing is a testing method in which test data are derived from the specified
functional requirements without regard to the final program structure, because only the
functionality of the software module is of concern in black box testing. It mainly refers to
a functional testing method that emphasises executing the functions and examining their
input and output data. The tester treats the software under test as a black box: only the
inputs, outputs and specification are visible, and the functionality is determined by
observing the outputs for the corresponding inputs. In testing, various inputs are exercised
and the outputs are compared against the specification to validate correctness. All test
cases are derived from the specification; no implementation details of the code are
considered. Here, the functions for checking all the input values are written.

While the user is logged in and changing the password, entering or editing user
information in the member profile, or adding a member, if the user forgets to give any value
then an appropriate message is displayed for that error.
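A black-box check of this kind can be sketched as below. `check_username` is a hypothetical module under test whose internals are treated as opaque; every case is taken from a specification clause, never from the code:

```python
def check_username(name):
    """Hypothetical module under test; the tester never looks inside it."""
    return bool(name) and name.isalnum()

# Each case comes straight from the specification, not from the program structure.
spec_cases = [
    ("jinto", True),    # spec: plain alphanumeric names are accepted
    ("", False),        # spec: a blank field is rejected with a message
    ("user@1", False),  # spec: special characters are not allowed
]

# Exercise the inputs and compare outputs against the specification.
for given, expected in spec_cases:
    assert check_username(given) == expected
```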

Testing is performed on the complete, integrated system to verify that the system is
compliant with its specifications and requirements. System testing is normally done before
fully implementing the system for the users. The testing done before implementation
helps the developer to perform any further operations on the system.

TYPES OF TESTING:

1. White Box Testing


Contrary to black box testing, the software is viewed as a white box, or glass box, in
white box testing, as the structure and flow of the software under test are visible to the
tester. Testing plans are made according to the details of the software implementation,
such as the programming language, logic and style. Test cases are derived from the
software structure; white box testing is also called glass box testing, logic-driven testing or
design-driven testing. Many techniques are available in white box testing, because the
problem of intractability is eased by specific knowledge of, and attention to, the structure
of the software under test. The intention of exhausting some aspect of the software is still
strong in white box testing, and some degree of exhaustion can be achieved, such as
executing each line of code at least once (statement coverage), traversing every branch
statement (branch coverage), or covering all possible combinations of true and false
condition predicates (multiple condition coverage). One can also obtain reliability
estimates from random testing results based on operational profiles. Tests of data flow
across a module interface are required before any other testing is initiated.

Here, checking is done to find whether the proper username and password are passed
between sub-modules, and also whether the correct input is given as the input for
processing.

Boundary conditions are tested to ensure that the modules operate properly at the
boundaries established to limit or restrict processing; the result is determined by the
correct boundary values. All logical decisions are exercised on their true and false sides.
In the login module, the user's status is checked: if it is false, an appropriate error message
is displayed; if it is true, the corresponding user is allowed to continue with the next
process.
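With the code structure visible, white-box cases can be chosen so that every decision runs on both its true and false side and the boundary value is hit exactly. The following sketch is illustrative; the field limit and status logic are assumptions, not the project's actual login code:

```python
MAX_LEN = 20  # assumed limit for the username field

def login_status(active, name):
    if not active:            # decision 1: exercised on both its sides
        return "error: inactive user"
    if len(name) > MAX_LEN:   # decision 2: boundary on the field length
        return "error: name too long"
    return "ok"

assert login_status(False, "jinto") == "error: inactive user"  # decision 1, true side
assert login_status(True, "a" * 21) == "error: name too long"  # just past the boundary
assert login_status(True, "a" * 20) == "ok"                    # exactly at the boundary
```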

2. Top Down Testing


Modules are integrated by moving downward through the control hierarchy,
beginning with the main module. Depth-first integration integrates all modules on a major
control path of the structure; the selection of a major path is somewhat arbitrary and
depends on application-specific characteristics. Check whether the appropriate operations
execute in the correct order. For example, if the read specimen or process operation is
clicked, then the corresponding task should be performed and the correct result displayed.
If the cancel button on that page is clicked, then the main page should be displayed.

3. Output Testing
Another important type of testing is output testing. It is important because no system
can be useful if it does not provide the required output in the required format. Here the
output format is considered in two ways: one is on screen and the other is the printed
format. The output format on the screen is found to be correct, as the format was designed
in the system design phase according to the user's needs. For the hard copy also, the output
comes out as per the requirements specified by the user. Hence the output test does not
result in any correction in the system.

Error Message

Error messages and warnings are failure notices delivered to the users of an
interactive system when something has gone out of kilter. At their worst, error messages
and warnings impart useless or misleading information and serve only to increase user
frustration. Meaningful error messages should be given in any interactive system. While
the user is logging in, if the user gives an invalid username then the error message is
"Invalid Username, Try Again"; if the user enters a wrong password, the error message is
"Invalid Password, Try Again".

If the user enters any special characters, the message is "Special Characters are Not
Allowed!! Give Valid Input!". If the user tries to input images other than of type .BMP, the
appropriate error message is displayed.

7. CONCLUSION
The creation and distribution of pirated software and movies are at an all-time high.
The vastness of the internet makes it almost impossible to track the distribution of such
software. Snitch provides a very convenient and efficient way to keep track of your
software and to detect and act against unauthorised distribution and hosting.

Once the user enters his/her software name and the authorised host server IP, Snitch
gathers all the available links that host the software, tracks each link's IP and verifies it
against the user-given IP.

By implementing this system, the user can effectively prevent his or her software or
film from being pirated. Identifying and eradicating pirated software is essential.

FORMS

BIBLIOGRAPHY

Web references:

1. www.tutorialspoint.com
2. www.w3school.com

Book Reference

1. Beginning ASP.NET 3.5 by Imar Spaanjaars


2. Effective Java by Joshua Bloch

