
CHAPTER-7

TESTING
Testing Can Be Defined As:
Testing is the process of executing a program with the intention of finding errors, or
Testing is the process of verifying an application to check whether it behaves as per the requirements, or
Testing is the process of executing test cases on the application to differentiate between the expected and actual results, or
Testing is the process of verifying customer needs with respect to the designed application, or
Testing is the process of finding errors, faults, defects, bugs and issues in the application.
Testing is also the process of requirements analysis, test case design, test case execution, bug reporting, test status updates, customer interaction and application maintenance. In short, testing is the process of V & V (Verification and Validation).

Reasons for Testing:

1. To find defects.
2. To ensure that the application meets customer expectations/requirements.
3. To reduce the cost of quality by delivering a defect-free product on time and within budget.
4. To achieve customer satisfaction.
5. To continuously improve the testing process in the SDLC.
6. To reduce the risk of software failure and to determine the state of the software.
7. To deliver a quality, bug-free application to the customer.
8. To retain the company's name and fame.
9. If there is no testing, then there are no quality products/projects/applications.


Few other reasons:
1. The developer thinks in terms of code, structures, loops, statements, code integration and so on.
2. There should be someone else who can think in terms of functional behavior.
3. There should be someone who can think from the point of view of customers/end users.
4. Thus the tester comes into the picture, and testing is needed for sure.
Testing begins right from requirements analysis, to check whether the requirements are clear, understandable and testable, and whether they can be transformed into software/a program.

Requirements for Testing:

1. Clients/Customers (Grey Box Testing > Code, Functional, UAT)
2. Testers (Black Box Testing > Smoke, Functional, Integration, System, Regression Testing)
3. Users (Functional, UI Testing)
4. Developers (White Box Testing > Unit Testing, Integration Testing)
5. The tester is the right person of all to validate the application.

Testing also needs:
1. Requirements/specifications/wireframes
2. A good understanding of the BRS/SRS/wireframes
3. QATP (Test Plan), QATC (Test Cases)
4. Test setup (environment, software and tools)
5. Resources (machines, testers, lab).

Types of Testing:
Manual Testing:
1.Sanity Testing
2.Functional Testing
3.Integration Testing
4.System Testing
5.UI Testing
6.Regression Testing
7.Bug Verification / Bug Tracking
8.Compatibility Testing
9.UAT (Alpha and Beta)

Automation Testing:
1.Sanity Testing
2.Functional Testing
3.Regression Testing
4.Load Testing
5.Performance Testing
6.Stress Testing
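
The checks listed under automation testing are written as scripted test cases that a framework can re-run unattended on every build. Below is a minimal sketch, assuming a hypothetical apply_discount function, of how a functional check that later doubles as a regression test might look in Python's unittest:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: returns the price reduced by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountRegressionTests(unittest.TestCase):
    def test_typical_discount(self):
        # Functional check: compare the expected and actual result
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # Negative test: invalid input must be reported as an error
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Because the suite runs without manual steps, the same cases can be re-executed after every code change, which is what regression testing amounts to in practice.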

Reasons for Bugs in Software:


1.Poor Requirements (Unclear Requirements)
2.Programming errors
3.Software complexity
4.Changes in Requirements
5.Time pressure
6.Inefficient Dev (Poorly coded modules)
7.Poor Management
8.Poor Documentation

9.Poor Quality Assurance

The Flow of the Testing Process Is:
Requirements analysis > Test case design > Test case execution > Bug reporting > Test status updates > Customer interaction > Application maintenance.

Various Testing Strategies Are:


Unit Testing:
Unit testing focuses on the building blocks of the software system, that is, objects and subsystems. There are three motivations for focusing on components. First, unit testing reduces the complexity of the overall test activities, allowing us to focus on smaller units of the system. Second, unit testing makes it easier to pinpoint and correct faults, given that few components are involved in the test. Third, unit testing allows parallelism in the testing activities; that is, each component can be tested independently of the others.
For example, the following test cases exercise a login that requires a valid username, email address and PIN:

Test case id   Input                                        Expected Behavior
1              Valid username, Valid email, Valid PIN       Pass
2              Invalid username, Valid email, Valid PIN     Fail
3              Valid username, Invalid email, Valid PIN     Fail
4              Valid username, Valid email, Invalid PIN     Fail
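
A minimal sketch of how these test cases might be automated, assuming a hypothetical validate_login(username, email, pin) function that accepts the login only when all three inputs are valid:

```python
import re
import unittest

def validate_login(username, email, pin):
    """Hypothetical unit under test: accepts the login only if all inputs are valid."""
    valid_username = username.isalnum() and 3 <= len(username) <= 20
    valid_email = re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email) is not None
    valid_pin = pin.isdigit() and len(pin) == 4
    return valid_username and valid_email and valid_pin

class LoginUnitTests(unittest.TestCase):
    def test_all_inputs_valid_passes(self):
        self.assertTrue(validate_login("alice01", "alice@example.com", "1234"))

    def test_invalid_username_fails(self):
        self.assertFalse(validate_login("!!", "alice@example.com", "1234"))

    def test_invalid_email_fails(self):
        self.assertFalse(validate_login("alice01", "not-an-email", "1234"))

    def test_invalid_pin_fails(self):
        self.assertFalse(validate_login("alice01", "alice@example.com", "12ab"))

if __name__ == "__main__":
    unittest.main()
```

Each test method corresponds to one row of the table above, so a failure immediately points to the faulty input combination.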

Integration Testing:
Integration testing detects faults that have not been detected during unit testing by focusing on small groups of components. Two or more components are integrated and tested, and when no new faults are revealed, additional components are added to the group.
This procedure allows the testing of increasingly complex parts of the system while keeping the location of potential faults relatively small.
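
A minimal sketch of the idea, assuming two hypothetical components (an email validator and a user repository) that are unit tested separately and then exercised together through a registration function:

```python
import unittest

# Hypothetical components, shown only to illustrate the integration step.
def is_valid_email(email):
    return "@" in email and "." in email.split("@")[-1]

class UserRepository:
    def __init__(self):
        self._users = {}

    def add(self, email):
        self._users[email] = True

    def exists(self, email):
        return email in self._users

def register_user(email, repository):
    """Integrates the validator with the repository."""
    if not is_valid_email(email):
        return False
    if repository.exists(email):
        return False
    repository.add(email)
    return True

class RegistrationIntegrationTests(unittest.TestCase):
    def test_valid_user_is_stored(self):
        repo = UserRepository()
        self.assertTrue(register_user("bob@example.com", repo))
        self.assertTrue(repo.exists("bob@example.com"))

    def test_duplicate_registration_is_rejected(self):
        repo = UserRepository()
        register_user("bob@example.com", repo)
        self.assertFalse(register_user("bob@example.com", repo))

if __name__ == "__main__":
    unittest.main()
```

If these combined tests fail while the individual unit tests pass, the fault is most likely in the interaction between the two components, which keeps the search space small.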

System Testing:
Unit and integration testing focus on finding faults in individual components and the interfaces between the components. Once the components have been integrated, system testing ensures that the complete system complies with the functional and non-functional requirements.

WHITE BOX TESTING:


White Box testing is a process of uncovering errors in the internal logic of the application. It uncovers errors such as syntax errors and logical errors.

BLACK BOX TESTING:


Black Box testing focuses on testing the external functionality of the software. It enables the testers to derive a set of inputs that will fully exercise all the functions of the software. It uncovers errors such as missing functions, interface errors, errors in database access, and initialization and termination errors.
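
A small sketch of the black-box viewpoint, assuming a hypothetical is_leap_year function: the test inputs are derived only from the stated requirement ("divisible by 4, except century years not divisible by 400"), without reading the implementation:

```python
def is_leap_year(year):
    """Unit whose internals the black-box tester does not look at."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black-box test cases derived purely from the specification.
assert is_leap_year(2024) is True    # ordinary leap year
assert is_leap_year(2023) is False   # not divisible by 4
assert is_leap_year(1900) is False   # century year, not divisible by 400
assert is_leap_year(2000) is True    # century year divisible by 400
```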

TEST CRITERIA AND TEST CASES:


A realistic goal for testing is to select a set of test cases that is close to ideal. An ideal test case set is one that succeeds only if there are no errors in the program. We define a criterion that we believe should be satisfied during testing. The criterion then becomes the basis for test case selection, and a set of test cases that satisfies that criterion is selected.
For example, the criterion can be that all the statements in the program should be executed at least once. For this test criterion, we have to select a set of test cases that ensures that each statement of the program is executed by at least one of the test cases.
There are two aspects of test case selection: specifying a test criterion for evaluating a set of test cases, and a procedure for generating a set of test cases that satisfies the given criterion. A test criterion is specified first, and based on that criterion the possible test cases are generated.
There are two fundamental properties of a testing criterion: reliability and validity. A criterion is reliable if all the sets of test cases that satisfy the criterion detect the same errors. A criterion is valid if, for any error in the program, there is some set satisfying the criterion that will reveal the error. As an ideal test criterion is not generally achievable, other more practical properties of test criteria have been proposed.
The following are some of the test criteria.

Coverage Criteria:
Coverage criteria are based on the number of statements, branches or paths that are exercised by the
test cases.

Statement Coverage Criteria:


Perhaps the simplest and weakest coverage criterion is statement coverage, which requires that each statement of the program be executed at least once. This coverage criterion is not very strong and can leave errors undetected.
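
A small illustration of why statement coverage is weak, using a hypothetical function: a single test case can execute every statement and still miss a fault on the path where the condition is false.

```python
def absolute_value(x):
    # Hypothetical function used only to illustrate the criterion.
    result = x
    if x < 0:
        result = -x
    return result

# This single test case executes every statement of absolute_value,
# so it achieves 100% statement coverage...
assert absolute_value(-5) == 5

# ...yet the path where the condition is false is never exercised.
# If the assignment `result = x` were accidentally written as `result = 0`,
# this test would still pass; only a test with a non-negative input,
# such as absolute_value(3) == 3, would reveal that fault.
```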

Branch Coverage Criteria:


This requires that each branch in a program be traversed at least once during testing. In other words, branch coverage requires that each decision in the program be evaluated to both true and false values at least once during testing.
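
Continuing the hypothetical absolute_value sketch from above, branch coverage forces the condition to be evaluated to both true and false, which requires at least two test cases:

```python
def absolute_value(x):
    # Same hypothetical function as in the statement-coverage sketch,
    # repeated so that this snippet runs on its own.
    result = x
    if x < 0:
        result = -x
    return result

# Branch coverage: the decision x < 0 must evaluate to true in at least
# one test case and to false in at least one other.
assert absolute_value(-5) == 5   # condition true: the negation branch is taken
assert absolute_value(3) == 3    # condition false: the negation branch is skipped
```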

Path Coverage Criteria:


This requires that each logical path in the program be exercised during testing. A logical path is the sequence of branches that is executed when the program runs from start to end.
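
A sketch of the difference between branch and path coverage, using a hypothetical function with two independent decisions: branch coverage can be achieved with two test cases, while path coverage needs all four combinations of the branches.

```python
def classify(age, member):
    # Two independent decisions => four logical paths through the function.
    discount = 0
    if age >= 65:
        discount += 10
    if member:
        discount += 5
    return discount

# Branch coverage: two cases suffice (each condition seen true once and false once).
assert classify(70, True) == 15
assert classify(30, False) == 0

# Path coverage: all four branch combinations must be exercised.
assert classify(70, True) == 15   # true,  true
assert classify(70, False) == 10  # true,  false
assert classify(30, True) == 5    # false, true
assert classify(30, False) == 0   # false, false
```

In general, the number of logical paths grows exponentially with the number of decisions, which is why full path coverage is rarely practical for real programs.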
