
Software Testing Strategy
General characteristics of a testing strategy
Testing begins at the component level and
works outward toward the integration of the
entire computer-based system.
Different testing techniques are appropriate at
different points in time.
The developer of the software conducts testing
and may be assisted by independent test groups
for large projects.
Testing and debugging are different activities.
Debugging must be accommodated in any
testing strategy.
General characteristics of a testing strategy
Software testing is part of a broader group of activities called
verification and validation.
Verification
◦ The set of activities that ensure that software correctly
implements a specific function or algorithm
Validation
◦ The set of activities that ensure that the software that has been
built is traceable to customer requirements.

Verification
Are we building the product right?

Validation
Are we building the right product?
A Software Testing Strategy

Figure: unit testing exercises the code, integration testing the design, validation testing the requirements, and system testing the system engineering of the whole computer-based system. Testing proceeds outward from the unit level to the system level.
Testing Strategy
Testing Strategy: a set of steps which define the testing process.
Steps
Unit testing
◦ Concentrates on each component/function of the software as implemented in the source code.
◦ Makes heavy use of white-box testing.
Integration testing
◦ Focuses on construction of the software.
◦ Checks for interface errors.
◦ Makes heavy use of black-box testing.
Validation testing
◦ Requirements are validated against the constructed software.
◦ Black-box testing is used here.
System testing
◦ The software and other system elements are tested as a whole.
Unit Testing
1. Unit Testing:
It is the testing of individual units/modules of a system in isolation.
Unit testing focuses on:
Module interface
◦ Ensure that information flows properly into and out of the module.
Local data structures
◦ Ensure that data stored temporarily maintains its integrity during all steps of an algorithm's execution.
Boundary conditions
◦ Ensure that the module operates properly at the boundary values established to limit or restrict processing.
Independent paths (basis paths)
◦ Paths are exercised to ensure that all statements in a module have been executed at least once.
Error handling paths
◦ All error handling paths are exercised to ensure that error messages are produced whenever necessary.
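As a small illustration of these focus areas, the sketch below unit-tests a hypothetical find_max() routine in isolation, hitting a boundary condition (empty input), a typical independent path, and an error handling path. The function name and the cases are assumptions for illustration, not part of the slides.

#include <assert.h>
#include <limits.h>

/* Hypothetical module under test: returns the largest element of a,
   or INT_MIN when the input is missing or empty (error handling path). */
static int find_max(const int *a, int n) {
    if (a == 0 || n <= 0) return INT_MIN;
    int max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max) max = a[i];
    return max;
}

int main(void) {
    int v[] = {3, -1, 7, 7, 0};

    /* Boundary conditions: empty input and a single element. */
    assert(find_max(v, 0) == INT_MIN);
    assert(find_max(v, 1) == 3);

    /* Typical case: exercises the loop (an independent path). */
    assert(find_max(v, 5) == 7);

    /* Error handling path: null pointer. */
    assert(find_max(0, 5) == INT_MIN);

    return 0;
}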
Unit Testing

Figure: test cases are applied to a single module and exercise its interfaces, local data structures, boundary conditions, independent paths, and error handling paths.
Unit Testing
Drivers and stubs:
Drivers and stubs are dummy sub-programs.
A driver accepts test case data and passes this data to the module to be tested.
Stubs replace the modules subordinate to the module to be tested.
A stub can have one or two output statements which print the results for the test case data and return control to the module undergoing test.
Unit Test Environment

Figure: a driver supplies test cases to the module under test; stubs stand in for its subordinate modules; the tests exercise the interface, local data structures, boundary conditions, independent paths, and error handling paths, and the results are collected.
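A minimal sketch of this environment, assuming a hypothetical module compute_bonus() whose subordinate lookup_rate() is replaced by a stub; the driver supplies the test case data and prints the result.

#include <stdio.h>

/* Stub: replaces the real subordinate module lookup_rate().
   It returns a canned value and reports that it was called. */
static double lookup_rate(int grade) {
    printf("stub lookup_rate called with grade=%d\n", grade);
    return 0.10;               /* canned rate for the test case */
}

/* Module under test (hypothetical): computes a bonus from a salary
   using the rate supplied by its subordinate module. */
static double compute_bonus(double salary, int grade) {
    return salary * lookup_rate(grade);
}

/* Driver: accepts test case data, passes it to the module under
   test, and prints the result. */
int main(void) {
    double salary = 50000.0;   /* test case data */
    int grade = 3;

    double bonus = compute_bonus(salary, grade);
    printf("compute_bonus(%.2f, %d) = %.2f (expected 5000.00)\n",
           salary, grade, bonus);
    return 0;
}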
Integration Testing
2. Integration Testing:
Focuses on:
1. Construction of the program structure.
2. At the same time, conducting tests to uncover errors associated with interfacing.
Can be done in two ways:
1. Non-Incremental Integration:
All modules are combined in advance.
The entire program is tested as a whole.
Correction of errors becomes difficult since one cannot easily find out which module contains an error.
Integration Testing
2. Incremental Integration:
The program is constructed and tested in small segments.
Errors are easier to isolate and correct.
Two types: 1. Top-Down Integration
           2. Bottom-Up Integration
1. Top-Down Integration:
Begin integration from the main control module.
Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
Integration Testing
Figure: an example module hierarchy with A at the top; B, C and D at the second level; E, F and G at the third level; and H at the fourth level.
Depth-first integration order: A, B, E, H, F, G, C, D
Breadth-first integration order: A, B, C, D, E, F, G, H
Integration Testing
Top-Down Approach
The main control module is used as a test driver and stubs
are substituted for all modules directly subordinate to the
main control module.
Subordinate stubs are replaced one at a time by actual modules, in either depth-first or breadth-first order.
Tests are conducted as each module is integrated.
On completion of each set of tests, another stub is replaced
with the real module.
Regression testing may be conducted to ensure that new
errors have not been introduced.
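One hedged way to sketch the stub-replacement step, assuming subordinate modules are reached through a function pointer so a stub can be swapped for the real implementation as integration proceeds; all names here are illustrative.

#include <stdio.h>

/* Subordinate module B, reached through a function pointer so the
   main control module can first be tested against a stub. */
static int stub_b(int x)  { (void)x; return 0; }   /* canned result */
static int real_b(int x)  { return x * x; }        /* actual module B */

static int (*module_b)(int) = stub_b;              /* start with the stub */

/* Main control module A (hypothetical): in top-down integration it
   also serves as the test driver for its subordinates. */
static int module_a(int x) {
    return module_b(x) + 1;
}

int main(void) {
    /* Step 1: test A with the stub in place. */
    printf("A with stub B: %d\n", module_a(4));    /* prints 1 */

    /* Step 2: replace the stub with the real module and re-run the
       tests (regression) to check that no new errors appear. */
    module_b = real_b;
    printf("A with real B: %d\n", module_a(4));    /* prints 17 */
    return 0;
}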
Integration Testing
Top-Down Approach :
◦ Advantages:
● Verifies major control or decision points early in the test
process.
● With the use of depth-first integration testing, a
complete function of the software can be demonstrated.
-- Confidence builder for developer/customer.
◦ Disadvantages:
● Since stubs replace lower level modules, no significant
data can flow upwards to the main module.
Integration Testing
Bottom Up Approach :
◦ This approach begins construction and testing with
modules at the lowest levels in the program structure.
● Low-level modules are combined into clusters.
● A driver is written to coordinate test case input and
output.
● The cluster is tested.
● Drivers are removed and clusters are combined moving
upward in the program hierarchy.
Integration Testing
Bottom Up Approach
◦ Advantages:
● Easier test case design and no need for stubs.
◦ Disadvantages:
● The program as an entity does not exist until the last module is added.
Regression Testing
Regression Testing
◦ Re-execution of some subset of tests already conducted to
ensure that the new changes do not have unintended side
effects.
The regression test suite should contain three different classes of test cases:
◦ A representative sample of tests that will exercise all
software functions
◦ Additional tests that focus on functions that are likely to
be affected by the change.
◦ Tests that focus on software components that have
changed.
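A rough sketch of how such a suite might be organised, with each test tagged by one of the three classes so that a chosen subset can be re-executed after a change; the tags, test names, and placeholder tests are assumptions.

#include <stdio.h>

/* The three classes of regression test cases named above. */
enum test_class { REPRESENTATIVE, LIKELY_AFFECTED, CHANGED_COMPONENT };

struct test_case {
    const char *name;
    enum test_class cls;
    int (*run)(void);          /* returns 1 on pass, 0 on fail */
};

static int test_login(void)   { return 1; }   /* placeholder tests */
static int test_report(void)  { return 1; }
static int test_payroll(void) { return 1; }

static struct test_case suite[] = {
    { "login smoke test",       REPRESENTATIVE,    test_login   },
    { "report generation",      LIKELY_AFFECTED,   test_report  },
    { "payroll (just changed)", CHANGED_COMPONENT, test_payroll },
};

/* Re-execute only the tests in the selected class. */
static void run_class(enum test_class cls) {
    for (unsigned i = 0; i < sizeof suite / sizeof suite[0]; i++)
        if (suite[i].cls == cls)
            printf("%-25s %s\n", suite[i].name,
                   suite[i].run() ? "PASS" : "FAIL");
}

int main(void) {
    run_class(CHANGED_COMPONENT);   /* focus on what changed first */
    run_class(LIKELY_AFFECTED);
    return 0;
}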
Validation Testing
It provides final assurance that software meets all functional,
behavioral, and performance requirements.
-- Exclusive use of Black-box testing techniques.
After each validation test case, either the software conforms to the specification or a deviation from the specification is detected and a deficiency list is created that must be worked through.
How is validation testing performed?
Conduct a series of acceptance tests to validate all customer requirements, i.e., use:
1. Alpha testing
2. Beta testing
Alpha testing
◦ Conducted at the developer’s site by end users
◦ The customer uses the software in the presence of the developer, who records errors and usage problems.
◦ Testing is conducted in a controlled environment
Beta testing
◦ Conducted at end-user sites
◦ Developer is generally not present
◦ It serves as a live application of the software in an environment
that cannot be controlled by the developer
◦ The end-user records all problems that are encountered and
reports these to the developers at regular intervals
After beta testing is complete, software engineers make
software modifications and prepare for release of the
software product to the entire customer base
System Testing
A series of different tests whose primary purpose is to fully exercise the computer-based system.
Verifies that system elements have been properly integrated and perform their allocated functions.
Types:
1. Recovery Testing:
Forces the system to fail in various ways and verifies that the system recovers successfully from those failures, e.g., by restoring from a backup copy.
i.e., data recovery and checkpointing mechanisms are evaluated for correctness.
System Testing
2. Security Testing
▪ Verifies the protection mechanisms built into a system.
▪ The tester plays the role of an individual who desires to penetrate the system.
▪ Given enough time and resources, any individual can ultimately penetrate the system.
▪ The role of the designer is to make the cost of penetration greater than the value of the information that would be obtained.
3. Stress Testing
▪ Confronts the application with abnormal situations.
▪ Tests the application under heavy loads to determine at what point the system degrades.
System Testing
▪ E.g.:
1. Input a very large numerical value.
2. Fire complex queries at a database.
3. Use test cases that exercise maximum memory or other resources.
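As a rough sketch of the first example, the loop below feeds ever-larger numeric strings to the standard strtol() routine and reports where the values stop being representable; in practice the input would go to the application under test.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

/* Stress test: confront a numeric-input routine with abnormally
   large values and observe where it starts to reject them. */
int main(void) {
    char buf[64];

    for (int digits = 1; digits <= 40; digits += 3) {
        memset(buf, '9', (size_t)digits);      /* e.g. "999...9" */
        buf[digits] = '\0';

        errno = 0;
        long v = strtol(buf, NULL, 10);
        printf("%2d digits: %s (value %ld)\n", digits,
               errno == ERANGE ? "OVERFLOW" : "ok", v);
    }
    return 0;
}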

4. Performance Testing
▪ Tests the run-time performance of the software.
▪ E.g.:
1. How long does it take to receive a response to an inquiry?
2. Send a transmission and measure the response under peak and normal conditions.
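A minimal sketch of the first measurement, assuming a hypothetical handle_inquiry() operation and timing it with the standard clock() function.

#include <stdio.h>
#include <time.h>

/* Hypothetical operation whose response time we want to measure. */
static long handle_inquiry(void) {
    long sum = 0;
    for (long i = 0; i < 10 * 1000 * 1000; i++)
        sum += i;                 /* stand-in for real work */
    return sum;
}

int main(void) {
    clock_t start = clock();
    handle_inquiry();
    clock_t end = clock();

    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("response time: %.3f s\n", seconds);
    return 0;
}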
Debugging
Debugging occurs as a consequence of successful testing: when a test case uncovers an error, debugging is the process that removes the error.
Debugging
There are three main debugging strategies
◦ Brute force
◦ Backtracking
◦ Cause elimination
1. Brute force
The most commonly used and least efficient method.
Used when all else fails.
Involves the use of memory dumps, run-time traces, and inserted print statements.
A mass of information is produced, which may eventually lead to success.
Often leads to wasted effort and time.
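For instance, a run-time trace is often produced with ad-hoc print statements such as the hypothetical TRACE macro below; the surrounding loop is only a placeholder.

#include <stdio.h>

/* Ad-hoc trace macro typical of brute-force debugging: dump the
   file, line, and a value of interest to stderr. */
#define TRACE(fmt, ...) \
    fprintf(stderr, "%s:%d: " fmt "\n", __FILE__, __LINE__, __VA_ARGS__)

int main(void) {
    int total = 0;
    for (int i = 1; i <= 5; i++) {
        total += i;
        TRACE("i=%d total=%d", i, total);   /* run-time trace output */
    }
    printf("total=%d\n", total);
    return 0;
}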
Debugging
2. Backtracking
Can be used successfully in small programs
The method starts at the location where a symptom has been
uncovered
The source code is then traced backward (manually) until the
location of the cause is found
In large programs, the number of potential backward paths
may become unmanageably large
Debugging
3. Cause elimination
Use deduction to list all possible causes
Devise tests to eliminate them one by one

E.g.:

#include <stdbool.h>

/* effects: returns true if s reads the same
   reversed as it does forward */
bool palindrome(const char *s);
Debugging
Test cases:
s=“able was I ere I saw elba” returns false
s=“deed” returns true
Hypothesis 1:
maybe it fails on odd-length strings?
simple refutation case: s=“r” returns true
Hypothesis 2:
maybe it fails on strings with spaces in them?
simple refutation case: s=“ ” returns true
Hypothesis 3:
maybe it fails on odd-length strings longer than 1?
test case: s=“ere” returns false, which supports the hypothesis
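The hypothesis tests above can be scripted as a small driver. The sketch below pairs that driver with a correct reference implementation of palindrome (an assumption, since the faulty version is not shown in the slides), so every palindrome prints true here; the buggy version being debugged would print false for the odd-length cases.

#include <stdio.h>
#include <stdbool.h>
#include <string.h>

/* Correct reference implementation used only so the driver runs;
   the faulty version under debug would be linked in instead. */
static bool palindrome(const char *s) {
    size_t i = 0, j = strlen(s);
    while (i + 1 < j) {
        if (s[i] != s[j - 1]) return false;
        i++; j--;
    }
    return true;
}

/* Cause elimination: each hypothesis contributes test inputs, and
   the observed results eliminate (or support) the hypothesis. */
int main(void) {
    const char *cases[] = {
        "able was I ere I saw elba",  /* original failing symptom  */
        "deed",                       /* even length               */
        "r",                          /* hypothesis 1: odd length  */
        " ",                          /* hypothesis 2: spaces      */
        "ere",                        /* hypothesis 3: odd, > 1    */
    };

    for (unsigned k = 0; k < sizeof cases / sizeof cases[0]; k++)
        printf("palindrome(\"%s\") = %s\n", cases[k],
               palindrome(cases[k]) ? "true" : "false");
    return 0;
}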
