Software Testing
From Lord's Kitchen
Content
- Essence
- Terminology
- Classification
- Unit, System, ...
- Black-Box, White-Box
- Debugging
- IEEE Standards
Definition
- Glen Myers
- Testing is the process of executing a
program with the intent of finding errors
Objective explained
- Paul Jorgensen
- Testing is obviously concerned with errors,
faults, failures and incidents. A test is the
act of exercising software with test cases
with an objective of
- Finding failures
- Demonstrating correct execution
A Testing Life Cycle
[Figure: Requirement Specs -> Design -> Coding -> Testing. Errors made while
writing the specs, the design and the code become faults; during testing a
fault surfaces as an incident, which is followed by fault classification,
fault isolation, fault resolution and finally a fix.]
Terminology
- Error
- Represents mistakes made by people
- Fault
- Is the result of an error. May be categorized as
- Fault of Commission - we enter something into the
representation that is incorrect
- Fault of Omission - the designer makes an error of
omission; the resulting fault is that something
is missing that should have been present in the
representation
Cont.
- Failure
- Occurs when a fault executes.
- Incident
- Behavior of a fault. An incident is the
symptom(s) associated with a failure that
alerts the user to the occurrence of a failure
- Test case
- Associated with program behavior. It carries a
set of inputs and a list of expected outputs
Cont.
- Verification
- Process of determining whether the output of
one phase of development conforms to that of its
previous phase.
- Validation
- Process of determining whether a fully
developed system conforms to its SRS
document
Verification versus Validation
- Verification is concerned with phase
containment of errors
- Validation is concerned with the final
product being error free
Relationship - program behaviors
[Figure: Venn diagram of program behaviors. One circle is the specified
(expected) behavior, the other is the programmed (observed) behavior:
specified-but-not-programmed behavior is a fault of omission,
programmed-but-not-specified behavior is a fault of commission, and the
overlap is the correct portion.]
Classification of Test
- There are two levels of classification
- One distinguishes by granularity level
- Unit level
- System level
- Integration level
- The other classification (mostly for unit level) is
based on methodology
- Black box (Functional) Testing
- White box (Structural) Testing
Relationship - Testing w.r.t. Behavior
[Figure: Venn diagram of specified (expected) behavior, programmed (observed)
behavior and test cases (verified behavior); the regions formed are numbered
1 to 8 and are discussed below.]
Cont.
- 2, 5
- Specified behaviors that are not tested
- 1, 4
- Specified behaviors that are tested
- 3, 7
- Test cases corresponding to unspecified
behavior
Cont.
- 2, 6
- Programmed behaviors that are not tested
- 1, 3
- Programmed behaviors that are tested
- 4, 7
- Test cases corresponding to unprogrammed
behaviors
Inferences
- If there are specified behaviors for which
there are no test cases, the testing is
incomplete
- If there are test cases that correspond to
unspecified behaviors
- Either such test cases are unwarranted, or
- The specification is deficient (which also implies that
testers should participate in specification and
design reviews)
Test methodologies
- Functional (Black box) inspects
specified behavior
- Structural (White box) inspects
programmed behavior
Functional Test cases
[Figure: functional test cases are drawn from the specified behavior; they may
leave parts of the programmed behavior unexercised.]
Structural Test cases
[Figure: structural test cases are drawn from the programmed behavior; they may
leave parts of the specified behavior unexercised.]
When to use what
- Few guidelines are available
- A logical approach could be
- Prepare functional test cases as part of the
specification. However, they can be used
only after the unit and/or system is available.
- Preparation of structural test cases can
be part of the implementation/coding phase.
- Unit, Integration and System testing are
performed in that order.
Unit testing - essence
- Applicable to modular design
- Unit testing inspects individual modules
- Locates errors in a smaller region
- In an integrated system, it may not be
easy to determine which module has
caused a fault
- Reduces debugging effort
Test cases and Test suites
- A test case is a triplet [I, S, O] where
- I is the input data
- S is the state of the system at which the data will be
input
- O is the expected output
- A test suite is the set of all test cases
- Test cases are not randomly selected.
Instead they too need to be designed.
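As a minimal sketch (not from the original slides; the function uut and its values are assumed for illustration), an [I, S, O] test case and a tiny test suite could be written in C as:

/* Sketch: an [I, S, O] test case and a tiny test suite.
   uut() is a hypothetical unit under test; the state field is unused here. */
#include <stdio.h>

struct test_case {
    int input;      /* I: input data              */
    int state;      /* S: system state at input   */
    int expected;   /* O: expected output         */
};

static int uut(int x) { return x * x; }   /* stand-in unit under test */

int main(void) {
    struct test_case suite[] = { {2, 0, 4}, {3, 0, 9} };   /* the test suite */
    for (int i = 0; i < 2; i++) {
        int got = uut(suite[i].input);
        printf("test %d: %s\n", i + 1, got == suite[i].expected ? "pass" : "fail");
    }
    return 0;
}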
Need for designing test cases
- Almost every nontrivial system has an
extremely large input data domain,
thereby making exhaustive testing
impractical
- If randomly selected, a test case may
lose significance since it may expose
an error already detected by some
other test case
Design of test cases
- The number of test cases does not determine
effectiveness
- To detect the error in the following code
if(x>y) max = x; else max = x;
- {(x=3, y=2), (x=2, y=3)} will suffice
- {(x=3, y=2), (x=4, y=3), (x=5, y=1)} will
falter
- Each test case should detect different errors
Black box testing
- Equivalence class partitioning
- Boundary value analysis
- Comparison testing
- Orthogonal array testing
- Decision Table based testing
- Cause-Effect Graphing
Equivalence Class Partitioning
- Input values to a program are
partitioned into equivalence classes.
- Partitioning is done such that:
- the program behaves in similar ways for
every input value belonging to an
equivalence class.
Why define equivalence classes?
- Test the code with just one
representative value from each
equivalence class:
- as good as testing using any other values
from the equivalence classes.
Equivalence Class Partitioning
- How do you determine the equivalence
classes?
- examine the input data.
- a few general guidelines for determining the
equivalence classes can be given
Equivalence Class Partitioning
- If the input data to the program is
specified by a range of values:
- e.g. numbers between 1 and 5000.
- one valid and two invalid equivalence
classes are defined.


Equivalence Class Partitioning
- If the input is an enumerated set of values:
- e.g. {a,b,c}
- one equivalence class for valid input values and
- another equivalence class for invalid input
values should be defined.
Example
- A program reads an input value in the
range of 1 and 5000:
- computes the square root of the input
number
Example (cont.)
- There are three equivalence classes:
- the set of negative integers,
- the set of integers in the range of 1 to 5000,
- integers larger than 5000.


Example (cont.)
- The test suite must include:
- representatives from each of the three
equivalence classes:
- a possible test suite can be:
{-5, 500, 6000}.
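A minimal sketch of this idea (assumed example, not from the slides): one representative from each equivalence class is fed to a hypothetical square-root routine that accepts only 1 to 5000.

/* Sketch: equivalence-class testing of a hypothetical my_sqrt(n),
   defined only for 1 <= n <= 5000 (returns -1 for the invalid classes). */
#include <stdio.h>
#include <math.h>

static int my_sqrt(int n) {
    if (n < 1 || n > 5000) return -1;     /* the two invalid classes */
    return (int)sqrt((double)n);          /* the valid class         */
}

int main(void) {
    int reps[] = { -5, 500, 6000 };       /* one representative per class */
    for (int i = 0; i < 3; i++)
        printf("my_sqrt(%d) = %d\n", reps[i], my_sqrt(reps[i]));
    return 0;
}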


Boundary Value Analysis
- Some typical programming errors occur:
- at boundaries of equivalence classes
- might be purely due to psychological
factors.
- Programmers often fail to see:
- special processing required at the
boundaries of equivalence classes.
Boundary Value Analysis
- Programmers may improperly use <
instead of <=
- Boundary value analysis:
- select test cases at the boundaries of
different equivalence classes.
Example
- For a function that computes the square
root of an integer in the range of 1 and
5000:
- test cases must include the values:
{0, 1, 5000, 5001}.
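A small illustration (assumed, not from the slides) of the kind of defect boundary values expose - a range check written with < instead of <=:

/* Sketch: the boundary values {0, 1, 5000, 5001} catch an off-by-one
   range check; the two versions below disagree only at n == 5000. */
#include <stdio.h>

static int in_range_buggy(int n)   { return n >= 1 && n <  5000; }  /* wrong */
static int in_range_correct(int n) { return n >= 1 && n <= 5000; }  /* right */

int main(void) {
    int boundary[] = { 0, 1, 5000, 5001 };
    for (int i = 0; i < 4; i++)
        printf("n=%4d  buggy=%d  correct=%d\n", boundary[i],
               in_range_buggy(boundary[i]), in_range_correct(boundary[i]));
    return 0;
}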


Cause and Effect Graphs
- Testing would be a lot easier:
- if we could automatically generate test
cases from requirements.
- Work done at IBM:
- Can requirements specifications be
systematically used to design functional
test cases?
Cause and Effect Graphs
- Examine the requirements:
- restate them as logical relations between
inputs and outputs.
- The result is a Boolean graph representing
the relationships
- called a cause-effect graph.
Cause and Effect Graphs
- Convert the graph to a decision table:
- each column of the decision table
corresponds to a test case for functional
testing.
Steps to create a cause-effect graph
- Study the functional requirements.
- Mark and number all causes and
effects.
- Numbered causes and effects:
- become nodes of the graph.
Steps to create a cause-effect graph
- Draw causes on the LHS
- Draw effects on the RHS
- Draw logical relationship between
causes and effects
- as edges in the graph.
- Extra nodes can be added
- to simplify the graph
Drawing Cause-Effect Graphs
[Figures: the basic notations -
If A then B: an identity edge from cause A to effect B.
If (A and B) then C: A and B joined by an AND into C.
If (A or B) then C: A and B joined by an OR into C.
If (not (A and B)) then C: the AND negated (~) into C.
If (not (A or B)) then C: the OR negated (~) into C.
If (not A) then B: a negated (~) edge from A to B.]
Example
- Refer to "On the Experience of Using
Cause-Effect Graphs for Software
Specification and Test Generation" by
Amit Paradkar, ACM Publications
Partial Specification
- "... System Test and Initialization Mode:
Operational requirements: Operating
requirements for this mode are as follows:
- await the start of the boiler on standby signal
from the instrumentation system, then
- test the boiler water content device for normal
behavior and calibration constant consistency,
then
- check whether the steaming rate measurement
device is providing a valid output and indicating
zero steaming rate (taking into account its error
performance), then
Cont.
- if the boiler water content exceeds 60,000 lb.,
send the boiler content high signal to the
instrumentation system and wait until the water
content has been adjusted to 60,000 lb. by the
instrumentation system (using a dump valve), else
- if the boiler water content is below 40,000 lb.,
start any feedpump to bring it to 40,000 lb., then
- turn on all the feedpumps simultaneously for at
least 30 s and no more than 40 s and check that
the boiler content rises appropriately, that the
feedpump monitors register correctly, and that the
feedpump running indications register correctly,
then
Cont.
- turn feedpumps off and on if needed to
determine which feedpumps, feedpump
monitors, or feedpump running indications
are faulty.
Exit Condition:
- if the water content measuring device is
not serviceable, go to shutdown mode,else
- if the steaming rate measurement device is
not serviceable, go to shutdown mode,
else
- if less than three feedpump/feedpump
monitor combinations are working
correctly, go to shutdown mode, else
...
Causes:
- 221 externally initiated (either Operator or
Instrumentation system)
- 220 internally initiated
- 202 operator initiated
- 203 instrumentation system initiated
- 201 bad startup
- 200 operational failure
- 137 confirmed keystroke entry
- 138 confirmed "shutnow" message
Cont.
- 136 multiple pumps failure (more than
one)
- 135 water level meter failure during
startup
- 134 steam rate meter failure during
startup
- 133 communication link failure
- 132 instrumentation system failure
- 131 180 and 181
Cont.
- 130 water level out of range
- 180 water level meter failure during
operation
- 181 steam rate meter failure during
operation
- Note that some of the causes listed above are
used as dummies, and exist only for classification
purposes. These causes and their relationships
leading to the boiler shutdown are illustrated in
the Cause-Effect Graph in Figure 1.
Cause-Effect Graph
[Figure 1: the cause-effect graph relating the causes above to the boiler
shutdown effect.]
Decision Table
- Two-dimensional mapping of conditions
against actions to be performed
- Conditions evaluate to Boolean values
- Actions correspond to expected activities
- They can be derived from a Cause-Effect
graph too
- Map cause as condition
- Map effect as action
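A small sketch of the mapping (assumed toy example, not the boiler system): enumerate the cause combinations of a tiny cause-effect graph and read each printed line off as a candidate decision-table column, i.e. a test case.

/* Sketch: decision table for a toy cause-effect graph with causes c1, c2
   and effects e1 = (c1 AND c2), e2 = (c1 OR c2). */
#include <stdio.h>

int main(void) {
    printf("c1 c2 | e1 e2\n");
    for (int c1 = 0; c1 <= 1; c1++)
        for (int c2 = 0; c2 <= 1; c2++)
            printf("%2d %2d | %2d %2d\n", c1, c2, c1 && c2, c1 || c2);
    return 0;
}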
Cause-effect graph - Decision table
[Table: the decision table derived from the cause-effect graph; its rows are
Cause 1-5 and Effect 1-3, and its columns Test 1-Test 5 each record the truth
values of the causes and the effects expected for that test case.]
Cause-effect graph - Example
- Put a row in the decision table for each
cause or effect:
- in the example, there are five rows for
causes and three for effects.
Cause-effect graph - Example
- The columns of the decision table
correspond to test cases.
- Define the columns by examining each
effect:
- list each combination of causes that can
lead to that effect.
Cause-effect graph - Example
- We can determine the number of
columns of the decision table
- by examining the lines flowing into the
effect nodes of the graph.
Cause-effect graph - Example
- Theoretically we could have generated
2^5 = 32 test cases.
- Using the cause-effect graphing technique
reduces that number to 5.
Cause-effect graph
- Not practical for systems which:
- include timing aspects
- use feedback from processes for some
other processes.
White-Box Testing
- Statement coverage
- Branch coverage
- Path coverage
- Condition coverage
- Mutation testing
- Data flow-based testing
Statement Coverage
- Statement coverage methodology:
- design test cases so that every statement
in a program is executed at least once.
- The principal idea:
- unless a statement is executed, we have
no way of knowing if an error exists in
that statement
Statement coverage criterion
- Observing that a statement behaves
properly for one input value:
- no guarantee that it will behave correctly
for all input values.
Example
int f1(int x, int y){
1.   while (x != y){
2.     if (x > y)
3.       x = x - y;
4.     else y = y - x;
5.   }
6.   return x; }
Euclid's GCD computation algorithm
- By choosing the test set
{(x=3,y=3), (x=4,y=3), (x=3,y=4)}
- all statements are executed at least once.
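A small driver (a sketch, not part of the slides) that pushes this test set through f1: (3,3) skips the loop, (4,3) takes the then-branch, and (3,4) takes the else-branch, so statements 1-6 all execute.

/* Sketch: executing the statement-coverage test set against f1 above. */
#include <stdio.h>

int f1(int x, int y) {            /* the function from the previous slide */
    while (x != y) {
        if (x > y) x = x - y;
        else y = y - x;
    }
    return x;
}

int main(void) {
    int tests[3][2] = { {3, 3}, {4, 3}, {3, 4} };
    for (int i = 0; i < 3; i++)
        printf("f1(%d,%d) = %d\n", tests[i][0], tests[i][1],
               f1(tests[i][0], tests[i][1]));
    return 0;
}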
Branch Coverage
- Test cases are designed such that:
- each branch condition is given true
and false values in turn.
- Branch testing guarantees statement
coverage:
- it is a stronger form of testing than
statement coverage-based testing.
Example
- Test cases for branch coverage can be:
{(x=3,y=3), (x=4,y=3), (x=3,y=4)}
Condition Coverage
- Test cases are designed such that:
- each component of a composite conditional
expression is given both true and false
values.
- Example
- Consider the conditional expression
((c1 .and. c2) .or. c3):
- Each of c1, c2, and c3 is exercised at
least once, i.e. given both true and false values.
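A sketch (not from the slides) that enumerates the component truth values for ((c1 .and. c2) .or. c3); in the exhaustive sense used on the later slides this needs 2^3 = 8 combinations.

/* Sketch: all 2^3 truth-value combinations of the components of
   ((c1 && c2) || c3) and the resulting value of the whole expression. */
#include <stdio.h>

int main(void) {
    for (int c1 = 0; c1 <= 1; c1++)
        for (int c2 = 0; c2 <= 1; c2++)
            for (int c3 = 0; c3 <= 1; c3++)
                printf("c1=%d c2=%d c3=%d -> %d\n",
                       c1, c2, c3, (c1 && c2) || c3);
    return 0;
}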
Branch testing
- Branch testing is the simplest condition
testing strategy
- compound conditions appearing in
different branch statements are given
true and false values.
Branch testing
- Condition testing
- stronger testing than branch testing:
- Branch testing
- stronger than statement coverage testing.
Condition coverage
- Consider a Boolean expression having n
components:
- for condition coverage we require 2^n test
cases.
- practical only if n (the number of
component conditions) is small.
Path Coverage
- Design test cases such that:
- all linearly independent paths in the
program are executed at least once.
- Defined in terms of the
- control flow graph (CFG) of a program.
Control flow graph (CFG)
- A control flow graph (CFG) describes:
- the sequence in which different instructions
of a program get executed.
- the way control flows through the
program.
How to draw a Control flow graph?
- Number all the statements of a
program.
- Numbered statements:
- represent nodes of the control flow graph.
- An edge from one node to another
node exists:
- if execution of the statement representing
the first node can result in transfer of
control to the other node.
Example
int f1(int x, int y){
1.   while (x != y){
2.     if (x > y)
3.       x = x - y;
4.     else y = y - x;
5.   }
6.   return x; }
Example Control Flow Graph
[Figure: CFG of f1 - one node per numbered statement (1-6); edges 1->2, 2->3,
2->4, 3->5, 4->5, 5->1 and 1->6.]
Path
- A path through a program:
- A node and edge sequence from the
starting node to a terminal node of the
control flow graph.
- There may be several terminal nodes for a
program.
Independent path
- Any path through the program:
- introducing at least one new node that is
not included in any other independent
path.
- It may be straightforward to identify the
linearly independent paths of simple
programs. However, for complicated
programs it is not so easy to determine
the number of independent paths.
McCabe's cyclomatic metric
- An upper bound:
- for the number of linearly independent
paths of a program
- Provides a practical way of determining:
- the maximum number of linearly
independent paths in a program.
McCabe's cyclomatic metric
- Given a control flow graph G, the
cyclomatic complexity V(G) is:
- V(G) = E - N + 2, where
- N is the number of nodes in G
- E is the number of edges in G
Example
- Cyclomatic complexity =
7 - 6 + 2 = 3.
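A quick sketch (the edge list is my reconstruction of the CFG above) that recomputes V(G) = E - N + 2:

/* Sketch: V(G) for the CFG of f1, using the reconstructed edge list. */
#include <stdio.h>

int main(void) {
    int edges[][2] = { {1,2}, {2,3}, {2,4}, {3,5}, {4,5}, {5,1}, {1,6} };
    int E = (int)(sizeof(edges) / sizeof(edges[0]));   /* 7 edges    */
    int N = 6;                                         /* nodes 1..6 */
    printf("V(G) = %d - %d + 2 = %d\n", E, N, E - N + 2);   /* prints 3 */
    return 0;
}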
Cyclomatic complexity
- Another way of computing the cyclomatic
complexity:
- determine the number of bounded areas in the
graph
- any region enclosed by a node-and-edge
sequence.
- V(G) = Total number of bounded areas + 1
Example
- From a visual examination of the CFG:
- the number of bounded areas is 2.
- cyclomatic complexity = 2 + 1 = 3.
Cyclomatic complexity
- McCabe's metric provides:
- a quantitative measure for estimating
testing difficulty
- amenable to automation
- Intuitively,
- the number of bounded areas increases with
the number of decision nodes and loops.
Cyclomatic complexity
- The cyclomatic complexity of a program
provides:
- a lower bound on the number of test cases
to be designed
- to guarantee coverage of all linearly
independent paths.
Cyclomatic complexity
- Defines the number of independent
paths in a program.
- Provides a lower bound:
- for the number of test cases for path
coverage.
- only gives an indication of the minimum
number of test cases required.
Path testing
- The tester proposes an initial set of test
data using his experience and
judgement.
Path testing
- A testing tool such as a dynamic program
analyzer may then be used:
- to indicate which parts of the program
have been tested
- the output of the dynamic analysis is used to
guide the tester in selecting additional test
cases.
Derivation of Test Cases
- Draw the control flow graph.
- Determine V(G).
- Determine the set of linearly
independent paths.
- Prepare test cases:
- to force execution along each path.
Example Control Flow Graph
[Figure: the same CFG of f1 as before - nodes 1-6, edges 1->2, 2->3, 2->4,
3->5, 4->5, 5->1, 1->6.]
Derivation of Test Cases
- Number of independent paths: 3
- 1, 6: test case (x=1, y=1)
- 1, 2, 3, 5, 1, 6: test case (x=2, y=1)
- 1, 2, 4, 5, 1, 6: test case (x=1, y=2)
An interesting application of
cyclomatic complexity
- A relationship exists between:
- McCabe's metric
- the number of errors existing in the code,
- the time required to find and correct the
errors.
Cyclomatic complexity
- Cyclomatic complexity of a program:
- also indicates the psychological complexity
of a program.
- difficulty level of understanding the
program.
Cyclomatic complexity
- From a maintenance perspective,
- limit the cyclomatic complexity
- of modules to some reasonable value.
- Good software development organizations:
- restrict the cyclomatic complexity of functions to a
maximum of ten or so.
Data Flow-Based Testing
- Selects test paths of a program:
- according to the locations of definitions
and uses of different variables in a
program.
Data Flow-Based Testing
- For a statement numbered S,
- DEF(S) = {X | statement S contains a
definition of X}
- USES(S) = {X | statement S contains a use
of X}
- Example: 1: a=b; DEF(1)={a},
USES(1)={b}.
- Example: 2: a=a-b; DEF(2)={a},
USES(2)={a,b}.
Data Flow-Based Testing
- A variable X is said to be live at
statement S1, if
- X is defined at a statement S, and
- there exists a path from S to S1 not
containing any definition of X.
DU Chain Example
1 X(){
2   a = 5;        /* Defines variable a */
3   while (C1) {
4     if (C2)
5       b = a * a;  /* Uses variable a */
6     a = a - 1;    /* Defines variable a */
7   }
8   print(a); }     /* Uses variable a */
Definition-use chain (DU chain)
- [X, S, S1], where
- S and S1 are statement numbers,
- X is in DEF(S),
- X is in USES(S1), and
- the definition of X in statement S is live
at statement S1.
Data Flow-Based Testing
- One simple data flow testing strategy:
- require that every DU chain in the program be covered at
least once.
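For the DU Chain Example above, the chains for variable a would be roughly the following (my enumeration, counting only the uses the slide marks at lines 5 and 8; not from the original slides):

/* Sketch: DU chains for variable a in the DU Chain Example above.
     [a, 2, 5]  the definition at line 2 reaches the use at line 5
     [a, 2, 8]  ... and the use at line 8 (e.g. when the loop body never runs)
     [a, 6, 5]  the redefinition at line 6 reaches line 5 on a later iteration
     [a, 6, 8]  ... and the final use at line 8
   The "every DU chain at least once" strategy picks test paths so that each
   of these chains is exercised. */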
Data Flow-Based Testing
- Data flow testing strategies:
- useful for selecting test paths of a program
containing nested if and loop statements
Data Flow-Based Testing
1 X(){
2   B1;              /* Defines variable a */
3   while (C1) {
4     if (C2)
5       if (C4) B4;  /* Uses variable a */
6       else B5;
7     else if (C3) B2;
8     else B3; }
9   B6; }
Data Flow-Based Testing
- [a, 1, 5]: a DU chain.
- Assume:
- DEF(X) = {B1, B2, B3, B4, B5}
- USED(X) = {B2, B3, B4, B5, B6}
- There are 25 DU chains.
- However only 5 paths are needed to
cover these chains.
Mutation Testing
- The software is first tested:
- using an initial testing method based on the
white-box strategies we already discussed.
- After the initial testing is complete,
- mutation testing is taken up.
- The idea behind mutation testing:
- make a few arbitrary small changes to the
program, one at a time.
Mutation Testing
- Each time the program is changed,
- it is called a mutated program
- the change is called a mutant.
Mutation Testing
- A mutated program is:
- tested against the full test suite of the
program.
- If there exists at least one test case in
the test suite for which:
- the mutant gives an incorrect result, then the
mutant is said to be dead.
Mutation Testing
- If a mutant remains alive:
- even after all test cases have been
exhausted, the test suite is enhanced to kill
the mutant.
- The process of generation and killing of
mutants:
- can be automated by predefining a set of
primitive changes that can be applied to
the program.
Mutation Testing
- The primitive changes can be:
- altering an arithmetic operator,
- changing the value of a constant,
- changing a data type, etc.
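As a small sketch of one such primitive change (assumed example, not from the slides) - altering a single arithmetic operator - together with a test case that kills the mutant:

/* Sketch: an original unit and a mutant created by altering one operator. */
int scaled_sum(int a, int b)        { return (a + b) * 2; }   /* original */
int scaled_sum_mutant(int a, int b) { return (a - b) * 2; }   /* mutant   */

/* The test case (a=1, b=1, expected 4) kills the mutant: the original returns
   4 while the mutant returns 0.  A test case such as (a=1, b=0) leaves the
   mutant alive, since both versions return 2; the test suite would then have
   to be enhanced. */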
Mutation Testing
- A major disadvantage of mutation
testing:
- computationally very expensive,
- a large number of possible mutants can be
generated.
Debugging
- Once errors are identified:
- it is necessary to identify the precise location
of the errors and to fix them.
- Each debugging approach has its own
advantages and disadvantages:
- each is useful in appropriate
circumstances.
Brute-force method
- This is the most common method of
debugging:
- but the least efficient method.
- the program is loaded with print statements
- to print the intermediate values
- with the hope that some of the printed values will help
identify the error.
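A minimal sketch of the approach (assumed example): print statements dropped into a loop to expose intermediate values.

/* Sketch: brute-force debugging of a summing loop by printing the
   intermediate values of i and s on every iteration. */
#include <stdio.h>

int main(void) {
    int i = 1, s = 0;
    while (i <= 10) {
        s = s + i;
        printf("debug: i=%d s=%d\n", i, s);   /* inserted print statement */
        i++;
    }
    printf("%d\n", s);
    return 0;
}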
Symbolic Debugger
- The brute force approach becomes more
systematic:
- with the use of a symbolic debugger.
- Symbolic debuggers get their name for
historical reasons:
- early debuggers let you see only the values
from a program dump;
- you had to determine which variable each value corresponds to.
Symbolic Debugger
- Using a symbolic debugger:
- values of different variables can be easily
checked and modified
- single stepping to execute one instruction
at a time
- break points and watch points can be set
to test the values of variables.
Backtracking
- This is a fairly common approach.
- Beginning at the statement where an
error symptom has been observed:
- source code is traced backwards until the
error is discovered.
Example
int main(){
  int i, s;
  i = 1;
  while (i <= 10) {
    s = s + i;
    i++; }
  printf("%d", s);
}
Backtracking
- Unfortunately, as the number of source
lines to be traced back increases,
- the number of potential backward paths
increases
- becomes unmanageably large for complex
programs.
Cause-elimination method
- Determine a list of causes:
- which could possibly have contributed to
the error symptom.
- tests are conducted to eliminate each.
- A related technique of identifying error
by examining error symptoms:
- software fault tree analysis.
Program Slicing
- This technique is similar to
backtracking.
- However, the search space is reduced
by defining slices.
- A slice is defined for a particular
variable at a particular statement:
- set of source lines preceding this statement
which can influence the value of the
variable.
Example
int main(){
  int i, s;
  i = 1; s = 1;
  while (i <= 10) {
    s = s + i;
    i++; }
  printf("%d", s);
  printf("%d", i);
}
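For the example as reconstructed above, the slice for variable s at the first printf would be (my reading, not from the slides):

/* Sketch: slice of the example program for variable s at printf("%d", s) -
   the preceding lines that can influence the value of s:
       int i, s;
       i = 1; s = 1;
       while (i <= 10) {
           s = s + i;
           i++; }
   The second printf (of i) lies outside this slice; restricting attention to
   the slice is what shrinks the search space compared to plain backtracking. */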
Debugging Guidelines
- Debugging usually requires a thorough
understanding of the program design.
- Debugging may sometimes require full
redesign of the system.
- A common mistake novice programmers
often make:
- not fixing the error but the error
symptoms.
Debugging Guidelines
- Be aware of the possibility that:
- an error correction may introduce new
errors.
- After every round of error-fixing:
- regression testing must be carried out.
Program Analysis Tools
- An automated tool:
- takes program source code as input
- produces reports regarding several
important characteristics of the program,
- such as size, complexity, adequacy of
commenting, adherence to programming
standards, etc.
Program Analysis Tools
- Some program analysis tools:
- produce reports regarding the adequacy of
the test cases.
- There are essentially two categories of
program analysis tools:
- Static analysis tools
- Dynamic analysis tools
Static Analysis Tools
- Static analysis tools:
- assess properties of a program without
executing it.
- Analyze the source code
- provide analytical conclusions.
Static Analysis Tools
- Whether coding standards have been
adhered to?
- Whether commenting is adequate?
- Programming errors such as:
- Uninitialized variables
- mismatch between actual and formal
parameters.
- variables declared but never used, etc.
Static Analysis Tools
- Code walk-throughs and inspections can
also be considered static analysis
methods:
- however, the term static program analysis
is generally used for automated analysis
tools.
Dynamic Analysis Tools
- Dynamic program analysis tools require
the program to be executed:
- its behaviour recorded.
- Produce reports such as adequacy of test
cases.
Integration testing
- After different modules of a system
have been coded and unit tested:
- modules are integrated in steps according
to an integration plan
- partially integrated system is tested at
each integration step.
System Testing
- System testing involves:
- validating a fully developed system against
its requirements.
Integration Testing
- Develop the integration plan by
examining the structure chart:
- big bang approach
- top-down approach
- bottom-up approach
- mixed approach
Example Structured Design
[Figure: structure chart - root calls Get-good-data, Compute-solution and
Display-solution; Get-good-data calls Get-data and Validate-data; the data
items valid-numbers and rms flow between the modules.]
Big bang Integration Testing
- The big bang approach is the simplest
integration testing approach:
- all the modules are simply put together
and tested.
- this technique is used only for very small
systems.
Big bang Integration Testing
- Main problems with this approach:
- if an error is found:
- it is very difficult to localize the error
- the error may potentially belong to any of the
modules being integrated.
- errors found during big bang
integration testing are very expensive to
debug and fix.
Bottom-up Integration Testing
- Integrate and test the bottom-level
modules first.
- A disadvantage of bottom-up testing arises:
- when the system is made up of a large
number of small subsystems.
- This extreme case corresponds to the big
bang approach.
Top-down integration testing
- Top-down integration testing starts with
the main routine:
- and one or two subordinate routines in the
system.
- After the top-level 'skeleton' has been
tested:
- the immediate subordinate modules of the
'skeleton' are combined with it and tested.
Mixed integration testing
- Mixed (or sandwiched) integration
testing:
- uses both the top-down and bottom-up testing
approaches.
- Most common approach
Integration Testing
- In the top-down approach:
- testing waits till all top-level modules are
coded and unit tested.
- In the bottom-up approach:
- testing can start only after the bottom-level
modules are ready.
Phased versus Incremental
Integration Testing
- Integration can be incremental or
phased.
- In incremental integration testing,
- only one new module is added to the
partially integrated system each time.
Phased versus Incremental
Integration Testing
- In phased integration,
- a group of related modules is added to
the partially integrated system each time.
- Big-bang testing:
- a degenerate case of phased
integration testing.
Phased versus Incremental
Integration Testing
- Phased integration requires fewer
integration steps:
- compared to the incremental integration
approach.
- However, when failures are detected,
- it is easier to debug when using incremental
testing
- since errors are very likely to be in the newly
integrated module.
System Testing
- There are three main kinds of system
testing:
- Alpha Testing
- Beta Testing
- Acceptance Testing
Alpha Testing
- System testing is carried out by the test
team within the developing
organization.
Beta Testing
- System testing performed by a select
group of friendly customers.
Acceptance Testing
- System testing performed by the
customer himself:
- to determine whether the system should
be accepted or rejected.
Stress Testing
- Stress testing (aka endurance testing):
- imposes abnormal input to stress the
capabilities of the software.
- Input data volume, input data rate,
processing time, utilization of memory, etc.
are tested beyond the designed capacity.
Performance Testing
- Addresses non-functional requirements.
- May sometimes involve testing hardware
and software together.
- There are several categories of
performance testing.
Stress testing
- Evaluates system performance
- when stressed for short periods of time.
- Stress testing
- also known as endurance testing.
Stress testing
- Stress tests are black box tests:
- designed to impose a range of abnormal
and even illegal input conditions
- so as to stress the capabilities of the
software.
Stress Testing
- If the requirement is to handle a
specified number of users or devices:
- stress testing evaluates system
performance when all users or devices are
busy simultaneously.
Stress Testing
- If an operating system is supposed to
support 15 multiprogrammed jobs,
- the system is stressed by attempting to run
15 or more jobs simultaneously.
- A real-time system might be tested
- to determine the effect of the simultaneous
arrival of several high-priority interrupts.
Stress Testing
- Stress testing usually involves an
element of time or size,
- such as the number of records transferred
per unit time,
- the maximum number of users active at
any time, input data size, etc.
- Therefore stress testing may not be
applicable to many types of systems.
Volume Testing
- Addresses handling large amounts of
data in the system:
- whether data structures (e.g. queues,
stacks, arrays, etc.) are large enough to
handle all possible situations
- Fields, records, and files are stressed to
check if their size can accommodate all
possible data volumes.
Configuration Testing
- Analyze system behaviour:
- in various hardware and software
configurations specified in the
requirements
- sometimes systems are built in various
configurations for different users
- for instance, a minimal system may serve a
single user,
- other configurations for additional users.
Compatibility Testing
- These tests are needed when the
system interfaces with other systems:
- check whether the interface functions as
required.
Compatibility testing
Example
- If a system is to communicate with a
large database system to retrieve
information:
- a compatibility test examines speed and
accuracy of retrieval.
Recovery Testing
- These tests check response to:
- presence of faults or to the loss of data,
power, devices, or services
- subject system to loss of resources
- check if the system recovers properly.
Maintenance Testing
- Diagnostic tools and procedures:
- help find the source of problems.
- It may be required to supply
- memory maps
- diagnostic programs
- traces of transactions,
- circuit diagrams, etc.
Maintenance Testing
- Verify that:
- all required artefacts for maintenance exist
- they function properly
Documentation tests
- Check that required documents exist
and are consistent:
- user guides,
- maintenance guides,
- technical documents
Documentation tests
- Sometimes requirements specify:
- format and audience of specific documents
- documents are evaluated for compliance
Usability tests
- All aspects of user interfaces are tested:
- Display screens
- messages
- report formats
- navigation and selection problems
Environmental test
- These tests check the system's ability to
perform at the installation site.
- Requirements might include tolerance for
- heat
- humidity
- chemical presence
- portability
- electrical or magnetic fields
- disruption of power, etc.
Test Summary Report
- Generated towards the end of the testing
phase.
- Covers each subsystem:
- a summary of tests which have been
applied to the subsystem.
Test Summary Report
- Specifies:
- how many tests have been applied to a
subsystem,
- how many tests have been successful,
- how many have been unsuccessful, and the
degree to which they have been unsuccessful,
- e.g. whether a test was an outright failure
- or whether some expected results of the test were
actually observed.
Regression Testing
- Does not belong to either unit test,
integration test, or system test.
- Instead, it is a separate dimension to
these three forms of testing.
Regression testing
- Regression testing is the running of the test
suite:
- after each change to the system or after
each bug fix
- it ensures that no new bug has been
introduced due to the change or the bug
fix.
Regression testing
- Regression tests assure:
- the new system's performance is at least
as good as the old system
- always used during phased system
development.
How many errors are still remaining?
- Seed the code with some known errors:
- artificial errors are introduced into the
program.
- Check how many of the seeded errors are
detected during testing.
Error Seeding
- Let:
- N be the total number of errors in the
system
- n of these errors be found by testing.
- S be the total number of seeded errors,
- s of the seeded errors be found during
testing.
Error Seeding
- n/N = s/S
- N = S x n/s
- remaining defects:
N - n = n x ((S - s)/s)
Example
- 100 errors were seeded (introduced).
- 90 of these seeded errors were found during
testing
- 50 other (unseeded) errors were also found.
- Remaining errors =
50 x (100 - 90)/90 = 5.6, i.e. about 6
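A quick sketch plugging the example's numbers into the formulas above (variable names are mine):

/* Sketch: the error-seeding estimate for S = 100 seeded errors,
   s = 90 seeded errors found, n = 50 unseeded errors found. */
#include <stdio.h>

int main(void) {
    double S = 100.0, s = 90.0, n = 50.0;
    double N_est     = S * n / s;          /* estimated total errors N      */
    double remaining = n * (S - s) / s;    /* N - n: errors still remaining */
    printf("estimated N = %.1f, remaining = %.1f\n", N_est, remaining);
    return 0;   /* prints roughly N = 55.6, remaining = 5.6 (about 6) */
}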
Error Seeding
- The kinds of seeded errors should
match closely with the existing errors:
- however, it is difficult to predict the types
of errors that exist.
- Categories of remaining errors:
- can be estimated by analyzing historical
data from similar projects.
IEEE Standard 829-1998
- Test plan identifier
- Introduction
- Test items
- Features to be tested
- Features not to be tested
- Approach
- Item pass/fail criteria
- Suspension criteria and resumption
requirements
Cont.
- Test deliverables
- Testing tasks
- Environment needs
- Responsibilities
- Staffing and training needs
- Risk and contingencies
- Approvals
References
- Software Testing: A Craftsman's Approach
- Paul Jorgensen
- Fundamentals of Software Engineering
- Rajib Mall
- Software Engineering: A Practitioner's Approach
- Roger Pressman
- Communications of the ACM, Sep 1994 edition
