
UNIT 7

TESTING STRATEGIES AND TACTICS


Introduction
A strategy for software testing integrates the design of software test cases into a well-planned series
of steps that result in successful development of the software.
The strategy provides a road map that describes the steps to be taken, when, and how much effort,
time, and resources will be required.
The strategy incorporates
test planning,
test case design,
test execution, and
test result collection and evaluation.

CONTD…
What is it? Software is tested to uncover errors that were made inadvertently as it was designed
and constructed.
Who does it? A strategy for software testing is developed by the project manager, software
engineers, and testing specialists.
Why is it important? Testing often accounts for more project effort than any other software
engineering action. If it is conducted haphazardly, time is wasted, unnecessary effort is expended,
and even worse, errors sneak through undetected. It would therefore seem reasonable to establish a
systematic strategy for testing software.
A Strategic Approach to Testing
General Characteristics of Strategic Testing

Testing is a set of activities that can be planned in advance and conducted systematically.
For this reason a template for software testing—a set of steps into which you can place specific test case design
techniques and testing methods—should be defined for the software process.
A number of software testing strategies have been proposed in the literature. All provide you with a template for
testing and all have the following generic characteristics:
To perform effective testing, a software team should conduct effective formal technical reviews
Testing begins at the component level and works outward toward the integration of the entire computer-based
system
Different testing techniques are appropriate at different points in time
Testing is conducted by the developer of the software and (for large projects) by an independent test group
Testing and debugging are different activities, but debugging must be accommodated in any testing strategy

Verification and Validation
Software testing is part of a broader group of activities called verification and validation that
are involved in software quality assurance
Verification (Are the algorithms coded correctly?)
◦ The set of activities that ensure that software correctly implements a specific function or
algorithm.
◦ Implementation point of view
◦ Are we building the product right?
Validation (Does it meet user requirements?)
◦ The set of activities that ensure that the software that has been built is traceable to customer
requirements
◦ Customer point of view.
◦ Are we building the right product?

ORGANIZING FOR SOFTWARE TESTING
The software developer is always responsible for testing the individual units (components) of the
program, ensuring that each performs the function or exhibits the behavior for which it was
designed.
In many cases, the developer also conducts integration testing—a testing step that leads to the
construction (and test) of the complete software architecture.
Only after the software architecture is complete does an independent test group (ITG) become involved.
The ITG is part of the software development project team in the sense that it becomes involved
during analysis and design and stays involved (planning and specifying test procedures)
throughout a large project.
The developer and the ITG work closely throughout a software project to ensure that thorough
tests will be conducted. While testing is conducted, the developer must be available to correct
errors that are uncovered.
A Strategy for Testing Software

CONTD…
A strategy for software testing may also be viewed in the context of the spiral.
Unit testing begins at the vertex of the spiral and concentrates on each unit (e.g., component,
class, or WebApp content object) of the software as implemented in source code.
Testing progresses by moving outward along the spiral to integration testing, where the focus is
on design and the construction of the software architecture.
Taking another turn outward on the spiral, you encounter validation testing, where requirements
established as part of requirements modeling are validated against the software that has been
constructed.
Finally, you arrive at system testing, where the software and other system elements are tested as
a whole.
To test computer software, you spiral out in a clockwise direction along streamlines that broaden
the scope of testing with each turn.
Levels of Testing for Software
Unit testing
◦ Concentrates on each component/function of the software as implemented in the source code
◦ Exercises specific paths in a component's control structure to ensure complete coverage and maximum
error detection
◦ Components are then assembled and integrated
Integration testing
◦ Focuses on the design and construction of the software architecture
◦ Focuses on inputs and outputs, and how well the components fit together and work together
Validation testing
◦ Requirements are validated against the constructed software
◦ Provides final assurance that the software meets all functional, behavioral, and performance requirements

CONTD…
System testing
o The software and other system elements are tested as a whole, how the software is interacting with the
real world.
o Verifies that all system elements (software, hardware, people, databases) mesh properly and that overall
system function and performance are achieved

Test Strategies for Software
Unit Testing
 Unit testing focuses verification effort on the smallest unit of software design—the software component or
module.
 Concentrates on the internal processing logic and data structures within the boundaries of a component.
 Is simplified when a module is designed with high cohesion
◦ Reduces the number of test cases
◦ Allows errors to be more easily predicted and uncovered

Targets for Unit Test Cases
Module interface
◦ Ensure that information flows properly into and out of the module
Local data structures
◦ Ensure that data stored temporarily maintains its integrity during all steps in an algorithm execution
Boundary conditions
◦ Ensure that the module operates properly at boundary values established to limit or restrict processing
Independent paths (basis paths)
◦ Paths are exercised to ensure that all statements in a module have been executed at least once
Error handling paths
◦ Ensure that the algorithms respond correctly to specific error conditions

Common Computational Errors in Execution Paths

1. Misunderstood or incorrect arithmetic precedence


2. Mixed mode operations (e.g., int, float, char)
3. Incorrect initialization of values
4. Precision inaccuracy and round-off errors
5. Incorrect symbolic representation of an expression (int vs. float)

Other Errors to Uncover
1. Comparison of different data types
2. Incorrect logical operators or precedence
3. Expectation of equality when precision error makes equality unlikely (using == with float
types; see the sketch after this list)
4. Incorrect comparison of variables
5. Improper or nonexistent loop termination
6. Improperly modified loop variables
7. Boundary value violations
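Errors 3, 5, and 6 above are easy to demonstrate. A minimal Java sketch (class and variable names are illustrative only):

public class FloatPitfalls {
    public static void main(String[] args) {
        // Error 3: expecting exact equality from floating-point arithmetic.
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;                                   // accumulates round-off error
        }
        System.out.println(sum == 1.0);                   // prints false
        System.out.println(Math.abs(sum - 1.0) < 1e-9);   // prints true: compare against a tolerance instead

        // Errors 5 and 6: a floating-point loop variable tested with != may never
        // satisfy the termination condition, so the loop fails to terminate:
        // for (double x = 0.0; x != 1.0; x += 0.1) { ... }
    }
}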

Problems to uncover in Error Handling
1. Error description is unintelligible or ambiguous
2. Error noted does not correspond to error encountered
3. Error condition causes operating system intervention prior to error handling
4. Exception condition processing is incorrect
5. Error description does not provide enough information to assist in the location of the cause of
the error

UNIT TEST PROCEDURES
Unit testing is normally considered as an adjunct to the coding step.
The design of unit tests can occur before coding begins or after source code has been generated.
A review of design information provides guidance for establishing test cases that are likely to uncover errors
in each of the categories discussed earlier.
Each test case should be coupled with a set of expected results.
Drivers and Stubs for Unit Testing
Because a component is not a stand-alone program, driver and/or stub software must often be developed for
each unit test.
Driver
◦ A simple main program that accepts test case data, passes such data to the component being tested, and
prints the returned results
Stubs
◦ Serve to replace modules that are subordinate to (called by) the component to be tested
◦ A stub uses the subordinate module’s exact interface, may do minimal data manipulation, provides
verification of entry, and returns control to the module undergoing testing.
Drivers and stubs both represent testing overhead
◦ Both must be written but do not form part of the delivered software product
◦ Like throwaway prototypes, they are discarded once unit testing is complete (a minimal sketch of both
follows below)
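For illustration, a driver and a stub might look like the following sketch; every class and method name here is hypothetical, and in practice the stub would mimic the actual subordinate module of the component under test.

// Interface of a subordinate module that the component under test normally calls.
interface TaxService {
    double taxFor(double amount);
}

// Stub: same interface as the real module, minimal behavior, verifies entry.
class StubTaxService implements TaxService {
    public double taxFor(double amount) {
        System.out.println("stub entered with amount = " + amount);
        return 0.0;                                    // fixed, predictable result
    }
}

// Component (unit) under test.
class PriceCalculator {
    private final TaxService taxService;
    PriceCalculator(TaxService taxService) { this.taxService = taxService; }
    double total(double amount, double discount) {
        double net = amount - discount;
        return net + taxService.taxFor(net);
    }
}

// Driver: a simple main program that feeds test-case data to the component and prints the results.
public class PriceCalculatorDriver {
    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator(new StubTaxService());
        System.out.println("total(100, 10) = " + calc.total(100, 10));   // expected 90.0 with the stub
    }
}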

CONTD…
Unit testing is simplified when a component with high cohesion is designed. When only one
function is addressed by a component, the number of test cases is reduced and errors can be more
easily predicted and uncovered.
Integration Testing
Defined as a systematic technique for constructing the software architecture
◦ At the same time integration is occurring, conduct tests to uncover errors associated with
interfaces
Objective is to take unit tested modules and build a program structure based on the prescribed
design
Two Approaches
◦ Non-incremental Integration Testing
◦ Incremental Integration Testing

Non-incremental Integration Testing
• Commonly called the “Big Bang” approach
• All components are combined in advance
• The entire program is tested as a whole
• Chaos results
• Many seemingly unrelated errors are encountered
• Correction is difficult because isolation of causes is complicated
• Once a set of errors is corrected, more errors occur, and testing appears to enter an endless
loop (since the actual cause of each error is unknown)

Incremental Integration Testing
The program is constructed and tested in small increments
Errors are easier to isolate and correct
Interfaces are more likely to be tested completely
A systematic test approach is applied

Three kinds
◦ Top-down integration
◦ Bottom-up integration
◦ Sandwich integration

Top-down Integration
Modules are integrated by moving downward through the control hierarchy, beginning with the main
module
Subordinate modules are incorporated in either a depth-first or breadth-first fashion
◦ Depth-first (DF): all modules on a major control path are integrated
◦ Breadth-first (BF): all modules directly subordinate at each level are integrated

(Figure: top-down integration of a module hierarchy — main module M1 with subordinates M2 through M8, referenced below)
CONTD…
Depth-first integration integrates all components on a major control path of the program
structure.
Selection of a major path is somewhat arbitrary and depends on application-specific
characteristics.
For example, selecting the left-hand path, components M1, M2, M5 would be integrated first.
Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated.
Then, the central and right-hand control paths are built.
Breadth-first integration incorporates all components directly subordinate at each level, moving
across the structure horizontally. From the figure, components M2, M3, and M4 would be
integrated first. The next control level, M5, M6, and so on, follows.
CONTD…
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted for all
components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate
stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing (discussed later in this section) may be conducted to ensure that new errors
have not been introduced.
CONTD…
Advantages
This approach verifies major control or decision points early in the test process. In a “well-
factored” program structure, decision making occurs at upper levels in the hierarchy and is
therefore encountered first.
Disadvantages
◦ Stubs need to be created to substitute for modules that have not been built or tested yet; this
code is later discarded
◦ Because stubs are used to replace lower-level modules, no significant data flow can occur until
much later in the integration/testing process, which can make it difficult to determine the cause of
errors.
Bottom-up Integration
Integration and testing starts with the most atomic modules in the control hierarchy.
Steps:
1. Low-level components are combined into clusters that perform a specific subfunction.
2. A driver is written to coordinate input and output of the test cases.
3. The cluster is tested.
4. Drivers are removed and the clusters are combined moving upward.

(Figure: bottom-up integration — low-level clusters tested with drivers, then combined and moved upward)
CONTD…
Advantages
◦ This approach verifies low-level data processing early in the testing process
◦ Need for stubs is eliminated
Disadvantages
◦ Driver modules need to be built to test the lower-level modules; this code is later discarded or expanded
into a full-featured version. As integration moves upwards, the need for test drivers lessens.
◦ Drivers inherently do not contain the complete algorithms that will eventually use the services of the
lower-level modules; consequently, testing may be incomplete or more testing may be needed later when
the upper level modules are available
Sandwich Integration
Consists of a combination of both top-down and bottom-up integration
Occurs both at the highest level modules and also at the lowest level modules
Proceeds using functional groups of modules
◦ High and low-level modules are grouped based on the control and data processing they provide for a
specific program feature
◦ Integration within the group progresses in alternating steps between the high and low level modules of
the group
◦ When integration for a certain functional group is complete, integration and testing moves onto the next
group
Reaps the advantages of both types of integration while minimizing the need for drivers and stubs
Requires a disciplined approach so that integration doesn’t tend towards the “big bang” scenario.

Regression Testing
Each new addition or change to baselined software may cause problems with functions that previously
worked flawlessly.
Each time a new module is added as part of integration testing, the software changes. New data flow paths
are established, new I/O may occur, and new control logic is invoked.
Regression testing re-executes a small subset of tests that have already been conducted
◦ Ensures that changes have not propagated unintended side effects
◦ Helps to ensure that changes do not introduce unintended behavior or additional errors
◦ May be done manually or through the use of automated capture/playback tools

CONTD…
Regression test suite contains three different classes of test cases
◦ A representative sample of tests that will exercise all software functions
◦ Additional tests that focus on software functions that are likely to be affected by the change
◦ Tests that focus on the actual software components that have been changed
Test cases must be chosen carefully so that redundant testing is avoided (a sketch of such a suite follows below).
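As a rough illustration of an automated regression suite, assuming JUnit 4 is available; the component under test is a trivial stand-in included only so the expected values can be verified by hand.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Trivial component under test, included only so the suite is self-contained.
class OrderTotals {
    static double total(double amount) { return amount * 1.0725; }                       // 7.25% tax
    static double totalWithDiscount(double amount, double off) { return total(amount * (1 - off)); }
}

// Re-executed after every change or newly integrated module; a failure signals an unintended side effect.
public class OrderTotalRegressionTest {

    // Class 1: a representative sample that exercises a core software function.
    @Test
    public void standardOrderTotal() {
        assertEquals(107.25, OrderTotals.total(100.00), 0.001);
    }

    // Class 2: a test focused on a function likely to be affected by the latest change.
    @Test
    public void discountedOrderIsStillTaxed() {
        assertEquals(53.625, OrderTotals.totalWithDiscount(100.00, 0.5), 0.001);
    }

    // Class 3: a test focused on the component that was actually changed.
    @Test
    public void zeroAmountOrderStaysZero() {
        assertEquals(0.0, OrderTotals.total(0.0), 0.001);
    }
}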
Smoke Testing
Taken from the world of hardware
◦ Power is applied and a technician checks for sparks, smoke, or other dramatic signs of fundamental
failure
Designed as a pacing mechanism for time-critical projects
◦ Allows the software team to assess its project on a frequent basis
Includes the following activities
1) The software is compiled and linked into a build. A build includes all data files, libraries,
reusable modules, and engineered components that are required to implement one or more
product functions.
2) A series of tests is designed to expose errors that will keep the build from properly performing its
function
◦ The goal is to uncover “show stopper” errors that have the highest likelihood of throwing the software
project behind schedule

Contd…
3) The build is integrated with other builds and the entire product is smoke tested daily
◦ Daily testing gives managers and practitioners a realistic assessment of the progress of the integration
testing
◦ After a smoke test is completed, detailed test scripts are executed
Contd…
For example, a smoke test may ask basic questions like "Does the program run?", "Does it open
a window?", or "Does clicking the main button do anything?" The process aims to determine
whether the application is so badly broken as to make further immediate testing unnecessary. As
the book "Lessons Learned in Software Testing" puts it, "smoke tests broadly cover product
features in a limited time ... if key features don't work or if key bugs haven't yet been fixed, your
team won't waste further time installing or testing".
A frequent characteristic of a smoke test is that it runs quickly, often on the order of a few
minutes and thus provides much quicker feedback and faster turnaround than the running
of full test suites which can take hours or even days.
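A hedged sketch of what an automated daily smoke check might look like; the command launched here is only a stand-in for the real build's executable.

import java.util.concurrent.TimeUnit;

// Answers only the most basic question, quickly: does the build start and exit cleanly at all?
public class DailyBuildSmokeTest {
    public static void main(String[] args) throws Exception {
        // Stand-in command; in practice this would launch the daily build of the product.
        Process build = new ProcessBuilder("java", "-version").start();

        // A smoke test gives feedback in minutes, not hours, so fail fast on a hang.
        boolean finished = build.waitFor(30, TimeUnit.SECONDS);
        if (!finished || build.exitValue() != 0) {
            System.out.println("SMOKE TEST FAILED: show-stopper, stop and fix before further testing");
        } else {
            System.out.println("Smoke test passed: proceed to the detailed test scripts");
        }
    }
}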
Benefits of Smoke Testing
Integration risk is minimized
◦ Daily testing uncovers incompatibilities and show-stoppers early in the testing process,
thereby reducing schedule impact
The quality of the end-product is improved
◦ Smoke testing is likely to uncover both functional errors and architectural and component-
level design errors
Error diagnosis and correction are simplified
◦ Smoke testing will probably uncover errors in the newest components that were integrated
Progress is easier to assess
◦ As integration testing progresses, more software has been integrated and more has been
demonstrated to work
◦ Managers get a good indication that progress is being made

Validation Testing
Validation testing follows integration testing
Focuses on user-visible actions and user-recognizable output from the system
Designed to ensure that
◦ All functional requirements are satisfied
◦ All behavioral characteristics are achieved
◦ All performance requirements are attained
◦ Documentation is correct
◦ Usability and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability)
After each validation test, one of two possible conditions exists
◦ The function or performance characteristic conforms to specification and is accepted, or
◦ A deviation from specification is uncovered and a deficiency list is created

Alpha and Beta Testing
They both are types of acceptance testing
Alpha testing
◦ Conducted at the developer’s site by end users
◦ Software is used in a natural setting with developers watching intently and recording errors and usage
problems.
◦ Testing is conducted in a controlled environment
Beta testing
◦ Conducted at end-user sites
◦ Developer is generally not present
◦ It serves as a live application of the software in an environment that cannot be controlled by the
developer
◦ The end-user records all problems that are encountered and reports these to the developers at regular
intervals

CONTD…
After beta testing is complete, software engineers make software modifications and prepare for
release of the software product to the entire customer base
System Testing
Software is incorporated with other system elements (e.g., hardware, people, information), and a series of
system integration and validation tests are conducted.
These tests fall outside the scope of the software process and are not conducted solely by software
engineers.
A classic system-testing problem is “finger pointing”, which occurs when an error is uncovered and each
system element developer blames the others for the problem.
System testing is a series of tests whose primary purpose is to fully exercise the software.
CONTD…
Rather than indulging in such nonsense, you should anticipate potential interfacing
problems and
(1) design error-handling paths that test all information coming from other elements of the
system,
(2) conduct a series of tests that simulate bad data or other potential errors at the software
interface,
(3) record the results of tests to use as “evidence” if finger pointing does occur, and
(4) participate in planning and design of system tests to ensure that software is adequately tested.
System Testing: Different Types
Recovery testing
Many computer-based systems must recover from faults and resume processing with little or no
downtime.
In some cases, a system must be fault tolerant; that is, processing faults must not cause overall system
function to cease.
In other cases, a system failure must be corrected within a specified period of time or severe economic
damage will occur.

◦ Tests for recovery from system faults
◦ Forces the software to fail in a variety of ways and verifies that recovery is properly performed
◦ If recovery is automatic: reinitialization, checkpointing mechanisms, data recovery, and restart are
evaluated for correctness
◦ If recovery requires human intervention: the mean time to repair (MTTR) is evaluated to determine
whether it is within acceptable limits (a sketch of an automated recovery check follows below)
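A minimal sketch of an automated recovery check, assuming a deliberately trivial checkpoint format (a counter written to a temporary file) as a stand-in for real checkpointing:

import java.nio.file.Files;
import java.nio.file.Path;

public class RecoveryTest {
    public static void main(String[] args) throws Exception {
        Path checkpoint = Files.createTempFile("checkpoint", ".txt");

        // First run: process items, checkpoint after each one, then force a fault mid-run.
        try {
            for (int i = 1; i <= 10; i++) {
                Files.write(checkpoint, String.valueOf(i).getBytes());      // checkpointing mechanism
                if (i == 5) throw new RuntimeException("simulated fault");  // forced failure
            }
        } catch (RuntimeException expected) {
            // the system "goes down" here
        }

        // Restart: reinitialization reads the checkpoint and resumes; verify no completed work was lost.
        int resumeAfter = Integer.parseInt(new String(Files.readAllBytes(checkpoint)));
        System.out.println(resumeAfter == 5
                ? "Recovery OK: restart can resume after item " + resumeAfter
                : "Recovery FAILED: checkpointed state is wrong");
    }
}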

CONTD…
Security testing
◦ Verifies that protection mechanisms built into a system will, in fact, protect it from improper access.

◦ The tester tries to penetrate the system,


◦ may attempt to acquire passwords,
◦ may attack the system with custom software designed to break down any defenses that have been
constructed;
◦ may overwhelm the system, thereby denying service to others;
◦ may browse through insecure data, hoping to find the key to system entry.

◦ Given enough time and resources, good security testing will ultimately penetrate a system. The role of
the system designer is to make penetration cost more than the value of the information that will be
obtained.
CONTD…
STRESS TESTING
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or
volume.
For example,
(1) special tests may be designed that generate ten interrupts per second, when one or two is the average
rate,
(2) input data rates may be increased by an order of magnitude to determine how input functions will
respond,
(3) test cases that require maximum memory or other resources are executed,
(4) test cases that may cause thrashing in a virtual operating system are designed,
(5) test cases that may cause excessive hunting for disk-resident data are created. Essentially, the tester
attempts to break the program (a load-generation sketch follows below).
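A minimal sketch of load generation for a stress test; the operation under test and the request volume are illustrative stand-ins.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class StressTest {
    // Stand-in for the operation under test.
    static long lookup(long key) { return key * key; }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(50);   // far above normal concurrency
        AtomicInteger failures = new AtomicInteger();

        for (long i = 0; i < 100_000; i++) {                       // an order of magnitude above normal volume
            final long key = i;
            pool.submit(() -> {
                if (lookup(key) != key * key) failures.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("failures under load: " + failures.get());   // expect 0, or graceful degradation
    }
}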
CONTD…
Performance testing

◦ Tests the run-time performance of software within the context of an integrated system
◦ Often coupled with stress testing and usually requires both hardware and software instrumentation
◦ Can uncover situations that lead to degradation and possible system failure

That is, it is often necessary to measure resource utilization (e.g., processor cycles) in an exacting fashion.
External instrumentation can monitor execution intervals, log events (e.g., interrupts) as they occur, and
sample machine states on a regular basis.
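A minimal sketch of software instrumentation for a performance test; a real measurement would use a proper benchmarking harness and repeated runs to reduce noise.

public class PerformanceProbe {
    // Stand-in for the operation whose run-time performance is being measured.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapBefore = rt.totalMemory() - rt.freeMemory();
        long start = System.nanoTime();

        work();

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        long heapDelta = (rt.totalMemory() - rt.freeMemory()) - heapBefore;
        System.out.println("elapsed: " + elapsedMs + " ms, approx. heap delta: " + heapDelta + " bytes");
    }
}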
CONTD…
DEPLOYMENT TESTING
Deployment testing, sometimes called configuration testing, exercises the software in each
environment in which it is to operate.
In addition, deployment testing examines all installation procedures and specialized installation
software (e.g., “installers”) that will be used by customers, and all documentation that will be
used to introduce the software to end users.
As an example, consider the Internet-accessible version of SafeHome software that would allow
a customer to monitor the security system from remote locations. The SafeHome WebApp must
be tested using all Web browsers that are likely to be encountered. A more thorough deployment
test might encompass combinations of Web browsers with various operating systems (e.g.,
Linux, Mac OS, Windows). Because security is a major issue, a complete set of security tests
would be integrated with the deployment test.
Two Unit Testing Techniques
Black-box testing
◦ It takes an external view of the product to be tested.
◦ Knowing the specified function that a product has been designed to perform, test to see if that function is
fully operational and error free
◦ Includes tests that are conducted at the software interface
◦ Not concerned with internal logical structure of the software
◦ Also called functional testing

CONTD…
White-box testing
◦ It takes an internal view of the product to be tested.
◦ Knowing the internal workings of a product, test that all internal operations are performed according to
specifications and all internal components have been exercised
◦ Involves tests that concentrate on close examination of procedural detail
◦ Logical paths through the software are tested
◦ Test cases exercise specific sets of conditions and loops
◦ Also called structural testing
◦ Exhaustive testing is not possible.
◦ A limited number of important logical paths can be selected and exercised. Important data
structures can be probed for validity.
White-box Testing
Also called glass-box testing
Uses the control structure part of component-level design to derive the test cases
These test cases
◦ Guarantee that all independent paths within a module have been exercised at least once
◦ Exercise all logical decisions on their true and false sides
◦ Execute all loops at their boundaries and within their operational bounds
◦ Exercise internal data structures to ensure their validity

Basis Path Testing
White-box testing technique proposed by Tom McCabe
Enables the test case designer to derive a logical complexity measure of a procedural design
Uses this measure as a guide for defining a basis set of execution paths
Test cases derived to exercise the basis set are guaranteed to execute every statement in the
program at least one time during testing

Flow Graph Notation
A flow graph (or program graph) depicts logical control flow.
1) A circle in a graph represents a node, which stands for a sequence of one or more procedural statements.
2) A node containing a simple conditional expression is referred to as a predicate node
◦ Each compound condition in a conditional expression containing one or more Boolean operators (e.g.,
and, or) is represented by a separate predicate node
◦ A predicate node has two or more edges leading out from it.
3) An edge, or a link, is an arrow representing flow of control in a specific direction
◦ An edge must start and terminate at a node
◦ An edge does not intersect or cross over another edge
4) Areas bounded by a set of edges and nodes are called regions
When counting regions, include the area outside the graph as a region, too

CONTD…
(Figure: a flowchart and its corresponding flow graph, nodes 1–11, referenced below)
Independent Program Paths
Defined as a path through the program from the start node until the end node that introduces at least one
new set of processing statements or a new condition (i.e., new nodes)
Must move along at least one edge that has not been traversed before by a previous path
Basis set for the flow graph shown above
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11

Contd…
Note that each new path introduces a new edge. The path
1-2-3-4-5-10-1-2-3-6-8-9-10-1-11
is not considered to be an independent path because it is simply a combination of already
specified paths and does not traverse any new edges.
The number of paths in the basis set is determined by the cyclomatic complexity
Cyclomatic Complexity
Provides a quantitative measure of the logical complexity of a program
Defines the number of independent paths in the basis set.
Provides an upper bound for the number of tests that must be conducted to ensure all statements
have been executed at least once
Can be computed three ways
◦ The number of regions
◦ V(G) = E – N + 2, where E is the number of edges and N is the number of nodes in graph G
◦ V(G) = P + 1, where P is the number of predicate nodes in the flow graph G
Results in the following equations for the example flow graph
◦ Number of regions = 4
◦ V(G) = 11 edges – 9 nodes + 2 = 4
◦ V(G) = 3 predicate nodes + 1 = 4

Deriving the Basis Set and Test Cases
1) Using the design or code as a foundation, draw a corresponding flow graph
2) Determine the cyclomatic complexity of the resultant flow graph
3) Determine a basis set of linearly independent paths
4) Prepare test cases that will force execution of each path in the basis set

EXAMPLE(STEP 1)
EXAMPLE(STEP 2)
CONTD…(STEP 3) INDEPENDENT PATHS
Path 1: 1-2-10-11-13
Path 2: 1-2-10-12-13
Path 3: 1-2-3-10-11-13
Path 4: 1-2-3-4-5-8-9-2-. . .
Path 5: 1-2-3-4-5-6-8-9-2-. . .
Path 6: 1-2-3-4-5-6-7-8-9-2-. . .

The ellipsis (. . .) following paths 4, 5, and 6 indicates that any path through the remainder of the
control structure is acceptable.
EXAMPLE 2
// Illustrative enclosing class (the original slide shows only the method body);
// nextday, shipcharge, and total are assumed to be instance fields.
// The markers /*n*/ identify flow-graph nodes referenced by the basis paths below.
public class ShippingCost
{
    String nextday = "no";
    double shipcharge;
    double total;

    public double calculate(int amount)
    {
/* 1*/  double rushCharge = 0;
/* 1*/  if (nextday.equals("yes"))
        {
/* 2*/      rushCharge = 14.50;
        }
/* 3*/  double tax = amount * .0725;
/* 3*/  if (amount >= 1000)
        {
/* 4*/      shipcharge = amount * .06 + rushCharge;
        }
/* 5*/  else if (amount >= 200)
        {
/* 6*/      shipcharge = amount * .08 + rushCharge;
        }
/* 7*/  else if (amount >= 100)
        {
/* 8*/      shipcharge = 13.25 + rushCharge;
        }
/* 9*/  else if (amount >= 50)
        {
/*10*/      shipcharge = 9.95 + rushCharge;
        }
/*11*/  else if (amount >= 25)
        {
/*12*/      shipcharge = 7.25 + rushCharge;
        }
        else
        {
/*13*/      shipcharge = 5.25 + rushCharge;
        }
/*14*/  total = amount + tax + shipcharge;
/*14*/  return total;
    } // end calculate
}
(Figure: flow graph for the calculate() method, nodes 1–14)
CONTD…
Step 2: Determine the cyclomatic complexity of the flow graph.
V(G) = E - N + 2
= 19 - 14 + 2
= 7
This tells us the upper bound on the size of the basis set. That is, it gives us the number of
independent paths we need to find.
CONTD…
Step 3: Determine the basis set of independent paths.
Path 1: 1 - 2 - 3 - 5 - 7 - 9 - 11 - 13 - 14
Path 2: 1 - 3 - 4 - 14
Path 3: 1 - 3 - 5 - 6 - 14
Path 4: 1 - 3 - 5 - 7 - 8 - 14
Path 5: 1 - 3 - 5 - 7 - 9 - 10 - 14
Path 6: 1 - 3 - 5 - 7 - 9 - 11 - 12 - 14
Path 7: 1 - 3 - 5 - 7 - 9 - 11 - 13 - 14
Note: This basis set is not unique. There are several different basis sets for the given
algorithm. You may have derived a different basis set.
The basis set "covers" all the nodes and edges in the algorithm.
CONTD…
Step 4: Prepare test cases that force execution of each path in the basis set.
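For instance, two of the basis paths above can be forced with the following inputs; the expected totals are hand-computed from the calculate() method shown earlier, and the driver class name is illustrative.

public class CalculateBasisPathTests {
    public static void main(String[] args) {
        ShippingCost sc = new ShippingCost();

        // Path 2: 1-3-4-14  (no rush charge, amount >= 1000)
        sc.nextday = "no";
        check("path 2", sc.calculate(1500), 1500 + 108.75 + 90.00);      // tax 108.75, shipping 90.00

        // Path 1: 1-2-3-5-7-9-11-13-14  (rush charge, amount < 25)
        sc.nextday = "yes";
        check("path 1", sc.calculate(20), 20 + 1.45 + 5.25 + 14.50);     // tax 1.45, shipping 5.25, rush 14.50

        // The remaining basis paths would be forced the same way, one input per path.
    }

    static void check(String path, double actual, double expected) {
        System.out.println(path + (Math.abs(actual - expected) < 0.001 ? " OK" : " FAILED: got " + actual));
    }
}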
GRAPH MATRICES
A graph matrix is a tabular representation of a flow graph, derived directly from it.
A data structure, called a graph matrix, can be quite useful for developing a software tool that
assists in basis path testing.
A graph matrix is a square matrix whose size is equal to the number of nodes in the flow graph.
Each row and column corresponds to an identified node, and matrix entries correspond to
connections (edges) between nodes.
(Figure: a flow graph and its corresponding graph matrix)
CONTD…
To this point, the graph matrix is nothing more than a tabular representation of a flow graph.
However, by adding a link weight to each matrix entry, the graph matrix can become a powerful
tool for evaluating program control structure during testing.
The link weight provides additional information about control flow.
In its simplest form, the link weight is 1 (a connection exists) or 0 (a connection does not exist).
But link weights can be assigned other, more interesting properties (a small array-based sketch follows this list):
• The probability that a link (edge) will be executed.
• The processing time expended during traversal of a link
• The memory required during traversal of a link
• The resources required during traversal of a link.
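A small sketch of a graph matrix held as a two-dimensional array; the four-node flow graph used here is illustrative, with simple 1/0 link weights.

public class GraphMatrixDemo {
    public static void main(String[] args) {
        // Rows and columns are flow-graph nodes 1..4; entry [i][j] = 1 means an edge from node i+1 to node j+1.
        int[][] graphMatrix = {
            {0, 1, 1, 0},   // node 1 branches to nodes 2 and 3 (a predicate node)
            {0, 0, 0, 1},   // node 2 -> node 4
            {0, 0, 0, 1},   // node 3 -> node 4
            {0, 0, 0, 0}    // node 4 is the exit node
        };

        // A row with more than one connection marks a predicate node, so V(G) = P + 1
        // can be read directly off the matrix.
        int predicateNodes = 0;
        for (int[] row : graphMatrix) {
            int connections = 0;
            for (int weight : row) if (weight != 0) connections++;
            if (connections > 1) predicateNodes++;
        }
        System.out.println("V(G) = " + (predicateNodes + 1));   // prints V(G) = 2
    }
}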
Black-box Testing
Also called behavioral testing, since it checks how a module interacts with its environment.
Complements white-box testing by uncovering different classes of errors.
Focuses on the functional requirements and the information domain of the software.
Used during the later stages of testing after white box testing has been performed.
The tester identifies a set of input conditions that will fully exercise all functional requirements for a
program
The test cases satisfy the following:
◦ Reduce, by a count greater than one, the number of additional test cases that must be designed to achieve
reasonable testing [i.e., exercise the maximum number of things with the minimum number of test cases]
◦ Tell us something about the presence or absence of classes of errors, rather than an error associated only
with the specific task at hand

Black-box Testing Categories
Incorrect or missing functions
Interface errors
Errors in data structures or external database access
Behavior or performance errors
Initialization and termination errors

Questions answered by Black-box Testing
How is functional validity tested?
How are system behavior and performance tested?
What classes of input will make good test cases?
Is the system particularly sensitive to certain input values?
How are the boundary values of a data class isolated?
What data rates and data volume can the system tolerate?
What effect will specific combinations of data have on system operation?

Equivalence Partitioning
A black-box testing method that divides the input domain of a program into classes of data from which
test cases are derived
An ideal test case single-handedly uncovers a complete class of errors, thereby reducing the total number
of test cases that must be developed
Test case design is based on an evaluation of equivalence classes for an input condition.
An equivalence class represents a set of valid or invalid states for input conditions.
An input condition is either a specific numeric value, a range of values, a set of related values, or a Boolean
condition.
From each equivalence class, test cases are selected so that the largest number of attributes of an
equivalence class are exercised at once

Guidelines for Defining Equivalence Classes
1) If an input condition specifies a range, one valid and two invalid equivalence classes are defined
◦ Input range: 1 – 10 Eq classes: {1..10}, {x < 1}, {x > 10}
2) If an input condition requires a specific value, one valid and two invalid equivalence classes are defined
◦ Input value: 250 Eq classes: {250}, {x < 250}, {x > 250}
3) If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined
◦ Input set: {-2.5, 7.3, 8.4} Eq classes: {-2.5, 7.3, 8.4}, {any other x}
4) If an input condition is a Boolean value, one valid and one invalid class are defined
◦ Input: {true condition} Eq classes: {true condition}, {false condition}

By applying the guidelines for the derivation of equivalence classes, test cases for each
input domain data item can be developed and executed.
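As a concrete illustration of guideline 1, for an input range of 1 to 10 one test value is drawn from each class; the component here is a trivial stand-in.

public class EquivalencePartitioningDemo {
    // Stand-in for a component whose input condition specifies the range 1..10.
    static boolean acceptQuantity(int quantity) {
        return quantity >= 1 && quantity <= 10;
    }

    public static void main(String[] args) {
        System.out.println(acceptQuantity(5));    // valid class {1..10}     -> true
        System.out.println(acceptQuantity(0));    // invalid class {x < 1}   -> false
        System.out.println(acceptQuantity(11));   // invalid class {x > 10}  -> false
    }
}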

Boundary Value Analysis
A greater number of errors occur at the boundaries of the input domain rather than in the "center".
Boundary value analysis leads to a selection of test cases that exercise bounding values.

Boundary value analysis is a test case design method that complements equivalence partitioning
◦ It selects test cases at the edges of a class
◦ It derives test cases from both the input domain and output domain

Guidelines for Boundary Value Analysis
1. If an input condition specifies a range bounded by values a and b, test cases should be designed with
values a and b as well as values just above and just below a and b (see the sketch after this list)
2. If an input condition specifies a number of values, test case should be developed that exercise the
minimum and maximum numbers. Values just above and just below the minimum and maximum are also
tested
3. Apply guidelines 1 and 2 to output conditions; produce output that reflects the minimum and the
maximum values expected; also test the values just below and just above.
For example: Assume that a temperature versus pressure table is required as output from an
engineering analysis program. Test cases should be designed to create an output report that produces the
maximum (and minimum) allowable number of table entries.
4. If internal program data structures have prescribed boundaries (e.g., an array or a table has a defined limit
of 100 entries), design a test case to exercise the data structure at its minimum and maximum boundaries
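Applying guideline 1 to the same 1..10 range used in the equivalence partitioning example gives test cases at and just beyond each boundary; a minimal sketch:

public class BoundaryValueDemo {
    // Stand-in component with an input range bounded by a = 1 and b = 10.
    static boolean acceptQuantity(int quantity) {
        return quantity >= 1 && quantity <= 10;
    }

    public static void main(String[] args) {
        int[] boundaryCases = {0, 1, 2, 9, 10, 11};   // just below a, a, just above a, just below b, b, just above b
        for (int q : boundaryCases) {
            System.out.println(q + " -> " + acceptQuantity(q));
        }
    }
}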

