Testing is a set of activities that can be planned in advance and conducted systematically
to find errors in the software.
A strategy for software testing must accommodate low-level tests that are necessary to
verify that a small source-code segment has been correctly implemented, as well as high-
level tests that validate major system functions against customer requirements.
Verification- Verification refers to the set of activities that ensure that the software
correctly implements a specified function; it is concerned with the code itself.
Validation- Validation refers to a different set of activities that ensure that the software
that has been built is traceable to customer requirements.
Testing provides the last bastion from which quality can be assessed and errors can be
uncovered. Quality cannot be tested in only at the end; it must be built in at every
step of the software process.
The developer also conducts integration testing- a testing step that leads to the
construction of the complete software architecture.
The role of the independent test group (ITG) is to remove the inherent problems
associated with letting the builder test what has been built. The developer and the ITG
work closely throughout the software project to ensure that thorough tests will be
conducted.
Software testing can be viewed as a spiral consisting of the following phases-
a) Unit Testing- Unit testing begins at the vertex of the spiral and concentrates on
each unit (i.e., component) of the software as implemented in source code.
b) Integration Testing- The focus is on design and the construction of the software
architecture.
c) Validation Testing- Requirements established as part of software requirements
analysis are validated against the software that has been constructed.
d) System Testing- The software and other system elements are tested as a whole.
e) Object-Oriented Testing- The classes are first tested individually and are then
integrated into an object-oriented architecture; a series of regression tests is run
to uncover errors due to connections and collaborations between classes and side
effects caused by the addition of new classes.
Details of Testing
Unit Testing- Unit testing focuses verification effort on the smallest unit of software
design- the software component or module.
Using the component-level design description as a guide, important control paths are
tested to uncover errors within the boundary of the module. The unit test focuses on the
internal processing logic and data structures within the boundaries of the component.
Local data structures are examined to ensure that data stored temporarily maintains its
integrity during all steps in an algorithm’s execution. All independent paths through the
control structure are exercised to ensure that all statements in a module have been
executed at least once.
To test a unit, a driver program must be developed. The driver is simply a main
program that accepts test-case data, passes that data to the component, and prints the
relevant results. Stubs replace modules that are subordinate to (called by) the component
to be tested.
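The driver-and-stub arrangement can be sketched as follows. This is a minimal illustration: the net_price component and its subordinate tax-lookup module are hypothetical, invented only to show the roles of driver and stub.

```python
def tax_rate_stub(region):
    # Stub: stands in for the subordinate tax-lookup module, answering
    # with a fixed, predictable value instead of a real lookup.
    return 0.10

def net_price(gross, region, tax_rate=tax_rate_stub):
    # Component under test: applies the tax rate to the gross price.
    return round(gross * (1 + tax_rate(region)), 2)

def driver():
    # Driver: the "main program" that feeds test data to the component
    # and prints the relevant results.
    cases = [(100.0, "EU", 110.0), (19.99, "US", 21.99)]
    for gross, region, expected in cases:
        actual = net_price(gross, region)
        print(f"net_price({gross}, {region!r}) = {actual} (expected {expected})")
        assert actual == expected

driver()
```

In a real project the stub would later be replaced by the actual subordinate module during integration testing.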
Integration Testing-
Types of Integration-
i) Smoke testing- A testing approach that is used mainly while software products
are being developed. It can be applied to complex, time-critical software projects.
The following steps are involved-
a) All software components are integrated into a build. A build includes all
data files, libraries, reusable modules and engineered components.
b) A series of tests is designed to expose errors that will keep the build
from properly performing its functions.
c) The build is integrated with other builds and the entire product is smoke
tested.
Integration testing must always identify critical modules- those that have a high level of
control, have definite performance requirements, or are complex and error-prone.
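The smoke-testing steps above can be sketched as a minimal harness. The individual checks here are hypothetical placeholders for real build verifications:

```python
# Minimal smoke-test harness: each check exercises one critical function of
# the current build; any failure rejects the build immediately.
def check_startup():
    return True  # placeholder: e.g. the application process launches

def check_database():
    return True  # placeholder: e.g. a database connection can be opened

def check_core_workflow():
    return True  # placeholder: e.g. one end-to-end transaction completes

def smoke_test(checks):
    # Run the checks in order and stop at the first failure, since a broken
    # build should not proceed to deeper integration testing.
    for check in checks:
        if not check():
            return f"FAIL: {check.__name__}"
    return "PASS"

result = smoke_test([check_startup, check_database, check_core_workflow])
print(result)
```

Run daily against the current build, such a harness gives early warning that an integration error is keeping the build from performing its functions.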
ii) Integration testing of object-oriented systems- There are three different
approaches for integration testing an OO system.
a. Thread-based testing- Integrates the set of classes required to
respond to one input or event of the system.
b. Use-based testing- The independent classes (those that use very few
other classes) are tested first, followed by the dependent classes
that use them.
c. Cluster testing- A cluster of collaborating classes is exercised by
designing test cases that attempt to uncover errors in the
collaboration.
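Cluster testing, the last of these approaches, might look like the following sketch. The Cart and Inventory classes are hypothetical collaborators invented for illustration:

```python
# Two collaborating classes (hypothetical): Cart asks Inventory to reserve
# stock. A cluster test exercises the collaboration, not each class alone.
class Inventory:
    def __init__(self, stock):
        self.stock = stock

    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            return False
        self.stock[item] -= qty
        return True

class Cart:
    def __init__(self, inventory):
        self.inventory = inventory
        self.items = []

    def add(self, item, qty):
        # Collaboration under test: Cart records an item only if
        # Inventory actually reserved it.
        if self.inventory.reserve(item, qty):
            self.items.append((item, qty))
            return True
        return False

# Cluster test: drive both classes through one scenario and check that
# their shared state stays consistent.
inv = Inventory({"widget": 5})
cart = Cart(inv)
assert cart.add("widget", 3) is True
assert cart.add("widget", 3) is False      # only 2 left: reservation refused
assert inv.stock["widget"] == 2            # no stock lost by the failed add
print("cluster test passed")
```

Unit tests of Cart or Inventory alone would not catch an error such as Cart recording an item whose reservation was refused; only exercising the collaboration does.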
Validation testing checks that the software functions in a manner that can reasonably be
expected by the customer. These expectations are defined in the software requirements
specification, which describes the user-visible attributes of the software.
A test plan outlines the classes of tests to be conducted, and a test procedure defines
specific test cases. Both the plan and the procedure are designed to ensure that all
performance requirements are attained, documentation is correct and usability
requirements are met.
Beta testing- Beta testing is conducted at end users' sites. It is a live
application of the software in an environment that cannot be controlled by the
developer.
System Testing-
a) Recovery testing- Recovery testing is a system test that forces the software to fail
in a variety of ways and verifies that recovery is properly performed. Automatic
recovery, reinitialization, checkpointing mechanisms, data recovery and restart
are evaluated for correctness. If recovery requires human intervention, the mean
time to restart (MTTR) is evaluated to determine whether it is within acceptable
limits.
b) Security testing- Security testing verifies that the protection mechanisms built into
the system will in fact protect it. The tester may attempt to acquire passwords
through external means, attack the system with custom software designed to break
down any defenses that have been constructed, overwhelm the system to deny
service to others, purposely cause system errors, or browse through insecure data.
c) Performance testing- Performance testing is done to test the run-time performance
of the software within the context of the integrated system. It is important to
measure execution intervals, log events and sample machine states on a regular basis.
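As a rough illustration of monitoring execution intervals, the following sketch times a stand-in operation over repeated runs. The operation itself is a hypothetical placeholder for a real system function:

```python
import time

# Illustrative performance probe: time an operation over many runs and
# report the interval statistics a performance test would monitor.
def timed(fn, runs=100):
    intervals = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        intervals.append(time.perf_counter() - start)
    return min(intervals), sum(intervals) / len(intervals), max(intervals)

def operation():
    # Stand-in for the system function whose run-time performance matters.
    sum(i * i for i in range(1000))

lo, avg, hi = timed(operation)
print(f"min={lo:.6f}s avg={avg:.6f}s max={hi:.6f}s")
```

In a real performance test these intervals would be compared against the performance requirements stated in the specification, and outliers would be investigated by examining the logged events and sampled machine states.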
Testing Tactics
a) Controllability- The better we can control the software, the more the testing can be
automated and optimized. Controlling the software and hardware variables is
important.
b) Decomposability- The software system is built from independent modules that can
be tested independently.
c) Simplicity- There should be functional simplicity, structural simplicity and
code simplicity.
d) Stability- Changes to the software are infrequent, controlled when they occur and
do not invalidate existing tests.
A black-box test examines some fundamental aspect of a system with little regard for the
internal logical structure of the software.
White-box testing involves close examination of procedural detail: logical paths through
the software and collaborations between components are tested by providing test cases
that exercise specific sets of conditions and/or loops.
White Box Testing- White-box testing, sometimes called glass-box testing, is a test-
case design philosophy that uses the control structure described as part of component-
level design to derive test cases. Using white-box methods, the software engineer can
derive test cases that exercise all independent paths, exercise all logical decisions,
execute all loops, and exercise internal data structures.
White-box testing also includes data-flow testing, a method that selects test paths of a
program according to the locations of definitions and uses of variables.
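Deriving white-box test cases that cover all independent paths can be illustrated with a small hypothetical function:

```python
def classify(n):
    # Two decisions => three independent paths through the code.
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# White-box test cases chosen so that every independent path (and hence
# every statement) executes at least once.
paths = {
    "negative": classify(-5),   # path 1: first decision true
    "zero": classify(0),        # path 2: second decision true
    "positive": classify(7),    # path 3: both decisions false
}
for expected, actual in paths.items():
    assert expected == actual
print("all independent paths exercised")
```

A black-box test might happen to try only positive inputs; the white-box view of the control structure is what forces a case for each branch.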
Product Metrics
1. Quantitative View-
a) Measure- A measure provides a quantitative indication of the extent,
amount, dimension, capacity or size of some attribute of a product or
process.
b) Metric- A quantitative measure of the degree to which a system,
component or process possesses a given attribute.
c) Indicator- An indicator is a metric that provides insight into the software
process, a software project or the product itself.
d) Characteristics- Metrics can be characterized by five activities-
i. Formulation – The derivation of software measures and metrics
appropriate for the representation of the software being considered.
ii. Collection- The mechanism used to accumulate data required to
derive formulated metrics.
iii. Analysis- The computation of metrics and application of
mathematical tools.
iv. Interpretation- The evaluation of metrics in an effort to gain insight
into quality.
v. Fault detection- Detect faults using developed metrics.
2. Goal-oriented software measurement- The Goal/Question/Metric (GQM) paradigm
emphasizes the need to-
a) Establish an explicit measurement goal.
b) Define a set of questions that must be answered to achieve the goal.
c) Identify well-formulated metrics that help to answer these questions.
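The GQM structure can be sketched as plain data. The goal, questions and metrics below are hypothetical examples, not part of these notes:

```python
# A GQM tree as plain data: each metric exists only to answer a question,
# and each question exists only to serve the measurement goal.
gqm = {
    "goal": "Improve the reliability of release builds",
    "questions": {
        "How often do builds fail in testing?": ["build failure rate"],
        "How quickly are defects fixed?": ["mean time to repair a defect"],
    },
}

# Flatten the tree to list every metric that the goal justifies collecting.
metrics = [m for ms in gqm["questions"].values() for m in ms]
print(metrics)
```

Keeping the goal and questions explicit, even in a simple structure like this, discourages collecting metrics that answer no question at all.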