Objective:
The Foundation Level qualification is aimed at professionals who need to demonstrate practical
knowledge of the fundamental concepts of software testing. This includes people in roles such as
test designers, test analysts, test engineers, test consultants, test managers, user acceptance
testers and IT professionals.
Note: This document was prepared from web-based training. After comparing it with the book and
merging the book's information, the following changes are noticeable in the document:
1. The italic lines in this document are not part of the standard ISTQB book, i.e. they are extra.
2. Sentences marked "3DS" indicate information from the textbook that was missing and has been
added to this document.
The Foundation Level exam is characterized by:
40 multiple-choice questions
a scoring of 1 point for each correct answer
a pass mark of 65% (26 or more points)
a duration of 60 minutes (or 75 minutes for candidates taking exams that are not in their native or
local language)
Exam questions are distributed across K-levels, which represent increasing depths of
knowledge, as shown in the following table:

             K1   K2   K3   K4   Total
Foundation   20   12    8    0      40
Exam questions are distributed across syllabus chapters as shown in the following table:

             Ch.1   Ch.2   Ch.3   Ch.4   Ch.5   Ch.6   Total
Foundation      7      6      3     12      8      4      40
Questions at different K-levels may be awarded different pre-defined scores to reflect their
cognitive level.
The Foundation and Advanced exams cover four different K-levels (K1 to K4):
Introduction to software testing [Chapter-1: Fundamentals of testing]
The necessity of software testing
Software systems form an integral part of our daily lives; therefore software failures can
prove expensive and result in the loss of time, effort and reputation. For critical software, these
failures can cause major financial loss, injury or the loss of life.
Faulty software is the result of errors (or mistakes) made while designing and building the
software. Most software issues can be divided into three categories: errors (mistakes),
defects (faults or bugs) and failures.
Root cause analysis is used to reduce the likelihood of an error occurring again in future. It
involves trying to trace a failure of the system all the way back to its root cause, and brings
overall improvement in the quality of those systems.
Defects can arise in four stages of a product life cycle: analysis, design, development and
implementation. The extent to which defects are removed in the phase in which they were
introduced is called phase containment. The cost of investigating and correcting a defect
increases the later in the life cycle it is found.
False-fail result (false positive result) – a test result in which a defect is reported although no
such defect actually exists in the test object.
False-pass result (false negative result) – a test result which fails to identify the presence of a
defect that is actually present in the test object.
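These two outcomes can be illustrated with a minimal sketch; the functions and expected values below are invented for illustration, not taken from the syllabus:

```python
def multiply(a, b):
    """Correct implementation: the test object contains no defect."""
    return a * b

def divide(a, b):
    """Defective implementation: fails to guard against b == 0."""
    return a / b

def false_fail_test():
    # The expected value 7 is wrong, so this test reports a "defect"
    # in multiply() although none exists: a false-fail (false positive).
    return multiply(2, 3) == 7

def false_pass_test():
    # This test never exercises the risky input b == 0, so it passes
    # and misses the real defect in divide(): a false-pass (false negative).
    return divide(6, 3) == 2
```

Both misleading results come from the tests, not the test object: one from a wrong oracle, the other from weak input selection.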
Quality is how well a component, system or process is designed, i.e. the degree to which a
component/system meets specified requirements, needs and expectations. Testing helps to
improve confidence that a product meets quality criteria. Testing effort is focused partly on
verification and partly on validation.
What is software testing? The standard that gives definitions of testing terms is BS 7925-1.
The main objective of testing is to discover defects; other objectives include preventing
defects and getting a measure of the quality of the software.
1. Uncovering defects
2. Gathering confidence about the level of quality and providing information for decision
making
3. Preventing defects
General software testing principles – provide a standard framework for testers to conduct tests
and discover defects. They include:
1. Testing shows the presence of defects – but cannot prove that there are no defects.
Testing reduces the probability of undiscovered defects remaining in the software, but
even if no defects are found, this is not proof of correctness.
2. Exhaustive testing (complete testing) is impossible – testing everything is not feasible
except in trivial cases. Risk analysis and priorities should be used to focus testing efforts.
3. Absence-of-errors fallacy – finding and fixing defects does not help if the system built is
unusable and does not fulfill user needs and expectations.
Applied software testing principles – provide a standardized format for creating test plans and
act as a guide to effective testing.
4. Early testing – testing activities should start as early as possible in the software life cycle
and be focused on defined objectives.
5. Defect clustering – testing effort is focused on modules according to the defect density
observed so far. This is based on the Pareto principle, which suggests that 80% of defects
are caused by 20% of modules.
6. Pesticide paradox – repeating the same test cases won't find new defects; test cases need
to be regularly reviewed and revised to find new defects.
7. Testing is context dependent – testing is performed according to the context of the
software, e.g. testing safety-critical software differs from testing an e-commerce site.
Process activity and psychology of testing
Fundamental software test process
5. Test closure activities – consolidate and document data from the completed testing,
e.g. ensuring all reported incidents are closed. Records are archived and handed over to the
maintenance team for regression testing.
Exit criteria and test closure
a. Coverage criteria – decide the test cases that must be included during the exit
criteria evaluation process.
b. Acceptance criteria – check whether the software under test has passed or failed
in the overall process.
1. Check test logs – check the test log and identify defects reported and fixed.
2. Estimate additional requirements – depending on the test log analysis, additional test
requirements are determined; this is also based on business risks.
3. Prepare test summary report – a summary report is prepared for the stakeholders of the
project, which helps in critical decisions. This document summarizes the testing activity and
the evaluation of test results against the exit criteria.
Test closure ensures the completion of all test activities and sign-off of the end product, e.g.
delivering the software to the client. Test closure activities include:
1. Checking deliverables
2. Archiving testware
3. Submitting testware to the maintenance team
4. Evaluating the overall test process
1. Software developer
2. Peer reviewer
3. Internal tester
4. Third party reviewer
Contrasting software testers and developers – traits of testers with the right mindset include,
[Chapter -2: Testing through software life cycle]
Software development model and Testing
Software development models
Depending on the availability of time, resources, allocated budget and the scope of the project,
a development strategy is decided; these strategies are known as software development models.
1. Sequential – this model has distinct activities, each initiated after completion of the
preceding activity, e.g. the waterfall model.
2. Iterative – in this model, developers build a product through a series of iterative steps.
The main drawback of the sequential model is that testing happens towards the end of the
development life cycle, when bugs are hard to fix because the code is complex, and new bugs
can be introduced at this stage.
The V-model is a sequential model that improves upon the waterfall model. As the product is
progressively developed at each life-cycle stage, testers also work at the various stages on test
activities related to that stage. Testing is divided into four levels:
1. Component testing – each component of the product is tested for defects. The detailed
design of the product is used to create the component test plan.
2. Integration testing – used to test the inter-relationships between components of the
product, e.g. interactions between components and with computer hardware/software.
The overall design of the product is used to create the integration test plan.
3. System testing – developers integrate the components and build functional software
that exhibits the features and functions. Testers create the system test plan based on the
system requirements and execute it.
4. Acceptance testing – acceptance testing is conducted by client representatives. The aim is
to validate that the product meets their requirements. Testers create the acceptance test plan
after the client's requirements have been identified.
Even the V-model has a drawback: the product is verified against the client's requirements only
towards the end of the development process. Fixing bugs or adding missing features at this stage
is difficult, expensive and time consuming.
The iterative model eliminates this drawback of the V-model. In this model, developers build a
product through a series of iterative steps. Examples of iterative or incremental development
models are prototyping, Rapid Application Development (RAD), the Rational Unified Process
(RUP) and agile development (e.g. Scrum).
The Dynamic Systems Development Method (DSDM) is a refined RAD with controls to
stop the process getting out of control.
1. Identify requirements
2. Create a design
3. Develop code
4. Test code
Testers can test the product while it is being developed, which aids in identifying bugs easily and
accurately. Also, client representatives can take part in the development and suggest changes to
the product during development.
In the iterative model,
Rapid Application Development
RAD is formally a parallel development of functions with subsequent integration. This can
give the customer something early to try and provide feedback on.
Agile Development
Scrum is widely adopted as the management approach, and XP (extreme programming) is used as
the main source of development ideas. Their characteristics are:
1. The test basis is less formal and subject to change.
2. Component testing is done by developers, leaving the tester little work at that level,
although system testing or other non-functional testing may not fit into a sprint.
3. Testers must adapt to this methodology.
4. Time constraints and pressure on delivery are high.
5. Regression testing becomes complex in later stages.
Component and integration testing [Test Levels]
These test levels help to locate code-related defects and to determine whether pieces of code
interact with each other as intended.
Component testing helps to identify errors in each component, such as an object, program or
module of the software application. Formal documentation is usually not maintained to record
defects; instead, defects are fixed as soon as they are detected.
A common approach to component testing is the test-first approach, which is an iterative
process (as in the XP model).
Not all components may be available during testing, so temporary components are created:
1. Stubs – stubs (also called "mock objects") simulate called components, e.g. if a
component calls an object that is not ready, you can use a stub in place of the object.
2. Drivers – a driver (also called a test harness or scaffolding) acts as a substitute for a
component that calls the component you are testing, e.g. a driver is used in place of an
unavailable button that calls the authentication system you're testing.
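Both ideas can be sketched with Python's unittest.mock; OrderProcessor and the payment gateway below are hypothetical names invented for this example:

```python
from unittest.mock import Mock

# Hypothetical component under test: it *calls* a payment gateway
# that is not yet available, so the gateway is replaced by a stub.
class OrderProcessor:
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        # Delegates the charge to the (possibly stubbed) gateway.
        return "paid" if self.gateway.charge(amount) else "declined"

# Stub: simulates the called component with a canned answer.
stub_gateway = Mock()
stub_gateway.charge.return_value = True

# Driver: a substitute for the missing caller, exercising the
# component under test directly.
def driver():
    processor = OrderProcessor(stub_gateway)
    return processor.checkout(10)
```

The stub stands in for a component the code under test calls; the driver stands in for the code that would normally call it.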
Non-functional requirements are also tested at the component testing level. The robustness
of the component is also tested, e.g. how well the component responds to negative inputs.
Integration testing performed after system testing is known as system integration testing, which
analyzes the interaction between a complete software product and other software systems.
test the interactions between software components more effectively than the top-down
approach.
c. Functional incremental – integration and testing take place on the basis of the
functions or functionality as documented in the functional specification.
In most cases, incremental integration testing is preferable to big-bang integration testing
because of its advantages over that approach.
System testing involves testing the application as a single entity after integration. It verifies
whether the application meets all the requirements set initially. It is conducted by a dedicated
testing team and involves assessing the application on the basis of its external features, not the
underlying code.
Testing functional requirements starts with a specification-based, black-box approach. Black-box
testing is so called because it takes no interest in the internal structure of the system or
component.
White-box or glass-box testing is concerned with the internal workings of a system.
The attributes and sub-attributes tested as non-functional requirements are,
Acceptance testing is conducted by representatives of the client. The testers verify whether
the application meets all the requirements, both functional and non-functional.
1. Alpha testing – alpha testing involves the team and a set of prospective customers, to
ensure the application meets all the specified requirements.
2. Beta testing – after successful alpha testing, the application is sent to another set of
prospective customers for beta testing, to simulate the real environment. Depending
on their reports, the application is fixed and released to market.
Alpha and beta testing are carried out on applications meant as COTS (commercial off-the-shelf)
software. Since alpha testing is done at the developer's site, it is referred to as factory
acceptance testing, while beta testing, which simulates the real environment, is referred to as
site acceptance testing.
1. Factory acceptance testing – client representatives drop by to test the product at the
developer's site.
2. Site acceptance testing – the software product is sent to the client's site to perform
acceptance testing there.
Software Test Types
Functional testing – tests the functionality of a selected component.
Non-functional testing – tests the behavioral or quantified characteristics of systems and
software, e.g. reliability, performance.
Change-based testing – includes regression and confirmation testing (FV testing) and involves
re-running tests to ensure the software still works correctly.
Functional testing is the process of testing a software product to determine its specified
behavior or functionality. Tests are performed against various quality characteristics as per ISO
9126:
Suitability – determines whether the product performs as expected for its intended use. It tests
the capability of the software to provide an appropriate set of functions for the specified tasks
and user objectives.
Interoperability – tests the capability of the system to interact with other specified
components or systems.
Accuracy – involves ensuring that a faulty product does not leave the production line and cause
errors during beta testing.
Functional tests are derived from requirement/use case specifications. Because only the program
specification is considered, not the design or implementation of the program, this type of test
falls under black-box/specification-based testing.
Business-process-based testing – use cases provide a basis for test cases from a business
perspective, reflecting day-to-day business scenarios and needs.
Non-functional testing can be performed at all test levels; the behavioral characteristics of
systems and software are tested and quantified on a varying scale.
Performance testing – tests the degree to which a system fulfills its specified functions within
given processing time and throughput rate constraints.
Stress testing – evaluates a system at and beyond the boundaries of its specified requirements.
Reliability testing – tests how reliably a system performs over a given period of time, i.e. how
long it runs without breaking.
Portability testing – tests how easily a system can be transferred from one platform to another.
According to ISO 9126 there are six quality characteristics, of which five are covered by
non-functional testing.
The first two:
Reliability – a software product is reliable when it performs its required functions under stated
conditions. It can be divided further into fault tolerance, maturity, recoverability and compliance.
Usability – a software product is said to be usable if users easily understand and like the
interface and find it easy to operate. It is divided into understandability, learnability,
operability, attractiveness and compliance.
The other three:
Maintainability – the ease with which the product can be modified to correct defects, meet new
requirements or adapt to a changed environment is called maintainability. It is divided into
analyzability, changeability, stability, testability and compliance.
Portability – when a software product can be transferred easily from one hardware/software
environment to another, it is said to be portable. It is divided into adaptability, installability,
co-existence, replaceability and compliance.
Functionality – the process of testing a software product to determine its specified behavior or
functionality. Sub-characteristics include suitability, accuracy, security, interoperability and
compliance.
Structural and Change-based Software Testing
Structural testing is concerned with the internal architecture and workings of the software, and
is therefore referred to as white-box or glass-box testing.
Structural testing measures the amount of testing done by checking the coverage of a set of
structural elements or coverage items. In structural testing, tests are based on the logic of the
application code, as opposed to the requirements in the case of functional testing.
Confirmation testing – after fixing a defect, the software needs to be retested by re-executing
the test cases that failed last time. The product is considered free of the known defect only
when it passes the retest.
Regression testing – confirmation testing doesn't guarantee a quality product; the change may
cause failures elsewhere, e.g. at integration level. A set of tests is designed to check that,
despite the changes, the system works as expected. These tests are called regression tests.
Regression testing involves repeating tests. Techniques to decide which tests to repeat are:
1. Traceability – tracing back to the requirements, identifying affected behavior and
selecting tests.
2. Change analysis – analyzing how changes could affect other portions of the system,
which in turn requires an understanding of the code and system design.
3. Quality risk analysis – retesting is based on business risk and its priority.
The regression suite should be maintained because, over time, content gets added and the test
suite becomes heavy. So either a selection of tests is run, or test cases that haven't detected a
defect for a long time are eliminated.
Maintenance Software Testing
Maintenance testing activities are carried out on software in production, which is
modified due to defects, improvements or to adapt to a modified environment. Maintenance
testing confirms that the software is once again free of known defects after modification.
Maintenance includes,
To reduce the time spent on maintenance testing, impact analysis is carried out, which
determines the areas that require maintenance testing after the maintenance work/change.
[Chapter-3: Static techniques]
Static Software Testing Techniques
Static Techniques and the Software Test Process
In the initial stages of testing, testing helps to verify that the product meets the requirements;
at later stages, testing checks whether defects occur during product execution. Depending on
whether the software is executed, testing is categorized as:
1. Static – testing done without executing the software, used to detect defects in the early
stages of software development. Static testing techniques include:
a. Reviews – detecting defects in documents such as requirements and
specifications and verifying conformance to standards; defects left in these
documents may otherwise result in defects after implementation.
b. Static analysis – reviewing source code with automated tools, which detect
violations of programming standards and code syntax.
2. Dynamic – testing by executing the software product. The goal is to detect runtime defects.
Dynamic testing techniques involve:
a. Specification-based – requirements/specifications drive the inputs for
specification-based testing, which treats the software as a black box with inputs
and outputs. This technique tests what the software does.
b. Structure-based – structure-based/white-box techniques examine the software
implementation by evaluating the code structure. Used in component and integration
testing, especially for code coverage.
c. Experience-based – knowledge and skills acquired through experience help to
evaluate the software. This technique helps when the specification is not
adequate and in time-constrained situations.
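A toy illustration of static analysis: the checker below inspects source code without executing it and flags names that are assigned but never read, one kind of data-flow defect. Real static analysis tools do far more; this sketch only shows the principle:

```python
import ast

# Source to analyze; note that `tax` is assigned but never used.
SOURCE = """
def total(prices):
    tax = 0.2
    result = sum(prices)
    return result
"""

def unused_assignments(source):
    # Parse the code into a syntax tree -- no execution takes place.
    tree = ast.parse(source)
    assigned, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)   # name written to
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)     # name read from
    # Names written but never read are suspicious.
    return assigned - loaded
```

Running `unused_assignments(SOURCE)` reports `tax` without ever executing `total`, which is exactly the static/dynamic distinction described above.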
The Review Process in Software Testing [Review Process]
A review is a static software testing technique executed on documents such as functional and
requirement specifications. Reviews help in identifying ambiguities, deviations from specified
standards and defects in documents. They also enhance knowledge of the product.
1. Informal – the author asks a peer or technical lead in the same domain to take a glance
at the document and validate it. It's very cost-effective.
3. Technical review (peer review) – involves a team made up of technical experts in the
review process. This type of review ensures that the technical concepts, models and technical
standards identified for the product's development are accurate and valid for the product. This
review is applied to products of a critical nature.
4. Inspection – this review is carried out by a moderator and a peer group. The leader plans
and leads the review activity. The review is performed by team members led by a trained
moderator. Using rules and checklists, the inspection team finds and records all defects in an
inspection report. Defects are tracked until corrective measures on the document have been
undertaken and closed; only then is the inspection review considered complete.
Any of the above review types can be chosen depending on the criticality or nature of the
software. The formality levels of reviews are categorized below:
The success of a review depends on:
1. Defined objectives – a review must have clear, predefined objectives, which helps
reviewers choose the proper review technique.
2. Nature of the review team – the right people, with expertise/knowledge in the product
domain, should be involved in the review process to locate defects and provide
recommendations.
3. How defects are communicated – defects found during review should be communicated
objectively and constructively, with no personal attacks on the capability of the author or
review team.
4. Review technique used – the choice of review technique varies per need. A software
enhancement might require an informal review or a walkthrough, while new software
development requires inspection to ensure quality and standards.
5. Review process – when followed as planned, the review process ensures that the time and
cost incurred in the review are not wasted. It guides the course of action for review and rework.
i. Focus on higher-level documents, e.g. does the design comply with the
requirements.
ii. Focus on standards, e.g. internal consistency, clarity, naming
conventions, templates.
iii. Focus on related documents, e.g. interfaces between software functions.
iv. Focus on the usage of the document, e.g. for testability and maintainability.
The goal of the kick-off is that everyone understands the document under review and its
purpose, and to clarify doubts (Q&A).
4. Review meeting – the review team discloses the defects and severity levels found during
the preparation stage. Defects are discussed (handling defects and decisions) and logged per
severity level (critical/major/minor). Minutes of the meeting are recorded and the defect log is
given to the author. Note: spelling errors are not part of the defect classification; they are
noted and given to the author.
The review meeting consists of three phases, depending on the review type:
a. Logging phase – only defects are logged. A good logging rate is 1 to 2 defects
logged per minute.
b. Discussion phase – defects are discussed and justified.
c. Decision phase – at the end of the review, a decision is taken based on the exit
criteria. If the number of defects found per page exceeds a certain level, the document is
re-reviewed after corrections.
5. Rework – the author of the document fixes the identified defects as a priority, ensuring
changes are easily traceable. Issues not fixed should be accompanied by a proper reason from
the author.
6. Follow-up – the moderator follows up on the corrections made for the logged defects:
a. Checking that the defects have been addressed
b. Gathering metrics
c. Checking exit criteria (for formal review types)
The moderator also collects information such as the number of defects found, the time spent to
correct them and the total effort spent on the review. This information is stored for future
analysis, based on which the leader suggests improvements in the process.
A formal review process is characterized by defined roles and responsibilities assigned to each
member of the review team. The roles include,
Static analysis in software testing [Static analysis by tools]
Static analysis involves the analysis of software artifacts, e.g. requirements, design and
code, without actually executing the software. It detects code-related defects through analysis,
without executing the code. Static analysis helps to improve the quality and maintainability of
the code, and aids in identifying gaps and logical errors in software models (pictorial
representations of object relations). Static analysis finds defects rather than failures.
Uses:
1. Detects defects at an early stage of development, which reduces cost and time.
2. Detects breaches of standards and metrics in code.
3. Helps to reduce the number of defects found in dynamic testing.
Code Metrics – static analysis also deals with code metrics, or attributes of code structure. Code
metrics measure the structural complexity of code and can be used to indicate risky problem
areas, e.g. 20% of the code causes 80% of the problems, as stated in defect clustering. Complexity
metrics identify high-risk, complex areas.
Code Structure – there are many kinds of structural measure to consider:
1. Control flow structure – identifies the sequence in which instructions are executed. It
helps in identifying unwanted or unused/dead code. A code metric related to control
flow is cyclomatic complexity.
2. Data flow structure – follows the trail of a data item as it is accessed and modified by the
code. Defects can be found such as referencing a variable with an undefined value, or
variables that are never used.
3. Data structure – refers to the organization of the data itself, e.g. data arranged in a list,
queue or stack. Sometimes a program is complex because it has a complex data
structure, rather than because of complex control or data flow.
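The control-flow metric mentioned above can be sketched roughly by approximating cyclomatic complexity as the number of decision points plus one. Real metric tools count more constructs (boolean operators, exception handlers, etc.); this is only the idea:

```python
import ast

# Sample function with three decision points: an if, a while,
# and a conditional expression.
SOURCE = """
def classify(n):
    if n < 0:
        return "negative"
    while n > 100:
        n -= 100
    return "small" if n < 10 else "large"
"""

def cyclomatic_complexity(source):
    # Parse the code and count branching constructs in the syntax tree.
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.IfExp, ast.For, ast.While))
        for node in ast.walk(tree)
    )
    # McCabe's metric for a single-entry, single-exit routine.
    return decisions + 1
```

For the sample above the sketch reports 4 (three decisions plus one), flagging functions whose value grows large as candidates for the high-risk areas complexity metrics are meant to identify.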
[Chapter-4: Test design techniques]
Development and Categories of Software Test Design [Test development process]
The software test development process essentially covers dynamic testing techniques,
because this type of testing requires you to design tests. To test code for runtime failures, you
need to identify test conditions, specify test cases and create test procedures.
1. Test condition – a critical element of any test is deciding its focus, which becomes the
test condition. Test conditions are derived from the test basis. The test basis could be a
system requirement, a technical specification, the code itself (for structural testing) or
a business process. Many test conditions are possible for a feature, but testing all of
them may not be feasible; only the important test conditions are exercised. (Think of
them as test possibilities.)
E.g. for a user authentication feature, test conditions may include checking that the system
accepts an alphanumeric password and a string of letters as the user name.
2. Test case – a test case is needed to test one or more test conditions. A test case uses
elements such as input values, preconditions and expected results. In order to know
what the system should do, we need information about the correct behavior of the
system; this is called an oracle (or test oracle).
E.g. in user authentication, password validation against the range of allowed values gives the
test cases. The software is then tested against specific values and outcomes.
3. Test procedure – the procedure to follow while implementing test cases to verify test
conditions is referred to as a test procedure or test script (or manual test script). It details
the steps to be performed and the order in which they need to be executed.
E.g. in user authentication, for the password field first test whether it accepts six characters
and then check whether it accepts alphanumeric characters.
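The three concepts can be sketched together in code; `is_valid_password` below is a hypothetical oracle invented for illustration, not a rule from the syllabus:

```python
# Hypothetical oracle: a password is valid when it is at least
# six characters long and alphanumeric.
def is_valid_password(password):
    return len(password) >= 6 and password.isalnum()

# Test cases for the test condition "the system accepts alphanumeric
# passwords": each pairs input values with an expected result.
TEST_CASES = [
    ("abc123", True),    # six alphanumeric characters
    ("abc12", False),    # too short
    ("abc 123", False),  # contains a space, not alphanumeric
]

def run_test_procedure():
    # Test procedure: execute the cases in order, collect verdicts
    # (True means actual behavior matched the expected result).
    return [is_valid_password(pwd) == expected for pwd, expected in TEST_CASES]
```

The condition says *what* to test, the cases fix concrete inputs and expected results, and the procedure fixes the execution order.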
The IEEE 829 Test Documentation Standard may be used to capture test conditions, test cases
and test procedures, which aids a well-controlled and unambiguous test process.
I. Horizontal (test documentation) – e.g. in system testing, from test condition to test case
to test script.
II. Vertical (development documentation) – e.g. from requirements to components.
Test Design Specification
Test Procedure Specification
In dynamic testing, code is executed to check for defects. Several specification-based
techniques are available within dynamic testing to create test conditions and test cases:
1. Equivalence partitioning – test conditions are divided into logical groups called
partitions if they exhibit similar behavior. A test case is created for one condition from
each partition, since its behavior represents the behavior of the other conditions within
the same partition, hence avoiding multiple test cases for the same validation.
2. Boundary value analysis – BVA is used to assess the inputs at the edge, or boundary, of
each equivalence partition.
3. Decision table testing – decision tables help to validate system requirements that
contain logical conditions. The specification of the system is analyzed, and its conditions
and actions are tabulated to state the possible combinations of results.
4. State transition technique – uses a state transition diagram to test software that
modifies its response depending on its current condition or the previous history of states.
A state table shows the relationship between states and inputs, and can highlight transitions
that are invalid. Tests can be designed to cover these invalid states as well.
5. Use case testing – testing based on real-world use cases/different types of interactions
between actors and systems. This technique helps in acceptance testing.
Not all black-box design techniques have an associated measurement technique.
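Equivalence partitioning and boundary value analysis from the list above can be sketched for a hypothetical rule (applicants aged 18 to 65 inclusive are accepted; the rule and values are invented for illustration):

```python
# Hypothetical rule under test: ages 18..65 inclusive are eligible.
def is_eligible(age):
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition
# stands in for every other value in that partition.
PARTITION_VALUES = {"below": 10, "valid": 40, "above": 70}

# Boundary value analysis: the edges of the valid partition
# and their immediate neighbours.
BOUNDARY_VALUES = [17, 18, 65, 66]

def partition_results():
    return {name: is_eligible(v) for name, v in PARTITION_VALUES.items()}

def boundary_results():
    return [is_eligible(v) for v in BOUNDARY_VALUES]
```

Three partition values cover the whole input space without redundant cases, while the four boundary values target the edges where off-by-one defects typically hide.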
[Structure-based techniques]
1. Test coverage measurement – test coverage is the extent to which a test suite exercises
a specific test item.
2. Structure-based test case design – if additional coverage is required, the code structure is
used to create test cases using different coverage methods.
E.g. you've written a program that has 20 statements. Your tests cover 13 of the 20 statements,
which means test coverage is 65%.
Types of Coverage
Depending on the system elements to be tested, you can measure coverage at various levels:
1. Component testing – each unit of code, such as a module, object or class, is tested
separately. Unit testing is typically done by developers.
2. Integration testing – the integrated units of code are tested. Coverage measures the
specific interactions between the units or modules in the code that have been tested.
3. System or acceptance testing – the code is tested as a whole; coverage items include
client requirements and user-interface elements such as menu options.
Coverage can also be measured for specification-based test design techniques, e.g. EP (% of
equivalence partitions exercised), BVA, decision tables and state transition testing.
1. Decide on the structural elements to be used, i.e. the coverage items to be counted.
2. Count the structural elements or items.
3. Instrument the code.
4. Run the tests for which coverage is required.
5. Using the output from the instrumentation, determine the percentage of elements or
items exercised.
Here step 3 means inserting code that records the usage of each item; step 5 analyzes the data
generated by the step-3 code.
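Steps 3 to 5 can be illustrated with a toy "instrumented" function that records each branch (the coverage item here) as it executes; the function and branch names are invented for illustration:

```python
# Step 3: instrument the code -- each branch records itself when run.
executed = set()

def grade(score):
    if score >= 50:
        executed.add("pass-branch")
        return "pass"
    executed.add("fail-branch")
    return "fail"

# Step 2: the counted coverage items.
ALL_ITEMS = {"pass-branch", "fail-branch"}

# Step 5: compute the percentage of items exercised so far.
def coverage_percent():
    return 100 * len(executed) / len(ALL_ITEMS)
```

Running `grade(80)` alone yields 50% coverage; adding `grade(30)` exercises the remaining branch and brings it to 100%. Coverage tools automate exactly this record-then-report cycle.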
Testing tools can be used to measure coverage and to create test cases, which improves
productivity and efficiency.
Pseudocode does not belong to any particular programming language; it is a simplified version
of source code that typically covers all the important control structures used in programming.
1. Non-executable – Certain lines of code do not perform any action; these are called non-executable
code and include statements that declare variables.
2. Executable – Executable statements instruct the computer to take a specific action
and return a result. Depending on the function to perform, executable code can be
structured as:
a. Sequence – Code statements executed one after another in a linear
fashion, with no decision points or iterations. E.g. a = b + c
b. Selection – Uses decision points in the code. E.g. if-else
c. Iteration – Used when a specific step needs to be repeated a number of times.
E.g. for, while and do-while loops.
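The three structures can be shown in one short snippet (extra material; the function is invented for illustration):

```python
# Illustrates the three ways executable code can be structured.
def sum_of_positives(values):
    total = 0            # sequence: statements run one after another
    for v in values:     # iteration: the loop body repeats for each item
        if v > 0:        # selection: a decision point in the code
            total = total + v
    return total

print(sum_of_positives([3, -1, 4]))  # 7
```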
Statement coverage measures the percentage of executable statements exercised by a test
suite.
Control flow diagrams are used to depict program structure. Understanding the control flow
diagram makes it easy to analyse the code and arrive at test cases. It uses two symbols: a rectangle
(for sequential statements) and a diamond (for decision statements).
Decision coverage measures the percentage of decision outcomes exercised by a test suite.
100% decision coverage guarantees 100% statement coverage, but 100% statement coverage
does not necessarily mean 100% decision coverage.
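A small example (extra material, not from the syllabus) shows why the implication only holds one way: an `if` with no `else` can reach 100% statement coverage with a single test while leaving one decision outcome untested.

```python
# An `if` without an `else` shows why 100% statement coverage does not
# guarantee 100% decision coverage (illustrative function).
def apply_discount(price, is_member):
    if is_member:          # decision point: True and False are two outcomes
        price = price * 0.9
    return price

# This single test executes every statement (100% statement coverage),
# but only the True outcome of the decision (50% decision coverage):
assert apply_discount(100, True) == 90.0

# A second test is needed to exercise the False outcome as well:
assert apply_discount(100, False) == 100
```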
Programs often contain numerous conditions, which can lead to numerous paths. To
achieve more thorough coverage, advanced techniques such as condition and path testing can be used.
100% path coverage implies 100% decision and statement coverage. These techniques are
applied in safety-critical systems.
Condition coverage checks and evaluates the outcomes of each individual condition. It is typically
performed after decision coverage.
Path coverage is the most comprehensive type of coverage; it is difficult to achieve and is reserved
for critical sections of code.
In addition to path coverage, the Linear Code Sequence And Jump (LCSAJ) technique can be used
to measure code coverage. It is a variation of path coverage in which only sub-paths of the
program that execute in a linear manner at run time are exercised.
http://www.hcltech.com/blogs/engineering-and-rd-services/structure-based-or-whitebox-testing-techniques
https://www.istqb.guru/how-to-calculate-statement-branchdecision-and-path-coverage-for-istqb-exam-purpose/
http://shailajakiran-testing.blogspot.in/2007/11/statement-coverage-decision-coverage.html
Structure-based measures and related test design techniques are described in BS 7925-2.
Experience based software testing technique [Experience based techniques]
When specifications are not available, testers rely on past experience and skills to design
test cases. This is called experience-based testing; skills and intuition are used to locate defects.
Experience-based testing is also called ad-hoc or reactive testing because the specifics of the tests
are defined while the software is being tested.
When executing this kind of test, two key factors must be watched:
1. Managing time and effort – Since these tests are open ended, testers have to run them in a
"time-boxed" manner.
2. Tracking test coverage – To track coverage, testers need test charters containing high-level
descriptions of the test cases.
1. Error guessing – The tester guesses the location of errors based on experience with previous
applications. A fault attack (or attack) is a directed and focused attempt to evaluate the
reliability of a test object by attempting to force failures to occur.
2. Exploratory testing – A combination of ad-hoc and structured testing: the process by
which the tester designs test cases, executes tests and logs test results based on a test
charter, within a time box.
Choosing software testing techniques
What is the most important criterion in deciding which testing technique to use?
Answer: the objective of the test.
1. The models used while developing the system – The choice of technique depends on the
software models used during specification, design and implementation. E.g. if the spec
describes various cases with different outcomes, the decision table testing technique
can be used.
2. Knowledge and experience of the tester – The experience and skill of the testers influence
the test strategy.
3. Potential defects – Based on past experience, if certain kinds of defects are known to exist,
experience-based testing can be used.
4. Test objective – The technique depends on the test objective: for functional
validation a use case based approach may be taken; if complete and rigorous testing is needed,
structure-based techniques can be used.
5. Documentation – If the specifications are not clear, experience-based techniques are used. If
the specifications are detailed and contain state diagrams, state transition testing can be
used.
6. Life cycle model – With an iterative life cycle model, exploratory testing may be appropriate;
with a sequential model, structure-based techniques can be used.
1. Risk – For safety-critical applications, structure-based testing is needed for
thorough testing.
2. Contractual requirements – There might be contractual obligations to perform certain
types of test, in which case a specific set of techniques needs to be adopted.
3. Type of system – The testing technique can be chosen depending on the type of
system or software.
4. Regulatory requirements – Regulatory guidelines influence technique selection. E.g.
regulations for medical software may mandate equivalence partitioning and boundary value testing.
5. Time and cost – Depending on time constraints, a combination of techniques can
be used.
[Chapter-5: Test management]
Test organization and Independence
Dependent and Independent software testing
Independent testing is more beneficial than dependent testing because it adds a level of
objectivity to the testing process.
Testers within the development team – Testers working under the project manager, associated
with the development team, i.e. integrated testers.
Test specialists – To implement a more independent test practice, employ specialist testers such
as usability, security and certification testers.
Outsourced testers – The highest level of independence is achieved when a third-party
testing team tests the application; this is also the most expensive option.
Benefits of independent testing include:
1. Difference in perspective – Independent testers view the product without bias and can
identify defects that developers miss. The developer's perspective is that of a
creator, which tends to lead to overlooking critical defects.
2. Verification of assumptions – Independent testers verify the assumptions made during
the specification and implementation phases of the test process, so defects are not
reported against invalid assumptions.
3. Different set of assumptions – Independent testers make a different set of assumptions
than developers, which helps detect hidden problems in the application. E.g. a developer
might consider 75% of the software's performance measurable, while testers try to
measure 100% of it to detect defects.
In addition to the dependent and independent approaches, there is one more approach:
Agile approach – Here quality assurance is more integrated with the development team, and
developers are aware of their responsibility for quality. The agile testing approach keeps the
overhead of communication and coordination minimal.
A test leader is a skilled professional who manages the testing activities and their resources.
A person who performs testing, such as system or component testing, is known as a tester.
A test role is not the same as a test job. A role can be defined as one or more
responsibilities to which an individual is assigned, while a test job is what an
individual is employed to do. A job may consist of one or more roles; e.g. a tester can also
be a test leader.
Test leader:
Test leaders manage the test team; they plan, monitor and implement testing activities
and tasks. They may also be called test managers, test coordinators, test team leaders or test
program managers. Test leaders must have the mindset and training to plan, monitor and control
testing effectively.
Tester:
Basic qualifications a tester must have include the ability to communicate effectively and to
prepare and present reports efficiently.
1. Application – A tester should be aware of the specifications, benefits and limitations of the
application being tested.
2. Technology – A tester must be able to determine the important functions and features of
the system under test, its intended behavior, the issues it resolves and the processes it
automates.
3. Testing – Being accountable for test execution, testers should be expert in all the test
processes and techniques used to evaluate software.
1. Creating test plans and test designs – Developing and reviewing test plans is a prime
task of a tester. After the test plan outlines the testing process, the tester creates the test
designs, test conditions, test cases, test procedures and test data.
2. Analyzing test specifications – Testers need to analyze, review and assess specifications
of the application such as user requirements, design specifications and test models.
3. Setting up the test environment – Coordinating with system administration and
network management to set up the environment is another task.
4. Implementing tests – Testers implement test suites at all test levels in a test
environment.
5. Reviewing tests
6. Using testing tools
7. Automating tests
8. Measuring performance
9. Performing peer review
Test planning and estimation
Software test planning:
The software test planning process includes organizing test activities, identifying the
resources and assigning responsibilities to them. The test procedures, strategy, schedule and
deliverables also need to be listed at the planning stage. The test plan document produced then
acts as a tool for the software testing process.
The IEEE 829 Standard for Software Test Documentation provides a template with a standardized
method for performing tests, and helps you write, record and track tests throughout the life
of a project.
The IEEE 829 standard test plan template includes 16 sections.
Master test plan –
Test plans can be created at various levels, and these plans are derived from the master test plan.
The various level plans are:
1. Acceptance test plan – Acceptance testing ensures requirements are met. This planning
takes place early in the product life cycle, after the high-level requirements are
defined.
2. System test plan – System testing is a comprehensive test of the performance and reliability
of the software. A system test plan is based on the requirement and design
specifications, in accordance with the master test plan.
3. Integration test plan – Integration testing tests the interaction of system components and
their cohesive functioning. Integration test planning can start once the software design
has stabilized.
4. Component test plan – Component testing tests individual units of source code.
Factors that influence the test planning process and test plan:
1. Product – Product knowledge helps in performing efficient testing. Any risk that could cause
the product to fail should be understood at an early stage of product development.
2. Process – Test planning is influenced by factors affecting the test process, such as the
availability of testing tools. Using these tools can reduce the manual effort required
to execute the tests.
3. People – This factor includes the skills of the individuals on the test team and their
relevant experience with similar software products and testing projects.
4. Approach – Different approaches are required for testing different types of software
products. E.g. testing a banking application is more critical than testing a basic website.
Test planning involves test estimation, test strategy, and entry and exit criteria:
To obtain an accurate test estimate, the software test project must be analyzed; this helps
in judging whether the available resources, test environment and funding are adequate for the
project, and in mitigating risk.
A test approach driven by risk influences the success of testing.
Before delivering the product, it must be ensured that the necessary exit criteria are met.
Test estimation
To estimate the overall cost of a project or its duration, two approaches are used:
1. Metric-based – Relies on test data (metrics) for estimation, e.g. the number of test
cases written and executed, the time required to run them and the defects found can be
used to calculate the estimate.
2. Expert-based – Uses experts to split the test process into several tasks; summing the
effort estimated for each task gives the cost of the whole project. With their guidance
you can determine the test scope and the risks involved. These experts may be technical,
analytical or business experts.
These approaches can be combined. E.g. experts split the test process into several tasks,
the test duration and number of test runs are identified from past records, and the project
is then estimated from these figures.
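As an illustrative sketch (extra material; all figures below are invented), a combined estimate might be computed like this:

```python
# Expert judgement splits the work into units; historical metrics supply
# the per-unit figures. All numbers are made up for illustration.
test_cases = 120          # expert-based: scope split into test cases
hours_per_case = 0.5      # metric-based: average effort from past projects
expected_defects = 30     # metric-based: defects found on similar releases
hours_per_defect = 1.5    # metric-based: average retest/confirmation effort

estimate = test_cases * hours_per_case + expected_defects * hours_per_defect
print(f"Estimated test effort: {estimate} hours")  # 60.0 + 45.0 = 105.0 hours
```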
Test strategy
A test strategy defines the test levels and their corresponding activities within the
project. With a strategy-led approach, you understand how to perform a test, identifying the
requirements of the tests from beginning to end. The strategy applied also helps to identify the
risk analysis to be carried out, the design techniques and the exit criteria to be applied.
Test strategies fall under either preventive or reactive approaches, and these can be combined
into one strategy. Blending strategies depends on many factors:
1. Risks
2. Skills
3. Objectives
4. Regulations
5. Product
6. Business
Entry criteria establish the conditions and timing for a given test activity to
commence. Starting a test at an inappropriate time can result in a loss of time and resources.
Exit criteria identify inadequacies and incomplete testing tasks during project
execution. Testing is successfully completed when the exit criteria are met. Exit criteria can cover:
1. Defects
2. Coverage
3. Tests
4. Quality
5. Budget
6. Schedule
7. Risks
Exit criteria vary by test level; e.g. component testing exit criteria differ from
system testing exit criteria.
Software test progress Monitoring and control
Data about testing activities is collected to assess whether the activities are progressing as
planned. This data is known as metrics.
Metrics are collected at the end of each test level, which helps verify that testing activities
adhere to the allocated schedule and budget.
1. Percentage of work done – The number of test cases prepared by the tester against the
time taken. It includes the effort and time spent on preparing the test environment.
2. Performance of test cases – Determined by the execution of test cases against time and
the percentage of successful executions.
3. Defect information – There are many defect-related metrics, such as defect density:
the ratio of the number of defects identified to the size of the component or system.
Other metrics include the number of defects fixed and the number of times a fixed
component or system is retested. When the gap between the number of defects opened
and the number closed narrows sharply over a time frame, the product is approaching
fitness for release.
4. Extent of testing – Related to the approach of testing, ensuring each test progresses as
planned. E.g. testing a component thoroughly through every code segment is one measure;
testing only the potentially risky parts of the code is another. The extent of testing also
varies with test levels.
5. Confidence level of testers – When testers' confidence in the quality of the product is high,
serious issues may be overlooked; when it is low, unnecessary tests may be run. A balanced,
bias-free approach should be ensured. The ratio of testing effort to defects found gives an
indication of the testers' confidence level.
6. Deliverable dates and testing costs – This metric collects data on delivery dates and
testing costs consumed, to determine whether each test level can be completed on time
and within budget. If not, testing may be stopped, the scope modified, or the original
plan retained.
These metrics can be recorded using various tools, e.g. the IEEE 829 test log template,
spreadsheets or bar graphs.
Failure rate – The ratio of the number of failures of a given category to a given unit of measure,
e.g. failures per unit of time or failures per number of transactions.
Defect density – The number of defects identified in a component or system divided by the size
of the component or system.
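Both definitions can be computed directly; the figures below are invented for illustration (extra material):

```python
# Defect density: defects found divided by the size of the item.
defects_found = 12
size_kloc = 8.0                 # component size in thousand lines of code
defect_density = defects_found / size_kloc
print(f"Defect density: {defect_density} defects/KLOC")   # 1.5

# Failure rate: failures per unit of measure (here, per transaction).
failures = 4
transactions = 2000
failure_rate = failures / transactions
print(f"Failure rate: {failure_rate} failures/transaction")  # 0.002
```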
The metrics and data collected while monitoring a test level need to be shared with other team
members. To share this information in an easy-to-understand form, a test summary report is
prepared.
1. Test summary report identifier – A unique number to identify and track the report.
2. Summary – Documents a summary of the testing activities that were part of the test level.
3. Variances – Measures the deviations between the actual testing and the strategies,
specifications, procedures and guidelines mentioned in the test plan. A testing manager
uses this data for future improvements.
4. Comprehensive assessment – Specifies the extent to which the tester completed the test
activities and met the exit criteria at the end of the test. It also confirms whether the test
activity conformed to the test plan guidelines.
5. Summary of results – Includes a summary of the output and results of the testing process,
along with the types of defects and their resolution.
6. Evaluation – Each tested component or product is evaluated against the specified
requirements and metrics. E.g. integration tests between software components are recorded
based on expected and unexpected behavior, and their defect condition with respect to
performance and reliability helps to evaluate the software.
7. Summary of activities – Records a summary of the major testing activities and events, and
documents the total resources, time and budget allocated for testing.
8. Approvals – Records the parties responsible for approving the test summary report.
Signatures are obtained, acknowledging the understanding and approval of the test results.
Test Control
Any testing activity can be delayed by risks, e.g. the tester being unable to test the product
due to access rights, or delayed delivery of code. Measures can be taken to control the test. Test
control decisions are based on test metrics and the information in the test summary report. E.g.
if the test summary report indicates a deviation from the test plan and schedule, control measures
are taken to minimize the deviation.
1. Analyzing test-monitoring data – Test control decisions are taken using test monitoring
data, e.g. a high percentage of failing test cases may suggest adopting more stringent
guidelines for preparing test cases.
2. Changing test schedule and priority – To accommodate delays, the test schedule and the
priority of test activities are changed. E.g. if a crucial component is not available for testing
on the scheduled date, the tester is assigned to test available components or other critical tasks.
3. Setting acceptance criteria – By setting acceptance criteria for retesting, defects can be
minimized or eliminated. E.g. the tester accepts a component for retesting only if
development has tested it thoroughly and declared it defect free.
4. Reviewing risks – Testing can be controlled by reviewing risks. E.g. during the course of
testing, the risks associated with the product may change, and the test plan should adapt
accordingly. This helps to eliminate unnecessary testing.
5. Adjusting scope of testing – If new features are added or changes are made to components
at a later stage of development, the scope of testing needs to be adjusted accordingly.
Version control is used to manage configuration items as they go through iterations. Each
configuration item has a unique name, version number and related attributes. This helps to
identify and trace incidents back to test cases during the actual testing.
Risks and Testing
Software projects can face risks because of uncertainty, making risk management an
integral part of a project. You can minimize potential risks and maximize opportunities by
creating a risk management plan; it allows you to monitor your project.
1. Project risks – Unfavorable events that may impact the achievement of one or more
project objectives on time. E.g. exceeding development estimates, changes in the scope of
testing, inadequate training, budget constraints, third-party software dependencies.
2. Product risks – Any risk that impacts the quality of the product being delivered. E.g. an
inadequate test process, inefficient testers, failure of the software to meet its
specification, etc.
Project risks
Managing risks involves the following options (applicable to any risk, product or project):
1. Ignore – Ignore the risk if the probability of occurrence and the impact are low, or in
situations where nothing can be done to resolve it. E.g. if a tester goes on vacation,
other testers cover the gap.
2. Transfer – If a risk is highly probable, it can be transferred to a third party. This does not
eliminate the risk.
3. Contingency – Risk events are identified and a response plan is created that will be
executed if certain predefined events occur.
4. Mitigate – Some risks are mitigated to minimize their negative impact, e.g. an
experienced member leaving the team. Taking early action to reduce the potential impact
of a risk is more effective than reacting after the risk occurs.
1. Supplier issues – A supplier might fail to provide on time critical materials required
for the testing environment.
2. Organization issues – A shortage of skilled testers, inadequate training, outdated
development and testing practices, and failure of testers to communicate and respond to
test results.
3. Product changes – Changes in the product requirements may result in additional effort
and a delayed schedule, and can require modified or new test cases.
4. Test environment issues – Inadequate testing environments, such as limited network
connectivity, poor system configuration or unavailability of required applications,
result in a delayed project or misleading data.
5. Faulty test items – Test cases or test items that cannot be executed in the test
environment, e.g. test cases that do not target critical areas of the product. A smoke test
should be performed before starting the test process.
6. Technical problems – Risks that arise because of technical problems such as difficulty in
defining test requirements, constraints that limit the effectiveness of test cases, the
quality of the product design, or highly complex systems.
Product Risk
Instead of concentrating on all features equally (conventional testing), risk-based testing can
be conducted. In this approach, features that are more likely to fail and features that have a big
business impact are given high priority; likelihood of failure and business impact are the two
classifications of risk level.
o When two features have the same risk level, the impact of failure acts as the
tie breaker.
o Alternatively, likelihood and impact can be multiplied to obtain a risk priority value.
o Global attributes can also be included as part of the risk analysis table.
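The multiplication mentioned above can be sketched as follows (extra material; the feature names and scores are invented):

```python
# Risk priority = likelihood of failure x business impact; features are
# then tested in descending priority order. Scores are illustrative.
features = {
    "payment processing": {"likelihood": 4, "impact": 5},
    "report export":      {"likelihood": 4, "impact": 2},
    "login":              {"likelihood": 2, "impact": 5},
}

for name, risk in features.items():
    risk["priority"] = risk["likelihood"] * risk["impact"]

# Test the highest-risk features first
ordered = sorted(features, key=lambda n: features[n]["priority"], reverse=True)
print(ordered)  # ['payment processing', 'login', 'report export']
```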
Basic incident report in Software testing
When software produces a result other than expected, it is called an incident. If the incident
causes the software to malfunction, it is categorized as a defect.
An incident report enables incidents to be categorized. Its purposes include:
1. Providing feedback to developers – Complete and adequate information in the report helps
the developer to fix the defect.
2. Tracking the quality and progress of testing – Incident reports give the test manager or
test lead an overview of product quality, e.g. numerous bugs indicate poor software quality.
3. Improving the test process – The test process can be improved by determining the phase
in which each incident or defect was detected.
A typical incident report has four main sections, as per the IEEE 829 standard template for
incident reports.
Defect Detection Percentage (DDP) compares field defects with test defects and is an important
metric of the effectiveness of the test process.
DDP = defects (testers) / (defects (testers) + defects (field))
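A worked example with invented figures (extra material): if testers find 90 defects before release and 10 more surface in the field, DDP is 90%.

```python
# Worked DDP example with made-up figures.
defects_testers = 90   # defects found by testers before release
defects_field = 10     # defects reported from the field after release

ddp = defects_testers / (defects_testers + defects_field)
print(f"DDP = {ddp:.0%}")  # DDP = 90%
```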
The stages involved in the life cycle of an incident report are shown below.
At each stage of the incident life cycle there is a distinct owner responsible, except in the
Rejected, Deferred and Closed states.
A defect tracking tool can be used to track defects, analyze them and prepare metrics. The major
attributes of a defect tracking tool include:
1. User-friendly interface – The interface should enable users to report information quickly
and easily.
2. Customizable fields – Catering for various disciplines of user, who can customize fields to
their needs.
3. Discrete data analysis – Managers can easily extract data in the form of graphs, charts or
tables for analysis.
4. Dynamic incident log – A dynamic log for each incident recorded in the report enables
tracking the progress of the incident through the stages of its life cycle.
5. Organization-wide accessibility – The tool should be accessible across the organization at
any time, without users having to wait while someone else is accessing it.
[Chapter-6: Tool support for testing]
Tools support for testing [Types of test tools]
There is tool support for many different aspects of software testing.
Tools used in software testing can be classified according to the activity they support. Tools
designed for management of the testing process come under the management classification.
1. Test management tools – Used for managing the testing process. These tools are used by
expert testers or managers during system or acceptance testing, and provide support for:
a. Managing testing activities and tasks
b. Managing test procedures
c. Providing management progress reports based on metrics
d. Interfacing with other tools
2. Requirement management tools – These tools are used to store and manage requirements
and to ensure their consistency and integrity during the testing life cycle. They are used to:
a. Capture and store requirements
b. Identify defects in requirements, e.g. ambiguous statements
c. Identify any changes to other items
d. Calculate requirements coverage metrics
3. Incident management tools – These tools manage defects and incidents, such as
anomalies, enhancement requests and suggestions, recorded during testing. They are
used to create incident reports; incidents can also be searched, analyzed and presented
as management information, used in planning and estimating new projects. An incident
report contains details of all the stages that an incident passes through (e.g. opened,
rejected, duplicate, deferred, assigned, ready for confirmation testing, closed). Incidents
are stored in a database with relevant fields such as severity, current status and the
people involved.
4. Configuration management tools – These tools keep track of the versions of the software
being tested along with related information on the testing setup. They are highly useful
when complex systems undergo changes, and mapping this information allows
traceability. These tools allow you to:
a. Store information
b. Trace testware to versions and vice versa
c. Perform other miscellaneous activities
1. Review process support tools – Track the review process and log all review details.
2. Static analysis tools – Help developers identify programming-language-related issues and
enforce coding standards, e.g. syntax errors, invalid code structure, references to
variables with null values.
3. Modeling tools – Used by developers during the analysis and design stages of product
development to validate models of a system or software. They are used before dynamic
tests are run, to find omissions, inconsistencies and defects early in the product
development life cycle.
1. Test design tools – Help in generating test inputs. When an automated oracle or test basis
is available, test design tools can also help generate test cases with expected results. There
are different types of test design tools with varying levels of automation, e.g. a screen
scraper captures UI elements and generates a test design, but unless the tool has access
to an oracle it may not know the expected outcome of an action. Another example is a
test design tool bundled with a coverage tool. Features of test design tools include
support for:
a. Generating test input values from
i. Requirements,
ii. Design models
iii. Code,
iv. Graphical user interface
v. Test conditions
b. Generating expected results, if an oracle is available to the tool.
2. Test data preparation tools – Help you to collect or generate data, such as fictitious names
and addresses, for creating test cases. They are useful when large volumes of data are
needed for testing. Test data preparation tools help in:
a. Generating new records with random related data
b. Sorting or rearranging existing records
c. Extracting selected data records from files or databases
d. Using a template to construct a large number of similar records for volume testing
e. Modifying ("massaging") data records for data protection, so that they are not
connected to real people
A test script is essentially program code, and a programmer is required to write or modify
scripts. A test script is used to test software and compare the actual results to the expected
results. Some tools capture user actions and create corresponding code to form a test script.
An unmanageable number of tests can be cut down by risk analysis; using a technique such as
orthogonal arrays can also help.
1. Test execution tools – Some test execution tools, also called capture/playback tools,
automatically run test scripts and record the results in a test log. These tools enable you
to avoid manual errors both during execution and during comparison.
2. Test harness/unit test framework tools – These two types of tool are grouped together
because both are variants used to support development activity, e.g. testing components.
a. A test harness provides stubs and drivers, designed to call, supply input to, or
accept output from the software module being tested.
b. Unit test framework tools provide support for object-oriented software; "xUnit"
tools exist for many languages and can be used to create test harnesses, e.g.
NUnit for .NET and JUnit for Java.
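As extra illustration (not from the syllabus), Python's built-in unittest module follows the same xUnit pattern, so a minimal test case looks like this (the divide() function is invented):

```python
# Minimal xUnit-style test case using Python's unittest framework.
import unittest

def divide(a, b):
    return a / b

class DivideTest(unittest.TestCase):            # one test case class
    def test_whole_numbers(self):
        self.assertEqual(divide(10, 2), 5)      # assert on the expected result

    def test_division_by_zero(self):
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)

# Run the suite programmatically rather than via unittest.main()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DivideTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

JUnit and NUnit organize tests the same way: a test class groups test methods, each making assertions about one behavior.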
3. Test comparators – Compare the software's actual output to the expected output. E.g.
when testing the accuracy of a data transfer from one database to another, test
comparators ease the manual comparison job. The two types of test comparison are:
a. Dynamic comparison – Comparison performed in real time, during execution.
b. Post-execution comparison – Comparison after the test has finished; used for
comparing large volumes of data.
4. Coverage measurement tools – Measure the percentage of quantifiable structural
elements, or "coverage items", that are covered by a given test suite. The process of
identifying the coverage items at component level is called "instrumenting the code".
The steps used are:
a. Instrument the code
b. Test the instrumented code
c. Identify the coverage items that were exercised
d. Remove the instrumentation from the code
5. Security tools – Used to test a system's resistance to security threats such as computer
viruses, worms or denial-of-service attacks. The tools are used to:
a. Identify virus attacks
b. Identify weak passwords
c. Identify open ports and points of attack
d. Simulate different types of external attack
e. Detect intrusions such as denial-of-service attacks
1. Dynamic analysis tools – These tools are used to analyze the software code while
running. Developers use this tools during component and integration testing. This tool
can,
a. Identify memory leaks
b. Identify pointer arithmetic errors such as null pointers.
c. Detect time dependencies
d. Report status of software during execution
e. Monitor allocation/use/de-allocation of memory
E.g. a web spider tool helps to identify dead links in a website.
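As a small illustration of the “monitor allocation/use of memory” capability, Python’s standard `tracemalloc` module can act as a lightweight dynamic analysis aid while code runs (the `build_table` function is a made-up example):

```python
import tracemalloc

def build_table(n):
    # Deliberately allocates a large list so the memory growth is visible
    return [str(i) * 10 for i in range(n)]

tracemalloc.start()                    # begin dynamic analysis of allocations
before = tracemalloc.take_snapshot()
table = build_table(10_000)
after = tracemalloc.take_snapshot()

# The biggest difference between snapshots points at the allocating line
top = after.compare_to(before, "lineno")[0]
print(f"Largest growth: {top.size_diff} bytes")
tracemalloc.stop()
```

A genuine memory leak would show up as growth that persists across repeated snapshots even after the data is supposedly released.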
2. Performance testing tools – These tools test the performance of a system under
different load or usage patterns, measuring system characteristics such as response time
and mean time between failures. They perform:
a. Load testing – testing whether the system can cope with the expected number of
transactions. Execution logs are used to generate reports and graphs.
b. Volume testing – testing whether the system can handle a large volume of data.
E.g. when a file contains many records, testing whether the system can open and
display all of them.
c. Stress testing – testing whether the system can handle more transactions than it
was designed for, by combining load and volume testing.
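A minimal load-testing sketch: drive concurrent transactions against the system under test and record response times. Here `handle_request` is a hypothetical stand-in for the real system; an actual tool would hit a live server and generate reports and graphs from its logs:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    """Hypothetical system under test; replace with a real service call."""
    time.sleep(0.01)  # simulate ~10 ms of processing time
    return f"ok:{payload}"

def timed_call(i):
    start = time.perf_counter()
    handle_request(i)
    return time.perf_counter() - start

# Drive 50 "transactions" through 10 concurrent workers (the expected load)
with ThreadPoolExecutor(max_workers=10) as pool:
    times = list(pool.map(timed_call, range(50)))

print(f"mean={statistics.mean(times)*1000:.1f} ms, "
      f"max={max(times)*1000:.1f} ms")
```

Raising `max_workers` or the transaction count beyond the system’s design limits would turn this load test into a stress test.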
3. Monitoring tools – These are used to monitor the status and performance of systems in
use. They are deployed alongside the system in order to give the earliest warning of
problems and to improve service. They cover IT infrastructure such as servers, databases,
networks, internet usage and applications.
Potential risks when introducing a tool into an organization include:
1. Underestimating the time, cost and effort when first introducing the tool
2. Expecting too much from the tool
3. Underestimating the time and effort needed to derive benefits from the tool
4. Over-reliance on the tool
5. Underestimating the effort required to maintain the test assets generated by the tool
6. Interoperability issues between tools
7. Underestimating the skill needed to create good tests with the tool
The scripting techniques used by test execution tools include the following (1 and 2 are advanced-level techniques):
1. Data-driven scripts – a control script reads test data stored in a file or spreadsheet
2. Keyword-driven scripts – the data file also holds keywords describing the actions to
take; the control script reads the keywords and invokes the corresponding scripts
3. Linear scripts – capturing manual test actions and then replaying them
4. Shared scripts – scripts that can be reused, i.e. called by other scripts
5. Structured scripts – scripts that use programming constructs such as selection and iteration
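A data-driven script (technique 1) can be sketched as follows: the control logic is written once, while the test inputs and expected outputs come from an external data table. The `add` function and the inline CSV are illustrative only; in practice the data would live in a spreadsheet or .csv file:

```python
import csv
import io

def add(a, b):
    """Hypothetical function under test."""
    return a + b

# In a real data-driven setup this table lives in an external file,
# so testers can add rows without touching the control script.
TEST_DATA = """a,b,expected
1,2,3
10,-4,6
0,0,0
"""

failures = []
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = add(int(row["a"]), int(row["b"]))
    if actual != int(row["expected"]):
        failures.append((row, actual))

print(f"{len(failures)} failure(s)")
```

A keyword-driven script (technique 2) extends this idea: the data file would also name the action to perform (e.g. an "operation" column), and the control script would dispatch on that keyword.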
Before purchasing a tool, its advantages and disadvantages need to be analyzed.
To evaluate an organization’s maturity for the deployment of a tool, the following steps can be used:
1. Gain knowledge
2. Assess compatibility
3. Decide on process modifications
4. Decide on how to ensure people make optimum use of the tool
5. Evaluate the benefits of the tool
6. Determine other details for using the tool
Foundation Level professionals should be able to:
Use a common language for efficient and effective communication with other testers and project
stakeholders.
Understand established testing concepts, the fundamental test process, test approaches, and
principles to support test objectives.
Design and prioritize tests by using established techniques; analyze both functional and non-
functional specifications (such as performance and usability) at all test levels for systems with a low
to medium level of complexity.
Execute tests according to agreed test plans, and analyze and report on the results of tests.
Write clear and understandable incident reports.
Effectively participate in reviews of small to medium-sized projects.
Be familiar with different types of testing tools and their uses; assist in the selection and
implementation process.