
ISTQB Handout

History of modification

S.No   Person   Date        Modification        Color code
1      S4M      14-Dec-15   Content Creation    Text
2                                               Text

1|Page
Objective:
The Foundation Level qualification is aimed at professionals who need to demonstrate practical
knowledge of the fundamental concepts of software testing. This includes people in roles such as
test designers, test analysts, test engineers, test consultants, test managers, user acceptance
testers and IT Professionals.

Note: This document was prepared from web-based training. After comparing it with the book and
merging the book's information, the noticeable changes in the document are:

1. Italic lines in this document are not part of the standard ISTQB book, i.e. they are extra.
2. Sentences in the "3DS" font mark information that was missing from this document and has
been added from the textbook.

The Foundation Level exam is characterized by:

 40 multiple-choice questions
 a scoring of 1 point for each correct answer
 a pass mark of 65% (26 or more points)
 a duration of 60 minutes (or 75 minutes for candidates taking exams that are not in their native or
local language)

Exam questions are distributed across K-levels, which represent increasing depths of
knowledge, as shown in the following table:

Exam         Number of questions per K-level
             K1    K2    K3    K4    Total
Foundation   20    12     8     0     40

Exam questions are distributed across Syllabus chapters as shown in the following table:

Exam         Number of questions per chapter
             Ch.1  Ch.2  Ch.3  Ch.4  Ch.5  Ch.6  Total
Foundation      7     6     3    12     8     4     40

What are K-levels?


A K-level, or cognitive level, is used to classify learning objectives according to Bloom's
revised taxonomy [Anderson 2001]. ISTQB® uses this taxonomy to design its syllabi and
examinations.

Questions with different K-levels may be awarded with different pre-defined scores to reflect their
cognitive level.

The Foundation and Advanced exams cover four different K-levels (K1 to K4):

• K1 (Remember) = the candidate should remember or recognize a term or a concept.
• K2 (Understand) = the candidate should select an explanation for a statement related to the
question topic.
• K3 (Apply) = the candidate should select the correct application of a concept or technique and
apply it to a given context.
• K4 (Analyze) = the candidate should be able to separate information related to a procedure or
technique into its constituent parts for better understanding, and distinguish between facts and
inferences.

Introduction to software testing [Chapter-1: Fundamentals of testing]
The necessity of software testing

Software systems form an integral part of our daily lives, so software failures can
prove expensive and result in loss of time, effort and reputation. For critical software, failures
can cause major financial loss, injury or even loss of life.

Faulty software is the result of errors (or mistakes) made while designing and building the
software. Most software issues can be divided into three categories:

1. Error – an error (or mistake) is an action performed by a person that leads to an
incorrect result, e.g. wrong usage of the software, or a mistake made by a person in
the process of designing and building the software.
2. Defect – a defect, also known as a bug or fault, is a flaw in a component or system that
can cause the component or system to fail to perform its required function.
3. Failure – a failure is a deviation of the component or system from its expected
delivery, service or result. A defect can cause the software system to fail.
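A minimal sketch of the error/defect/failure chain (the averaging function is a hypothetical example, not from the syllabus):

```python
# The ERROR is the programmer's mistake, the DEFECT is the resulting flaw
# in the code, and the FAILURE is the wrong result observed at run time.

def average(numbers):
    # Defect: the programmer's error was dividing by a hard-coded 2
    # instead of len(numbers).
    return sum(numbers) / 2

def average_fixed(numbers):
    # Corrected version without the defect.
    return sum(numbers) / len(numbers)

# Failure: the defect causes a deviation from the expected result.
print(average([1, 2, 3]))        # observed: 3.0
print(average_fixed([1, 2, 3]))  # expected: 2.0
```

Note the defect sits silently in the code until the failing execution makes it visible, which is why the three terms are kept distinct.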

Root cause analysis is used to reduce the likelihood of an error occurring again in future. It
involves tracing a failure of the system all the way back to its root cause, and brings
overall improvement in the quality of those systems.

Defects can arise in four stages of a product life cycle: analysis, design, development and
implementation. The extent to which defects are removed in the phase in which they were
introduced is called phase containment. The cost of investigating and correcting defects
rises the later in the life cycle they are found.

False-fail result (false positive result) – a test result that reports a defect although no
such defect actually exists in the test object.

False-pass result (false negative result) – a test result that fails to identify the presence of a
defect that is actually present in the test object.

Quality is how well a component, system or process is designed, i.e. the degree to which a
component/system meets specified requirements, needs and expectations. Testing helps to
improve confidence that a product meets quality criteria. Testing effort is split between
verification and validation.

1. Verification – evaluates a product to determine that it meets the requirements set.

2. Validation – ensures the product meets user needs, i.e. is fit for the purpose it was built.

What is software testing? The standard that gives definitions of testing terms is: BS7925-1

The main objective of testing is to discover defects; other objectives include preventing
defects and obtaining a measure of the quality of the software.

Types of software testing include (other types exist):

1. Static testing – review of documents and source code.

2. Dynamic testing – testing the code at run time.

Software test objectives

Software testing has three clearly defined objectives:

1. Uncovering defects
2. Gathering confidence about the level of quality and providing information for decision
making
3. Preventing defects

General software testing principles provide a standard framework for testers to conduct tests
and discover defects. They include:

1. Testing shows the presence of defects – but cannot prove that there are none.
Testing reduces the probability of undiscovered defects remaining in the software, but
even if no defects are found, this is not a proof of correctness.
2. Exhaustive testing (complete testing) is impossible – testing everything is not feasible
except in trivial cases. Risk analysis and priorities should be used to focus testing efforts.
3. Absence-of-errors fallacy – finding and fixing defects does not help if the system built is
unusable and does not fulfill user needs and expectations.

Applied software testing principles provide a standardized format for creating test plans and
act as a guide to effective testing:

4. Early testing – testing activities should start as early as possible in the software life cycle
and be focused on defined objectives.
5. Defect clustering – testing effort is focused according to the defect density observed in
modules. This is based on the Pareto principle, which suggests that 80% of defects are
caused by 20% of the modules.
6. Pesticide paradox – repeating the same test cases won't find new defects; test cases need
to be regularly reviewed and revised to find new defects.
7. Testing is context dependent – testing is performed according to the context of the software,
e.g. testing safety-critical software differs from testing an e-commerce site.

Process activity and psychology of testing
Fundamental software test process

Five fundamental phases of the software testing process:

1. Test planning and control
a. Test planning:
i. defines the scope of testing
ii. defines the exit criteria of testing
iii. specifies the schedule of activities and the resources required
iv. lists the risks and contingency plans.
b. Test control compares actual progress against the test plan, based on test monitoring
data and reports.

2. Test analysis and design
a. In test analysis, the test basis (requirements, architecture, design and interfaces) is
reviewed and the testability of the test basis and test objects is evaluated. Test objectives
are converted into test conditions, and the test conditions to be covered are chosen.
b. In test design, test cases are designed from the test conditions using test design
techniques, along with the necessary data and test environment to support them.

3. Test implementation and execution
a. Test implementation includes:
i. developing and prioritizing test procedures
ii. creating test data
iii. creating test suites from the test procedures
iv. ensuring the test environment is set up properly.
b. Test execution includes:
i. executing the planned test cases and reporting discrepancies
ii. logging the test results
iii. confirmation testing, to confirm a fix
iv. regression testing, to confirm the fix didn't affect other areas.

4. Evaluating exit criteria and reporting
a. The test execution results are analysed against the objectives defined in the
test plan.
b. The analysis result is submitted to stakeholders as a test summary report.

5. Test closure activities – consolidate and document data from the completed testing,
e.g. verifying that all reported incidents are closed. Records are archived and handed over
to the maintenance team for regression testing.

Exit criteria and test closure

Exit criteria evaluation types:

a. Coverage criteria – decide which test cases must be included in the exit
criteria evaluation process.
b. Acceptance criteria – check whether the software under test has passed or
failed overall.

Exit criteria evaluation includes a number of tasks; the three important ones are:

1. Check test logs – check the test log and identify the defects reported and fixed.
2. Estimate additional requirements – depending on the test log analysis, additional test
requirements are determined. This is also based on business risks.
3. Prepare test summary report – a summary report is prepared for the stakeholders of the
project, which helps in critical decisions. This document summarizes the testing activity and
evaluates the test results against the exit criteria.

Test closure ensures the completion of all test activities and the sign-off of the end product, e.g.
delivering the software to the client. Test closure activities include:

1. Checking deliverables
2. Archiving testware
3. Submitting testware to the maintenance team
4. Evaluating the overall test process

Psychology of software testing

Four basic levels of independence:

1. Software developer
2. Peer reviewer
3. Internal tester
4. Third party reviewer

For good communication within the team during the test process, it is important to communicate:

1. defects and incidents neutrally
2. how the end user will benefit
3. the need for collaboration.

Contrasting software testers and developers – traits of testers with the right mindset include:

1. Curiosity – about how the software works and the way it works
2. Professional pessimism – a good tester expects to find defects and failures
3. A critical eye – if in doubt, it's a bug
4. Attention to detail – a good tester notices even the smallest details
5. Experience – a good tester leverages experience during testing
6. Good communication skills – the ability to communicate test results and improve quality

[Chapter 2: Testing throughout the software life cycle]
Software development models and testing
Software development models

Depending on the availability of time, resources, the allocated budget and the scope of the
project, a development strategy is decided; these strategies are known as software development
models.

Software development models are of two types:

1. Sequential – this model has distinct activities, each initiated after completion of the
preceding activity, e.g. the waterfall model.
2. Iterative – in this model, developers build a product using a series of iterative steps.

The waterfall model is structured as follows:

1. Identify the client requirements and record them in a requirement specification document.

2. Identify the requirements the product should satisfy, comprising functions and features.
The output of this activity is the functional specification.

3. The team creates an overall design which outlines both the external design and the internal
components. This information is saved in the technical specification document.

4. The functions of each component are identified, along with the methods to create the
components. The program specification document stores the detailed design.

5. Using the detailed design, the team writes code for each component and integrates the
components to create the product.

6. After developing the product, testing is done to ensure the product meets the client
requirements.

The main drawback of this model is that testing occurs towards the end of the development
life cycle, when it is hard to fix bugs because the code is complex. New bugs can be
introduced at this stage.

The V-model is a sequential model that improves upon the waterfall model. As the product is
progressively developed at each life-cycle stage, testers also work at the various stages on the
test activity related to that stage. Testing is divided into four levels:

1. Component testing – each component of the product is tested for defects. The detailed
design of the product is used to create the component testing plan.
2. Integration testing – tests the inter-relationships between components of the
product, e.g. interactions between components and with computer hardware/software.
The overall design of the product is used to create the integration test plan.
3. System testing – developers integrate the components and build functional software
that exhibits the features and functions. Testers create a system testing plan based on the
system requirements and execute it.
4. Acceptance testing – conducted by client representatives. The aim is to validate that the
product meets their requirements. Testers create the acceptance testing plan after the
client's requirements have been identified.

Even the V-model has a drawback: the product is verified against the client requirements only
towards the end of the development process. Fixing bugs or adding missing features at this stage
is difficult, expensive and time consuming.

Iterative or incremental development model:

The iterative model eliminates this drawback of the V-model. In this model, developers build a
product using a series of iterative steps. Examples of iterative or incremental development
models are prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP)
and agile development (e.g. Scrum).

Dynamic Systems Development Method (DSDM) is a refined RAD with controls to stop the
process getting out of control.

Each iteration consists of four tasks:

1. Identify requirements

2. Create a design

3. Develop code

4. Test code

Testers can test the product while it is being developed, which aids in identifying bugs easily and
accurately. Also, client representatives can take part in the development and suggest changes to
the product during development.

In the iterative model:

1. Formal documents such as a requirement specification are not prepared.
2. Not all client and system requirements have to be identified up front.
3. The schedule and cost for the project are based on the iterative steps.
4. Iterations are repeated until the product meets all the client requirements.

Two significant drawbacks of the iterative model:

1. Absence of formal documentation – requirements can't be verified accurately, so
testers write functional tests and ask developers to create code that can pass them.
2. Increased testing time and cost – e.g. a developer can modify an approved (i.e.
tested) feature; to catch such actions, testing has to spend more effort. This can be a
drawback for a small-scale project.

In the incremental development model, incremental fundamentally means "add onto":
incremental development helps you improve your process.

In the iterative development model, iterative fundamentally means "re-do": iterative
development helps you improve your product.

Rapid Application Development

RAD is formally the parallel development of functions and their subsequent integration. This
gives the customer something to evaluate early and provide feedback on.

Agile Development

Agile software development is based on the iterative incremental development model, where
requirements and solutions evolve through collaboration between self-organizing,
cross-functional teams.

Scrum is widely adopted as the management approach, and XP (extreme programming) is used as
the main source of development ideas. Their characteristics are:

1. Business-story-based functional specification
2. Business representatives are part of the development process during each sprint
3. Open to changes in requirements
4. Shared code ownership, with testing closely integrated
5. Test-driven development, e.g. test cases first and then code development
6. Simplicity: building only what is necessary
7. Continuous integration and testing of code, at least once a day.

Benefits of agile methodologies:

1. Good quality code, since development is responsible for quality
2. Test-driven development
3. Involvement of business stakeholders
4. Simplicity of design, which is easy to test.

Challenges of agile development:

1. The test basis is less formal and subject to change.
2. Component testing done by developers leaves the tester little work, while system
testing or non-functional testing may not fit in a sprint.
3. Adaptability of testers to this methodology.
4. Time constraints and pressure on delivery are high.
5. Regression testing becomes complex at later stages.

Component and integration testing [Test Levels]

These test levels help to locate code-related defects and determine whether code units
interact with each other as intended.

Component testing helps to identify errors in each component, such as an object, program or
module of a software application. Formal documentation is not maintained to record defects;
defects are fixed as soon as they are detected.

The approach to component testing is the test-first approach, an iterative process (the XP model):

1. Create test cases (a specification document is used to create the test cases)

2. Develop code
3. Run tests
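The test-first cycle above can be sketched as follows (a minimal sketch; the PIN-validation component, its specification and all names are hypothetical, not from the syllabus):

```python
import unittest

# Step 2: minimal code written to satisfy the tests.
def is_valid_pin(pin):
    # Assumed specification: a PIN is valid if it is exactly 4 digits.
    return isinstance(pin, str) and len(pin) == 4 and pin.isdigit()

# Step 1: test cases derived from the specification, written FIRST.
class TestPinValidation(unittest.TestCase):
    def test_accepts_four_digits(self):
        self.assertTrue(is_valid_pin("1234"))

    def test_rejects_short_pin(self):
        self.assertFalse(is_valid_pin("123"))

    def test_rejects_letters(self):
        self.assertFalse(is_valid_pin("12a4"))

# Step 3: run the tests; in the test-first cycle they fail until
# step 2 makes them pass.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPinValidation)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests pass:", result.wasSuccessful())
```

In practice the three steps are repeated per small increment of functionality, which is what makes the process iterative.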

Not all components may be available during testing, so temporary components are created:

1. Stubs – stubs (also called "mock objects") simulate called components, e.g. if a
component calls an object that is not ready, you can use a stub in place of the object.
2. Drivers – a driver (also called a test harness or scaffolding) acts as a substitute for a
component that calls the component you are testing, e.g. a driver is used in place of an
unavailable button that calls the authentication system you're testing.
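The stub/driver distinction can be illustrated with a small sketch (the authentication component and user store here are hypothetical examples, not part of the syllabus):

```python
# A stub stands in for a component the code under test CALLS;
# a driver stands in for a component that CALLS the code under test.

def authenticate(username, password, user_store):
    # Component under test: relies on a user store that may not be ready.
    record = user_store.lookup(username)
    return record is not None and record["password"] == password

class UserStoreStub:
    # Stub: simulates the called component with canned answers.
    def lookup(self, username):
        if username == "alice":
            return {"password": "secret"}
        return None

def driver():
    # Driver: substitutes for the (unavailable) login button that would
    # normally call the authentication component.
    store = UserStoreStub()
    return [
        authenticate("alice", "secret", store),   # expected True
        authenticate("alice", "wrong", store),    # expected False
        authenticate("bob", "secret", store),     # expected False
    ]

print(driver())  # [True, False, False]
```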

Non-functional requirements are also tested at the component testing level, as is the robustness
of the component, e.g. how well the component responds to negative inputs.

Component integration testing (or integration testing) determines whether software
components interact with each other, as well as with hardware and other software, as intended.

Integration testing performed after system testing is known as system integration testing, which
analyses the interaction between a complete software product and other software systems.

Non-functional requirements are also tested during component/system integration testing.

There are two integration strategies:

1. Big-bang integration testing – integration testing performed on the fully integrated
system. This is successful only if the components have few and uncomplicated bugs.
2. Incremental integration testing – the opposite of the big-bang method: the system is
tested on a component-by-component basis, e.g. start by testing the interaction
between two components before integrating and testing additional components.
a. Top-down integration testing – test the external features of a software
application first, then the internal components by integrating them, e.g. start
testing from the GUI; components that are not ready are simulated with stubs.
The advantage of this method is that drivers need not be created.
b. Bottom-up integration testing – test components at the lowest level first.
Drivers are used to simulate the calls to these components. This method helps to
test the interactions between software components more effectively than the
top-down approach.
c. Functional incremental – integration and testing take place on the basis of the
functions or functionality documented in the functional specification.

Big bang
- Advantage: doesn't require stubs and drivers.
- Disadvantages: late identification of defects in the development process; time consuming to
test the entire system; difficult to identify the cause of a defect quickly.

Incremental
- Advantage: testing starts on a small scale, which increases the chance of isolating defects.
- Disadvantage: stubs and drivers need to be used, which is time consuming.

In most cases, incremental testing is preferable to big-bang integration testing because of its
advantages over the other process.

System and acceptance testing

System testing involves testing the application as a single entity after integration. It verifies
whether the application meets all the requirements set initially. It is conducted by a dedicated
testing team and involves assessing the application on the basis of its external features, not the
underlying code.

In system testing, the application is tested against requirements categorized as:

1. Functional – the features and functions of the application.

2. Non-functional – requirements not related to specific functionality but to quality
characteristics/attributes such as usability, efficiency and reliability.

Testing functional requirements starts with a specification based black box approach. Black box
testing is so called because it takes no interest in the internal structure of the system or
component.

White box or glass box testing is concerned with internal workings of a system.

The attributes and sub-attributes tested as non-functional requirements are:

1. Reliability – comprising robustness, fault tolerance, recoverability and compliance
2. Efficiency – comprising speed, resource utilization and compatibility
3. Usability – comprising comprehensibility, learnability, operability, appeal and
compliance
4. Maintainability – comprising analyzability, stability, changeability, testability and
compliance
5. Portability – comprising adaptability, compatibility, installability, replaceability and
compliance

Acceptance testing is conducted by representatives of the client. The testers verify whether
the application meets all the requirements, both functional and non-functional.

Acceptance testing determines whether an application:

1. meets all the specified requirements

2. is ready to be deployed
3. adversely affects existing applications and business operations.

Various types of acceptance testing:

1. User acceptance testing – prospective users test the application by checking whether
it allows them to perform their business tasks.
2. Operational acceptance testing – system administrators run this test, checking whether
the system withstands malicious attacks, can recover from failures and can be maintained
easily and efficiently, e.g. checking the backup data recovery system.
3. Contract acceptance testing – if a contract has been signed for the application's
development, acceptance tests are performed based on the details specified in the contract.
4. Regulation acceptance testing – to ensure that an application complies with
government standards and legal and safety regulations, the client conducts regulation-based
acceptance testing.

Two other types of acceptance testing:

1. Alpha testing – involves the team and a set of prospective customers, to ensure the
application meets all the specified requirements.
2. Beta testing – after successful alpha testing, the application is sent to another set of
prospective customers for beta testing, to simulate the real-time environment. Depending
on the reports, the application is fixed and released to market.

Alpha and beta testing are carried out on applications meant to be COTS (commercial
off-the-shelf) software. Since alpha testing is done at the developer's site, it is referred to as
factory acceptance testing, while beta testing, which simulates the real-time environment, is
referred to as site acceptance testing.

Depending on the place of acceptance testing, it can be:

1. Factory acceptance testing – client representatives drop by to test the product.
2. Site acceptance testing – the software product is sent to the client/site to perform
acceptance testing on site.

Acceptance testing can also be performed at other stages:

1. During component testing – verification that component behavior is as expected.

2. During installation or integration testing – COTS products are subjected to this testing
while being installed or integrated with other applications.
3. Before system testing – if a new feature is added to an existing application, you first
perform acceptance testing on the new feature and then system testing on the entire
application.

Software Test Types
Functional testing – tests the functionality of a selected component.

Non-functional testing – tests the behavioral or quantified characteristics of systems and
software, e.g. reliability, performance.

Structural testing – tests the structural aspects of the component/system.

Change-based testing – includes regression and confirmation testing; involves re-running
tests to ensure the software works correctly.

Functional and Non-Functional Software Testing

Functional testing is the process of testing a software product to determine its specified
behavior or functionality. Tests are performed for various quality characteristics as per ISO
9126:

Suitability – determines whether the product performs as expected for its intended use. It tests
the capability of the software to provide an appropriate set of functions to accomplish tasks and
user objectives.

Interoperability – tests the capability of the system to interact with other specified
components or systems.

Accuracy – involves ensuring that a faulty product does not leave the production line and cause
errors during beta testing.

Functional testing is also performed on the characteristics below:

Security – investigates functions relating to the prevention of unauthorized access to
software and data, whether intentional or unintentional.

Compliance – tests the system's adherence to specified criteria, such as standards,
conventions, regulations and laws.

Functional tests are derived from requirement/use case specifications. Because only the program
specification is considered, not the design or implementation of the program, this type of test falls
under black-box (specification-based) testing.

Two approaches to functional testing:

Requirement-based testing – designs tests based on the functional requirements specification of
the system, prioritizing requirements for testing based on risk criteria.

Business-process-based testing – use cases provide a basis for test cases from a business
perspective: day-to-day business scenarios and needs.

Non-functional testing can be performed at all test levels; the behavioral characteristics of
systems and software are tested and quantified on varying scales.

Various types of non-functional test:

Performance testing – tests the degree to which a system fulfills its specified function within
given processing-time and throughput-rate constraints.

Load testing – measures the behavior of a system with increasing load.

Stress testing – evaluates a system at and beyond the boundaries of its specified requirements.

Usability testing – tests how easily a user can perform a task.

Maintainability testing – tests how easily the product can be modified in future.

Reliability testing – tests how reliably a system performs over a given period of time, i.e. how
long the unbroken time frame is.

Portability testing – tests how easily a system can be transferred from one platform to another.
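As an illustration only, a crude performance/load check might look like the sketch below. The function under test, the load sizes and the 1-second budget are all assumed; real performance and load testing use dedicated tools and statistically sound measurements.

```python
import time

def process(items):
    # Hypothetical function under test.
    return sorted(items)

def measure(load):
    # Generate 'load' items of work and time one processing run.
    data = list(range(load, 0, -1))
    start = time.perf_counter()
    process(data)
    return time.perf_counter() - start

# Load-testing idea: measure behaviour with increasing load and check
# each run against an assumed processing-time constraint of 1 second.
for load in (1_000, 10_000, 100_000):
    elapsed = measure(load)
    print(load, "within budget:", elapsed < 1.0)
```

The same harness shape extends naturally to stress testing by pushing the load beyond the specified limits until the system degrades.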

ISO 9126 defines six quality characteristics, of which five are covered by non-functional
testing.

The first two:

Reliability – a software product is reliable when it performs its required functions under stated
conditions. It is further divided into fault tolerance, maturity, recoverability and compliance.

Usability – a software product is usable if the user easily understands and likes the
interface and finds it easy to operate. It is divided into understandability, learnability,
operability, attractiveness and compliance.

The other three:

Efficiency – the capability of the software to provide appropriate performance relative to
the amount of resources used. It is divided into performance, resource utilization and compliance.

Maintainability – the ease with which the product can be modified to correct defects, meet new
requirements or adapt to a changed environment. It is divided into analyzability, changeability,
stability, testability and compliance.

Portability – a software product is portable when it can be transferred easily from one
hardware/software environment to another. It is divided into adaptability, installability,
co-existence, replaceability and compliance.

Functionality (the sixth characteristic, covered by functional testing) – the process of testing a
software product to determine its specified behavior or functionality. Sub-characteristics include
suitability, accuracy, security, interoperability and compliance.

Structural and Change-based Software Testing

Structural testing is concerned with the internal architecture and workings of the software, and
so is referred to as white-box or glass-box testing.

Structural tests are mostly done at:

1. component level (the code in a program) and

2. component integration level (the architecture of the system, i.e. the calling hierarchy).

Structural testing measures the amount of testing done by checking the coverage of a set of
structural elements or coverage items. In structural testing, tests are based on the logic of the
application code, rather than on requirements as in functional testing.

White-box techniques at the various levels:

1. Component – the structure at component level is the code itself, i.e. statements or decisions.

2. Integration – the test area is the call tree, where modules call other modules.
3. System – tests cover a menu structure, a business process or a web page structure.
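For example, at component level a white-box sketch like the following (hypothetical code, not from the syllabus) shows how two tests can cover both outcomes of a decision:

```python
# Coverage item here: the single decision in grade(). Statement coverage
# needs every line executed; decision coverage needs both the True and
# False outcome of the decision exercised.

def grade(score):
    if score >= 50:       # decision: both outcomes must be covered
        return "pass"
    return "fail"

# Test 1 covers the True branch, test 2 the False branch; together they
# achieve 100% decision (and statement) coverage of this component.
assert grade(75) == "pass"
assert grade(30) == "fail"
print("decision coverage achieved by 2 tests")
```

Note the tests are derived from the code's structure, not from a requirements document, which is the defining trait of structural testing.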

Change-based testing

Confirmation testing – after fixing a defect, the software needs to be retested by re-executing
the test cases that failed last time. The product is considered free of the known defect only when
it passes the retest.

Regression testing – confirmation testing alone doesn't guarantee a quality product; the change
may fail at integration level. A set of tests is designed to check that, despite the changes, the
system still works as expected. These tests are called regression tests.
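A minimal sketch of the two test kinds (the discount function and its assumed past defect are hypothetical):

```python
# Confirmation testing re-runs the test that failed before the fix;
# regression testing re-runs surrounding tests to show the fix
# broke nothing else.

def discount(price, percent):
    # Fixed version; the original defect (assumed for this sketch)
    # subtracted 'percent' directly instead of percent of the price.
    return price - price * percent / 100

# Confirmation test: the previously failing case now passes.
assert discount(200, 10) == 180

# Regression tests: unchanged behaviour still works after the fix.
assert discount(100, 0) == 100
assert discount(50, 100) == 0
print("confirmation and regression tests passed")
```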

Three types of regression:

1. Local regression – a change or bug fix creates a new bug.

2. Exposed regression – a change or bug fix reveals an existing bug.
3. Remote regression – a change or bug fix in one area triggers an error in another area.

Regression testing involves repeating tests. Techniques to decide which tests to repeat:

1. Traceability – trace back to the requirements, identify the behavior and design tests.
2. Change analysis – analyse how changes could affect other portions of the system, which
in turn requires understanding of the code and system design.
3. Quality risk analysis – retests are based on business risk and its priority.

The regression suite should be maintained, because content gets added over time and the test
suite becomes heavy. Either selected tests are run, or, as another approach, test cases that
haven't detected a defect for a long time are eliminated.

Maintenance Software Testing
Maintenance testing activity are carried out on a software in production, which is
modified due to defect, improvements or to adapt modified environment. Maintenance testing
confirms the software is once again free of known defects after modification.

Maintenance include,

1. Modifications – It refers to any enhancements, bug fixes or adaptations to environment
changes, i.e. OS/database changes, patches, upgrades. Modifications can be
a. Planned modification – The activity is well planned
i. Perfective modification – involves feature addition/performance
enhancement
ii. Adaptive modification – involves adaptation to environment changes,
e.g. new hardware, new system software.
iii. Corrective planned modification – refers to changes such as deferred
defects being finally corrected.
b. Unplanned modification / Ad-hoc corrective modification – Occurs when
defects suddenly arise and require immediate attention, e.g. a server patch
that results in a security vulnerability.
2. Migrations – The transfer of a system from one platform to another. It should
include operational testing of the new environment as well as of the changed software.
3. Retirement of the system – This includes testing of data migration, e.g. when moving
from an old server machine to a new one, testing whether the tables are mapped
properly on the new server.

During maintenance testing two fundamental types of test are carried out:

1. Confirmation tests (testing the changes)
2. Regression tests (tests to show the maintenance work doesn't introduce a bug)

To reduce the time spent on maintenance testing, impact analysis is carried out, which determines
the areas that require testing after the maintenance work/change.

[Chapter-3: Static technique]
Static Software Testing Technique
Static Techniques and the Software Test Process

In the initial stages, testing helps to verify that the product meets the requirements; at later
stages, testing checks whether defects occur during product execution. Depending on whether
the software is executed, testing is categorized as

1. Static – Testing done without executing the software, used to detect defects in the early
stages of software development. Static testing techniques include:
a. Reviews – detecting defects in documents such as requirements &
specifications and verifying conformance to standards; defects missed here
may turn into defects after implementation.
b. Static analysis – reviewing source code with automated tools, which
detect violations of programming standards and code syntax.

2. Dynamic – Testing by executing the software product. The goal is to detect runtime defects.
Dynamic testing techniques include:
a. Specification based – requirements/specifications drive the inputs for
specification based testing, which treats the software as a black box with inputs
and outputs. This technique tests what the software does.
b. Structure based – structure based/white box techniques examine the software
implementation by evaluating the code structure. Used in component and
integration testing, especially for code coverage.
c. Experience based – knowledge and skills acquired by experience help to
evaluate the software. This technique helps when the specification is not
adequate and in time-constrained situations.

Static testing | Dynamic testing
Performed in the early stages of software development | Performed at runtime on the running software
Applicable to any software product | Only applicable to executable programs
Offers more financial & scheduling benefits compared to dynamic testing |
Adherence to coding standards can be checked | Coding standards cannot be checked at runtime
Detecting defects during static testing ensures quality at the initial stages of the SDLC, saving time & money compared to dynamic testing |

The Review Process in Software Testing [Review Process]

Review is a static software testing technique executed on documents such as functional
and requirement specifications. Reviews help in identifying ambiguities, deviations from
specified standards and defects in documents. They also enhance knowledge of the product.

There are four types of review,

1. Informal – The author requests a peer or technical lead in the same domain to take a glance
at the document and validate it. It's very cost effective.

2. Walkthrough – A walkthrough is carried out by the author with stakeholders to obtain a
common understanding of the document under review through Q&A. Its main purpose is to
seek consensus. It is especially useful when people from outside the software discipline are
present.

3. Technical review (Peer review) – It involves a team made up of technical experts in the review
process. This type of review ensures the technical concepts, models and technical standards
identified for the product development are accurate and valid for the product. This review is
applied to products of a critical nature.

4. Inspection – This review is carried out by a moderator and a peer group. The leader plans &
leads the review activity. The review is performed by team members led by a trained
moderator. Using rules & checklists, the inspection team finds and records all defects in an
inspection report. Defects are tracked until corrective measures on the document have been
undertaken & closed; only then is the inspection considered complete.

Any of the above review types can be chosen depending on the criticality or nature of the
software. The formality levels of the reviews are categorized below.

Success of a review depends on

1. Defined objectives – A review must have clear, predefined objectives, which help the
reviewers to choose the proper review technique.
2. Nature of the review team – The right people, with expertise/knowledge in the product
domain, should be involved in the review process to locate defects and provide
recommendations.
3. How defects are communicated – Defects found during review should be communicated
objectively and constructively, with no personal attacks on the capability of the author/review
team.
4. Review technique used – The choice of review technique varies as per need. A software
enhancement might require an informal review or walkthrough, while new software development
requires inspection to ensure quality and standards.
5. Review process – When the review process is followed as planned, it ensures the time & cost
incurred in the review are not wasted. It aids in deciding the course of action for review and rework.

The formal review process consists of six stages:

1. Planning – Planning involves the below activities:
a. Defining the review criteria
b. Selecting the personnel
c. Allocating roles
d. Defining entry & exit criteria for formal reviews such as inspection
e. Selecting which parts of the document to review
f. Checking the entry criteria

To improve the effectiveness of the review, different roles are assigned to the participants:

i. Focus on higher-level documents, e.g. does the design comply with the
requirements.
ii. Focus on standards, e.g. internal consistency, clarity, naming
conventions, templates.
iii. Focus on related documents, e.g. interfaces between software functions.
iv. Focus on usage of the document, e.g. for testability and maintainability.

2. Kick-off – During kick-off,
a. documents are distributed to the reviewers in a meeting
b. the review objectives, process, documents and time constraints are explained

The goal of the kick-off is that everyone understands the document under review and
its need, and can clarify doubts through Q&A.

3. Preparation – Participants individually review the documents, check them against
checklists and identify defects and questions. A time period is set for this phase.

4. Review meeting – The review team discloses the defects found during the preparation stage
and their severity. Defects are discussed (handling defects & decisions) and logged per
severity level (critical/major/minor). Minutes of the meeting are recorded and the defect log is
given to the author. Note: spelling errors are not part of the defect classification; they are noted
and given to the author.
The review meeting consists of three phases, depending on the review type:
a. Logging phase – only defects are logged. A good logging rate is 1 to 2 defects
per minute.
b. Discussion phase – defects are discussed and justified.
c. Decision phase – at the end of the review, a decision is taken based on the exit
criteria. If the number of defects found per page exceeds a certain level, the document is
re-reviewed after corrections.

5. Rework – The author of the document fixes the defects identified, as a priority, ensuring the
changes are easily traceable. For issues not fixed, the author should record a proper reason.

6. Follow-up – The moderator follows up on the corrections made for the logged defects by
a. checking that the defects have been addressed
b. gathering metrics
c. checking the exit criteria (for formal review types)
The moderator also collects information such as the number of defects found, the time spent
correcting them and the total effort spent on the review. This information is stored for future
analysis, based on which the leader suggests improvements in the process.

The formal review process is characterized by defined roles and responsibilities assigned to each
member of the review team. The roles include:

1. Manager – It is the manager's responsibility to
a. manage the review process, assigning time for reviews in the project plan
b. check whether the defined review objectives are met
c. decide whether members of the review team need to be trained before the review
2. Moderator (Review leader) – The moderator decides on the entry & exit criteria, review
meetings and documentation. The moderator acts as facilitator during discussions and ensures
the review objectives are met.
3. Author – The author develops the document to be reviewed and fixes the errors found
during review. The author helps the team to understand the gray areas and, importantly,
recognizes the quality issues that can be avoided in future documents.
4. Reviewers (Checkers or Inspectors) – Reviewers check for defects in the document based
on the experience and domain knowledge they possess. They follow standards and
checklists during review.
5. Scribe (or recorder) – Points raised and discussed during the review meeting are
documented by a scribe. It is the scribe's responsibility to ensure defects and suggestions
are logged clearly, so that the author can understand them without confusion.

Static analysis in software testing [Static analysis by tools]

Static analysis involves analysis of software artifacts, e.g. requirements, design and
code, without actually executing the software. It detects code-related defects through analysis,
without executing the code, and helps improve the quality and maintainability of the code.
Static analysis also aids in identifying gaps and logical errors in software models (pictorial
representations of object relations). Static analysis finds defects rather than failures.

Uses:
1. Detects defects in the early stages of development, which reduces cost and time.
2. Detects breaches of standards and metrics in code.
3. Helps to reduce the number of defects found in dynamic testing.

It detects critical defects that exist in the form of:

1. Undefined values in variables
2. Unreachable or dead code
3. Variation errors in interfaces – communication errors between objects, e.g. a function
calls another function for 2 values, but only one is returned.
4. Non-adherence to standards – violation of or non-compliance with coding standards
results in programming standards violations and thereby failures.
5. Security vulnerabilities – e.g. user credentials left unsecured can aid a successful security
attack.
6. Programming language errors – when the compiler detects syntax violations in the code.

Code metrics – Static analysis also deals with code metrics, or attributes of code structure. Code
metrics measure the structural complexity of code and can be used to indicate risky problem
areas, e.g. 20% of the code causes 80% of the problems, as stated by defect clustering. Complexity
metrics identify high-risk, complex areas.

E.g. the cyclomatic complexity or nesting levels of a component are based on the number of
decisions in a program. Cyclomatic complexity is calculated as:

1. the number of binary decision statements + 1, or
2. the formula L - N + 2P, where L is the number of links (edges) in the control flow graph,
N the number of nodes, and P the number of disconnected parts of the graph.
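A worked example of both calculation methods (the classify function and its graph counts are illustrative, not from the syllabus):

```python
# Illustrative cyclomatic complexity calculation for a small function.

def classify(score):
    if score >= 90:      # binary decision 1
        grade = "A"
    elif score >= 60:    # binary decision 2
        grade = "B"
    else:
        grade = "C"
    return grade

# Method 1: number of binary decision statements + 1.
complexity_by_decisions = 2 + 1

# Method 2: L - N + 2P on the control flow graph of classify():
# 6 nodes (the if, the elif, three assignments, the return),
# 7 links between them, and P = 1 connected part: 7 - 6 + 2*1 = 3.
L, N, P = 7, 6, 1
complexity_by_graph = L - N + 2 * P

assert complexity_by_decisions == complexity_by_graph == 3
```

Both methods agree: three independent paths through the function, so at least three test cases are needed to cover every path.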

Code structure – There are many kinds of structural measures to consider:

1. Control flow structure – identifies the sequence in which instructions are executed. It
helps in identifying unwanted or unused/dead code. Code metrics relate to control
flow, e.g. cyclomatic complexity.
2. Data flow structure – follows the trail of a data item as it is accessed and modified by the
code. Defects can be found such as referencing a variable with an undefined value, or
variables that are never used.
3. Data structure – refers to the organization of the data itself, e.g. data arranged in a list,
queue or stack. Sometimes a program is complex because it has a complex data
structure, rather than complex control or data flow.

[Chapter-4: Test design technique]
Development and Categories of Software Test Design [Test development process]
The software test development process essentially covers dynamic testing techniques,
because this type of testing requires you to design tests. To test code for runtime failures, you
need to identify test conditions, specify test cases and create test procedures.

1. Test condition – A critical element in any test is deciding the focus of the test, which
becomes the test condition. Test conditions are derived from the test basis. The test basis
could be a system requirement, a technical specification, the code itself (for structural testing)
or a business process. Many test conditions are possible for a feature, but testing all of
them may not be feasible; only the important test conditions are executed. (test possibilities)
E.g. in a user authentication feature, test conditions may include checking whether the system
accepts an alphanumeric password and a string of letters as user name.

2. Test case – A test case is needed to test one or more test conditions. A test case uses
elements such as input values, preconditions and expected results. In order to know
what the system should do, we need information about the correct behavior of the system;
this is called an oracle (or test oracle).
E.g. in user authentication, password validation and the range of values allowed are test cases.
The software is then tested against specific values and outcomes.

3. Test procedure – The procedure to follow while implementing test cases to verify test
conditions is referred to as a test procedure or test script (or manual test script). It details
the steps to be performed and the order in which they need to be executed.
E.g. in user authentication, for the password field first test whether it accepts six characters and
then check whether it accepts alphanumeric characters.

The IEEE 829 Test Documentation Standard may be used to capture test conditions,
test cases and test procedures, which aids in a well-controlled and unambiguous test process.

Template | Documents | Content
Test Design Specification | Test Condition | Test approach and the exit criteria for the test condition; how to verify the test condition; a listing of one or more test cases
Test Case Specification | Test Case | Inputs for testing, expected behavior/results, dependencies and test environment
Test Procedure Specification | Test Procedure | Test cases logically grouped and ordered, specifying the test procedure; helps in implementing the test cases in order of priority
Both test conditions and test cases need to be traceable, i.e. test cases/conditions linked to
requirements. This is called traceability. Traceability can be

I. Horizontal (test documentation) – e.g. in system testing, from test condition to test case
to test script
II. Vertical (development documentation) – e.g. from requirements to components

Test Design Specification

1. A unique number used to identify the document.
2. Lists the set of objectives covered by the test conditions, which define the features of the
software to be tested.
3. Any refinements of the test approach/test design from what is listed in the test plan are
detailed here.
4. Details of the individual test cases are listed here.
5. Assessment is done based on the information entered in the "Features pass/fail criteria"
section. Test case ID, feature tested and its pass/fail criteria are tabulated here.

Test Case Specification

1. A unique number used to identify the document. It can be named based on the feature to
be tested.
2. Test items list all the components, features and items to be tested. This can also include
references to source documents to ensure traceability.
3. Input specifications list the inputs for test execution, e.g. test data, values, user actions.
4. Output specifications list the expected behavior for the respective input action. This section
is essential to confirm whether the test succeeded or failed.
5. Environment needs describe the hardware and software platforms required for testing;
these should be similar to the real environment.
6. Special procedural requirements list the special requirements/constraints to be considered
for test case execution, e.g. load, setup, operator intervention.
7. If a test case is dependent on any other test case, or is an input to another test case, those
details are listed here for traceability.

Test Procedure Specification

The components of the test procedure specification template are:

1. Test procedure specification identifier – a unique number used as identifier.
2. Purpose – describes the procedure to be followed and mentions the test cases to which it
applies.
3. Special requirements – manual/automated execution and setup needs are detailed here.
4. Procedure steps – list the specific instructions to the tester to execute the procedure,
e.g. logging into the software, setting up and navigating through the software.

[Test design techniques]

Dynamic testing techniques

In dynamic testing, code is executed to check for defects. Three different approaches are available
within dynamic testing to create test conditions and test cases.

1. Specification based technique – also called the black box or behavioral technique, used to
check the outcome of the software by feeding it inputs. Test cases are based on
functional/non-functional aspects of the software.
2. Structure based technique – checks how the software functions internally; also known as
the white box technique. Test cases are based on the internal structure of the component,
e.g. looping, branching, decisions etc.
3. Experience based technique – test cases are based on the experience of the tester.

Black box technique [Specification based techniques]

There are five commonly used black box techniques:

1. Equivalence partitioning – test conditions are divided into logical groups called partitions
if they exhibit similar behavior. A test case is created for one condition from each partition,
as its behavior is representative of the other conditions within the same partition,
avoiding multiple test cases and validations.
2. Boundary value analysis – BVA is used to assess the inputs at the edge or boundary of
each equivalence partition.
3. Decision table testing – decision tables help to validate system requirements that contain
logical conditions. The specifications of the system are analyzed, and its conditions and
actions are tabulated to state the possible combinations of results.
4. State transition technique – uses a state transition diagram to test software that modifies
its response depending on its current condition or the history of its states. A state table
shows the relationship between states and inputs, and can highlight transitions that are
invalid. Tests can be designed to cover these states as well.
5. Use case testing – testing based on real-world use cases/different types of interactions
between actors or systems. This technique helps in acceptance testing.

Not all black box design techniques have an associated measurement technique.
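A small sketch of the first two techniques (the 6-to-12-character password rule is a hypothetical requirement, not from the syllabus):

```python
# Hypothetical rule: a password is valid when its length is 6..12 characters.
# Equivalence partitions for length: <6 (invalid), 6-12 (valid), >12 (invalid).
# Boundary values sit at the edges of the valid partition: 5, 6, 12, 13.

def password_length_ok(password):
    return 6 <= len(password) <= 12

# Equivalence partitioning: one representative test per partition.
assert password_length_ok("abc") is False        # partition: too short
assert password_length_ok("abcdefgh") is True    # partition: valid length
assert password_length_ok("a" * 20) is False     # partition: too long

# Boundary value analysis: tests at the edges of the valid partition.
assert password_length_ok("a" * 5) is False   # just below the lower boundary
assert password_length_ok("a" * 6) is True    # lower boundary
assert password_length_ok("a" * 12) is True   # upper boundary
assert password_length_ok("a" * 13) is False  # just above the upper boundary
```

Three partition tests plus four boundary tests replace exhaustive testing of every possible length, which is the point of both techniques.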

[Structure based techniques]

Statement testing and coverage in software testing

Structure based techniques serve two purposes:

1. Test coverage measurement – test coverage is the extent to which a test suite exercises
a specific test item.
2. Structure-based test case design – if additional coverage is required, the code structure is
used to create test cases using different coverage methods.

Coverage is often misunderstood as "how many or what percentage of tests have been run";
that is better called "test completeness", not "coverage". Coverage is the coverage of
something else by the tests.

Test coverage provides an objective measure and sets exit criteria.

E.g. you’ve written a program that has 20 statements. Your test covers 13 of 20 statements,
which means test coverage is 65%.

Types of coverage
Depending on the system elements to be tested, you can measure coverage at various levels:

1. Component testing – each unit of code, such as a module, object or class, is tested
separately. Unit testing is typically done by developers.
2. Integration testing – the integrated units of code are tested. This measures the coverage of
specific interactions between the units or modules in the code that have been tested.
3. System or acceptance testing – the code is tested as a whole; coverage items include client
requirements and user-interface elements such as menu options.

Alternatively, coverage can be measured for specification based test design techniques,
e.g. EP (% of equivalence partitions exercised), BVA, decision table and state transition
testing.

How to measure coverage

An example of coverage measurement:

1. Decide on the structural element to be used, i.e. the coverage items to be counted.
2. Count the structural elements or items.
3. Instrument the code.
4. Run the tests for which coverage is required.
5. Using the output from instrumentation, determine the percentage of elements or items exercised.

Here step 3 means inserting code that records the usage of each item. Step 5 analyzes the
data generated by the instrumentation code of step 3.
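The steps above can be sketched with Python's built-in trace hook standing in for a real instrumentation tool (the function under test is hypothetical; dedicated coverage tools do this far more thoroughly):

```python
import sys

def max_of_two(a, b):          # the code under test
    if a > b:
        return a
    return b

executed = set()

def tracer(frame, event, arg):
    # Step 3: "instrument" the code by recording each line as it executes.
    if event == "line" and frame.f_code.co_name == "max_of_two":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
max_of_two(5, 3)               # Step 4: run the test
sys.settrace(None)

# Step 5: compare executed lines against all executable lines of the function.
first = max_of_two.__code__.co_firstlineno
all_lines = {first + 1, first + 2, first + 3}   # if / return a / return b
coverage = len(executed & all_lines) / len(all_lines) * 100
print(f"statement coverage: {coverage:.0f}%")   # prints: statement coverage: 67%
```

The single test exercises only the `a > b` path, so the `return b` line is never counted; a second test such as `max_of_two(1, 9)` would bring statement coverage to 100%.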

Testing tools can be used to measure coverage and to create test cases, which improves
productivity and efficiency.

Pseudocode does not belong to any particular programming language; it is a simplified version
of source code that typically covers all the important control structures used in programming.

1. Non-executable – Certain lines of code don't perform any action; they are called
non-executable code and include statements that declare variables.
2. Executable – Executable code statements instruct the computer to take specific actions
and return a result. Depending on the function you want to perform, you can structure
executable code as:
a. Sequence – code statements are in sequence and executed in linear fashion,
without decision points or iterations, e.g. a = b + c.
b. Selection – the selection method uses decision points in the code, e.g. if-else.
c. Iteration – if a specific step needs to be repeated a specific number of times,
the iteration method is used, e.g. for, while, do-while loops.
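The three executable structures can be shown in a few lines of Python:

```python
# Sequence: statements executed one after another, with no decision points.
b, c = 2, 3
a = b + c                # a is 5

# Selection: a decision point chooses between alternative paths.
if a > 4:
    label = "big"
else:
    label = "small"

# Iteration: a step repeated a specific number of times.
total = 0
for _ in range(3):
    total += a           # adds 5 three times

print(a, label, total)   # prints: 5 big 15
```

Real programs mix all three, and it is the selection and iteration statements that create the decision outcomes and paths the coverage measures below count.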

Statement coverage measures the percentage of executable statements exercised by a test
case suite.

Decision testing and coverage in software testing

Control flow diagrams are used to depict program structures. It is easy to analyze the code and
arrive at test cases by understanding the control flow diagram. It uses two symbols: a rectangle
(for sequential statements) and a diamond (for decision statements).

Control flow diagrams can be used to depict structures such as the sequential structure, the
selection or decision structure, and the iteration or loop structure.

Decision coverage measures the percentage of decision outcomes exercised by a test suite.
100% decision coverage guarantees 100% statement coverage, but 100% statement coverage
doesn't necessarily mean 100% decision coverage.
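A short example of why the implication only holds in one direction (the add_bonus function is illustrative, not from the syllabus):

```python
# Hypothetical function: an 'if' with no 'else', so the False outcome of the
# decision executes no statement of its own.
def add_bonus(salary, is_manager):
    if is_manager:
        salary += 1000
    return salary

# A single test with is_manager=True executes every statement:
# 100% statement coverage is reached.
assert add_bonus(5000, True) == 6000

# But the False outcome of the decision was never exercised, so decision
# coverage was only 50%. A second test is needed to reach 100%:
assert add_bonus(5000, False) == 5000
```

Covering both outcomes of every decision necessarily executes every statement, which is why 100% decision coverage implies 100% statement coverage but not vice versa.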

Advanced structure based software testing techniques

Programs often contain numerous conditions, which can lead to numerous paths. To
achieve more thorough coverage, advanced techniques such as condition and path testing can be
used. 100% path coverage implies 100% decision and statement coverage. These techniques are
implemented in safety-critical systems.

Condition coverage checks and evaluates the outcomes of each condition. It is typically
performed after decision coverage.

Path coverage testing is the most comprehensive type of testing; it is difficult and is reserved
for critical sections of code.

In addition to path coverage, the Linear Code Sequence And Jump (LCSAJ) technique can be used
for measuring code coverage. It is a variation of path coverage in which only sub-paths of the
program are exercised, provided the sub-paths execute in a linear manner at run time.

Multiple condition coverage/condition combination coverage and condition determination
coverage/modified condition decision coverage (MC/DC) are other available coverage types.

http://www.hcltech.com/blogs/engineering-and-rd-services/structure-based-or-whitebox-
testing-techniques

https://www.istqb.guru/how-to-calculate-statement-branchdecision-and-path-coverage-for-
istqb-exam-purpose/

http://shailajakiran-testing.blogspot.in/2007/11/statement-coverage-decision-coverage.html

Structure based measures and related test design techniques are described in BS 7925-2.

Experience based software testing technique [Experience based techniques]

When specifications aren't available for test cases, the tester relies on past experience and skills.
This is called experience based testing. Skills and intuition are used to locate defects.

Experience based testing is also called ad-hoc or reactive testing, because the specifics of these
tests are defined while testing the software.

On executing these kinds of tests, two key factors must be watched:

1. Managing time & effort – since this testing is open-ended, testers have to run it in a
"time-boxed" manner.
2. Tracking test coverage – to track coverage, the tester needs test charters containing
high-level descriptions of the test cases.

Types of experience based techniques

1. Error guessing – the tester guesses the location of errors based on experience with previous
applications. A fault attack (or attack) is a directed and focused attempt to evaluate the
reliability of a test object by attempting to force failures to occur.
2. Exploratory testing – a combination of ad-hoc and structure based testing. It is the
process by which the tester designs test cases, executes tests and logs test results based on
a test charter within a time box.

To perform an exploratory test, the sequence of steps to follow is:

1. Study the system
2. Guess the problem areas
3. Design test cases
4. Run the tests

Choosing a software testing technique
What is the most important criterion in deciding which testing technique to use?
Answer: the objective of the test

Internal factors affecting the choice of testing technique

1. The models used while developing the system – the choice of technique depends on the
software models used during specification, design & implementation, e.g. if the spec
requires testing various cases with different outcomes, the decision table testing
technique can be used.
2. Knowledge and experience of the tester – the experience and skill of the tester influence
the test strategy.
3. Potential defects – based on past experience, if certain defects are known to exist, then
experience based testing can be used.
4. Test objective – the technique depends on the test objective; for functional validation a
use case based approach is taken, while if complete & rigorous testing is needed, structure
based techniques can be used.
5. Documentation – if the specifications aren't clear, experience based techniques are used. If
the specifications are detailed and contain diagrams, state transition testing can be used.
6. Life cycle model – if an iterative life cycle model is used, exploratory testing may be
appropriate; in the case of a sequential model, structure based testing techniques can be used.

External factors affecting the choice of testing technique

1. Risk – for safety-critical applications, structure based testing needs to be used for
thorough testing.
2. Contractual requirements – there might be contractual obligations to perform certain
types of test; then a specific set of testing methods needs to be adopted.
3. Type of system – the testing technique can be chosen depending on the type of
system/software.
4. Regulatory requirements – regulatory guidelines influence technique selection, e.g. medical
software may mandate equivalence partitioning and boundary value testing.
5. Time and cost – depending on the time constraints, a combination of test techniques can
be used.

[Chapter-5: Test management]
Test organization and Independence
Dependent and Independent software testing

Two testing styles that an organization can use are:

1. Dependent testing – the developer tests the software.
2. Independent testing – someone other than the developer tests the software.

Independent testing is more beneficial than dependent testing, because it adds a level of
objectivity to the testing process.

The levels of independence, from lowest to highest, are:

Developers – peer review among developers.

Testers within the dev team – testers working under the project manager associated with the dev
team, i.e. integrated testers.

Testers within the organization – testers working within the organization but not associated with
the application being developed.

Testers provided by operational business units.

Test specialists – to implement highly independent test practice, employ specialist testers such
as usability, security and certification testers.

Outsourced testers – the highest level of independence is achieved when a third-party
testing team tests the application. It is expensive.

Benefits of independent testing include:

1. Difference in perspective – independent testers view the product without bias and can
identify varied defects that the developer may miss. The developer's perspective is that of a
creator, and hence tends to overlook/miss critical defects.
2. Verification of assumptions – independent testers verify the assumptions made during the
specification and implementation phases of the test process, hence avoiding defect reports
against invalid defects.
3. Different set of assumptions – independent testers make a different set of assumptions
than developers, which helps to detect hidden problems in the application, e.g. a developer
might think 75% of the software's performance is measurable, but testers try to
achieve a 100% performance measure to detect defects.

Drawbacks of independent testing include:

1. Isolation – independent testing is performed in isolation from the development team,
relying only on test cases and rare updates on the scope of testing.
2. Lack of communication – if the communication process between developer & tester is
not strong, it can result in unexpected delays.
3. Probable delay in delivery – independent testing is time consuming and may delay
project delivery, because developers might refuse to fix defects due to a difference in
perspective. This testing is usually done at the final stage of the project to avoid bottlenecks
in the early stages.
4. Less responsibility – independent testing reduces the sense of responsibility in the
development team, e.g. developers might submit software with known errors to the test
team, knowing that existing errors take time to fix, and wait for the test team to report
the errors.

In addition to the dependent and independent approaches, there is one more approach:

Agile approach – in this approach quality assurance is more integrated with the development team.
Developers are aware of their responsibility for quality. The agile testing approach ensures minimal
overhead in communication and coordination.

Tasks of the test leader and tester in software testing

A test leader is a skilled professional who manages the testing activities and their resources.

A person who performs testing, such as system or component testing, is known as a tester.

A test role is not the same as a test job. A role can be defined as one or many
responsibilities to which an individual is assigned; a test job is what an individual is employed
to do. A job may consist of one or more roles, e.g. a tester can also be a leader.

Test leader:

The test leader manages the test team, and plans, monitors and implements testing activities
and tasks. Test leaders can also be test managers, test coordinators, test team leaders or test
program managers. They must have the mindset and training to plan, monitor and control testing
effectively.

Tasks that test leaders perform are:

1. Creating test policies and strategies
a. The test policy contains the rules of testing, e.g. if the software is driven by design
statements, the leader can state a test policy of reviewing the design before testing.
b. The test strategy provides a high-level overview of the methods used to test the software,
e.g. depending on the software, the leader might decide to adopt system testing.
2. Selecting the testing tools and setting up the environment
a. The test leader determines the tools required for testing, such as spreadsheets, project
planning tools, word processors etc. Adequate training is also arranged, e.g.
testers should be familiar with the defect logging tool.
b. The test leader decides the structure and implementation of the test environment, e.g.
the supporting hardware and software required for testing.
3. Planning and managing the tests
a. The test leader selects the test approach, estimates the time, effort and cost of testing,
defines test levels and cycles, and plans incident management.
b. The test leader also manages the specification, preparation, implementation and
execution of tests.
4. Ensuring configuration management
a. Configuration management means identifying the items that make up the
software or comprise a system. The test leader ensures appropriate configuration
management of the testware and traceability of the tests.

Additional tasks of the test leader include,

5. Planning and implementing automation
6. Preparing test schedules
7. Assisting other project activities
8. Measuring test progress
9. Preparing test summary reports

Tester:

Basic qualifications that a tester must have include the ability to communicate effectively and to prepare and present reports efficiently.

Apart from these basic qualifications, a tester should acquire skills in

1. Application – the tester should be aware of the specs, benefits and limitations of the application being tested.
2. Technology – a tester must be able to determine the important functions and features of the system being tested, its intended behavior, the issues it resolves and the processes it automates.
3. Testing – being accountable for test execution, testers should be expert in all test processes and techniques used to evaluate software.

The tasks of a tester include

1. Creating test plans and test designs – developing and reviewing test plans is a prime task of a tester. After the test plan outlines the testing process, the tester creates the test design, test conditions, test cases, test procedures and test data.
2. Analyzing test specifications – the tester needs to analyze, review and assess specifications of the application, such as user requirements, design specifications and test models.
3. Setting up the test environment – coordinating with system administration and network management teams to set up the environment is another task.
4. Implementing tests – testers implement test suites for all test levels in a test environment.

Testers also perform tasks such as

5. Reviewing tests
6. Using testing tools
7. Automating tests
8. Measuring performance
9. Performing peer review

Test planning and estimation
Software test planning:

The software test planning process includes organizing test activities, identifying the resources and assigning responsibilities to them. The test procedures, strategy, schedule and deliverables also need to be listed in the planning stage. The test plan document produced by planning acts as a tool for the software testing process.

A single master test plan should exist for a project.

The IEEE 829 Standard for software test documentation provides a template with a standardized method for performing tests, and also helps you write, record and track tests throughout the life of a project.

The IEEE 829 standard test plan template includes 16 sections, they are

1. Test plan identifier –
a. A unique number to identify the test level
b. The software associated with the plan
c. The current version of the test plan
2. Introduction – introduces the project scope and key features.
3. Test items – the items that need to be tested, such as source code and control data, with references to design and requirements specifications.
4. Features to be tested – a list of the product features that need to be tested.
5. Features not to be tested – features that do not require testing, and the reasons why.
6. Approach – the test approach and the factors affecting its success may be documented here.
7. Item pass/fail criteria – items are marked pass/fail depending on the number and severity of defects.
8. Suspension and resumption criteria – documents the criteria for suspending or resuming the test process.
9. Test deliverables – provides a list of documents and tools for the testing effort, e.g. test design specifications, custom tools and simulators.
10. Remaining test tasks – in a multi-phase project, this section lists the tasks that are in scope and those that are part of future development.
11. Environmental needs – documents the hardware, software, interface and security access requirements.
12. Responsibilities – defines the responsibilities of the people involved in the tests, including those who establish the test environment and manage the software configuration.
13. Staffing and training needs – lists the skill requirements and the training needed on the testing methodology adopted.
14. Schedule – lists key milestones, such as document delivery dates, and the resources available in the time frame.
15. Risks and contingencies – project risks, such as budget and resource availability, and the contingency plans associated with these risks.
16. Approvals – signatures of the responsible parties are collected.

Master test plan

Test plans can be created at various levels, and these plans are derived from the master test plan. The various test levels are

1. Acceptance test plan – acceptance testing ensures requirements are met. This planning takes place at an early stage of the product life cycle, after the high-level requirements are defined.
2. System test plan – system testing is a comprehensive test of the performance and reliability of the software. A system test plan is based on the requirement and design specifications, in accordance with the master test plan.
3. Integration test plan – in integration testing, the interaction of the system's components and their cohesive functioning are tested. Integration test planning can start after the software design has stabilized.
4. Component test plan – component testing tests individual units of source code.

Factors that influence the test planning process and test plan

1. Product – product knowledge helps in performing efficient testing. Any risk that could cause the product to fail should be understood at an early stage of product development.
2. Process – test planning is influenced by factors affecting the test process, such as the availability of testing tools. Using these tools can reduce the manual effort that would be required to execute the tests.
3. People – this factor includes the skills of the individuals on the test team and their relevant experience with similar software products and testing projects.
4. Approach – different approaches are required for testing different types of software products. E.g. testing a banking application is more critical than testing a basic website.

Test planning involves test estimation, test strategy, and entry and exit criteria:

 In order to obtain an accurate test estimate, one has to analyze the software test project; this helps mitigate the risk that the available resources, test environment and funding are inadequate for the project.
 A test approach aligned with risk influences the success of testing.
 Before delivering the product, it is necessary to ensure the exit criteria are met.

Test estimation

To estimate the overall cost of a project or its duration, two approaches are used,

1. Metric-based – relies on test data for estimation. E.g. the testing cost can be based on the number of test cases; alternatively, the written and executed test cases, the time required to run and execute them, and the defects found can be used to calculate the estimate.
2. Expert-based – uses experts' data to split the test process into several test tasks; summing the effort of each task helps you estimate the cost of the whole project. With their guidance you can determine the test scope and the risks involved. These experts may be technical, analytical or business experts.

These approaches can be combined for estimation. E.g. experts' data can be used to split the test process into several tasks, past records can identify the test duration and number of test runs, and the project can then be estimated.

Bottom-up estimation arrives at the estimate through meetings and planning, by summing the estimates submitted by individuals.

The tester-to-developer ratio is an example of top-down estimation, in which the entire estimate is derived at the project level.
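The metric-based approach above reduces to simple arithmetic over historical averages. A minimal sketch follows; the per-case hours and hourly rate are hypothetical figures, not values from the syllabus.

```python
def metric_based_estimate(num_test_cases, hours_per_case, hourly_rate):
    """Estimate total test effort and cost from historical per-test-case averages."""
    effort_hours = num_test_cases * hours_per_case
    cost = effort_hours * hourly_rate
    return effort_hours, cost

# Suppose past records show roughly 1.5 hours to write and execute one
# test case, at 40 currency units per hour -- both figures are assumptions.
effort, cost = metric_based_estimate(200, 1.5, 40)
print(effort, cost)  # 300.0 12000.0
```

An expert-based estimate would instead sum per-task figures supplied by experts; the two can be combined by letting experts define the tasks and past metrics supply the per-task numbers.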

Key factors influencing test effort are

1. Product – project documentation helps in learning about product requirements. E.g. managing avionics time constraints requires intricate knowledge of the product requirements.
2. Process – the cost of a test project is directly proportional to its size and life cycle. E.g. test execution tools such as automation reduce time and costs. Rework costs can be reduced when an effective test management system is available.
3. People – a good test team with relevant experience and skills is more capable of performing efficient testing. E.g. in a team that changes often, experience and knowledge levels may differ, which affects test efficiency.
4. Test results – if high-quality software is delivered to test and defects are fixed quickly, unexpected delays and re-executions are prevented, resulting in timely delivery.

Test strategy

A test strategy defines the test levels and their corresponding activities within the project. With a strategy-led approach, you understand how to perform a test, identifying the requirements of the tests from beginning to end. The strategy you apply also helps identify the risk analysis to be carried out, the design techniques and the exit criteria to be applied.

Various software testing strategies include,

1. Analytical – involves analysis of the requirements of a project and the associated risks; the results of the analysis are used in preparing plans, designs and test estimates.
2. Model-based – uses a model that describes the functional aspects of the product. Well suited to automation.
3. Methodical – a planned, systematic approach that gathers information on the areas of testing and develops a checklist based on this information.
4. Process- or standard-compliant – an external testing approach is adopted, based on standards such as IEEE 829 for developing a template for the test strategy.
5. Dynamic – this type of strategy operates at runtime to create test possibilities and find defects, e.g. exploratory testing.
6. Consultative or directed – involves consulting with an end user or a developer to understand the product and formulating an approach based on that knowledge.
7. Regression-averse – regression occurs when software changes. In this strategy, tests are rerun to detect and report defects.

These test strategies fall under either a preventive or a reactive approach. They can be combined to form a single strategy. Blending strategies depends on many factors, such as

1. Risks
2. Skills
3. Objectives
4. Regulations
5. Product
6. Business

Entry and exit criteria

Entry criteria are used to establish the conditions and timing for a given test activity to commence. Starting a test inappropriately may result in a loss of time and resources.

Entry criteria include,

1. Test environment availability
2. Test tools installed and available in the test environment
3. Test items available for testing
4. Test data available and ready to use
5. Test design activity complete

Exit criteria identify any inadequacies and incomplete testing tasks during project execution. Completion is successful when the exit criteria are met.

Exit criteria include,

1. Defects
2. Coverage
3. Tests
4. Quality
5. Budget
6. Schedule
7. Risks

Exit criteria vary according to test level, e.g. the exit criteria for component testing are different from those for system testing.
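Entry criteria act as a gate on starting a test activity. A minimal sketch of that gate follows; representing each criterion as a named boolean, and the specific names, are illustrative assumptions.

```python
# Entry criteria for a test level, each marked met / not met.
ENTRY_CRITERIA = {
    "test environment available": True,
    "test tools installed and available": True,
    "test items delivered": True,
    "test data available and ready to use": False,  # e.g. data load still pending
    "test design activity complete": True,
}

def unmet_criteria(criteria):
    """Return the entry criteria that still block the start of testing."""
    return [name for name, met in criteria.items() if not met]

blockers = unmet_criteria(ENTRY_CRITERIA)
print(blockers)  # ['test data available and ready to use']
```

Exit criteria can be modeled the same way at the end of a test level: testing completes successfully only when the list of unmet criteria is empty.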

Software test progress Monitoring and control

Data about testing activities are collected to assess whether the activities are progressing as planned. These data are known as metrics.

Metrics are collected at the end of each test level, which helps verify that the testing activities adhere to the allocated schedule and budget.

Some of the metrics collected include,

1. Percentage of work done – the work percentage is the number of test cases prepared by the tester against the time taken. It includes the effort and time spent preparing the test environment.
2. Performance of test cases – the execution of test cases against time, and the percentage of successes, are used to determine this metric.
3. Defect information – there are many defect-related metrics, such as defect density, which is the ratio of the number of defects identified to the size of the component or system. Other metrics include the number of defects fixed and the number of times a fixed component/system is retested. When the gap between the number of open defects and the number of closed defects decreases drastically over a time frame, the product is fit for release.
4. Extent of testing – the extent of testing relates to the testing approach and ensures each test progresses as planned. E.g. testing a component thoroughly through each code segment is one measure; testing only the potentially risky parts of the code is another. The extent of testing also varies with test levels.
5. Confidence level of testers – when testers' confidence in the quality of the product is too high, serious issues may be overlooked; if confidence is too low, unnecessary tests may be run. Aim for a balanced, bias-free approach to testing. The ratio of testing effort to defects found gives an indication of the testers' confidence level.
6. Deliverable dates and testing costs – this metric collects data on delivery dates and testing cost consumed, to determine whether each test level can be completed on time and within budget. If not, testing may be stopped, the scope may be modified, or the plan may be kept as-is.

These metrics can be recorded using various tools, e.g. the IEEE 829 test log template, spreadsheets or bar graphs.

The IEEE 829 test log template consists of three sections.

1. Test log identifier – a unique number to identify the log.
2. Description – describes the software being tested and the test environment.
3. Activity and event entries – information about testing activities and events, e.g. the testing process, its outcome, the defects identified, and the description and location of the defect report.

Failure rate – the ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time or failures per number of transactions.

Defect density – The number of defects identified in a component/system divided by the size
of the component/system.
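Both metrics just defined are simple ratios. The sketch below computes them; the failure counts, transaction count and component size are illustrative numbers, not data from the text.

```python
def failure_rate(failures, units):
    """Failures of a given category per unit of measure (e.g. per hour, per transaction)."""
    return failures / units

def defect_density(defects, size):
    """Defects identified divided by component/system size (here: KLOC, an assumption)."""
    return defects / size

print(failure_rate(4, 200))      # 0.02 -- 4 failures in 200 transactions
print(defect_density(30, 12.5))  # 2.4  -- 30 defects in a 12.5 KLOC component
```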

Test summary report

Metrics and data collected while monitoring a test level need to be shared with other team members. To share the information in an easy-to-understand form, a report called the test summary report is prepared.

The IEEE 829 standard template for the test summary report includes,

1. Test summary report identifier – a unique number to identify and track the report.
2. Summary – documents a summary of the testing activities that were part of the test level.
3. Variances – this section records the deviations between the actual testing and the strategies, specifications, procedures and guidelines mentioned in the test plan. A testing manager uses this data for future improvements.
4. Comprehensive assessment – specifies the extent to which the tester completed the test activities and met the exit criteria at the end of the test. It also confirms whether the test activity conformed to the test plan guidelines.
5. Summary of results – includes a summary of the output and results of the testing process, along with the types of defects and the resolution process.
6. Evaluation – each tested component/product is evaluated against the specified requirements and metrics. E.g. integration tests between software components are recorded based on expected/unexpected behavior, and their defect condition, assessed on performance and reliability, helps evaluate the software.
7. Summary of activities – a summary of the major testing activities and events is recorded, along with the total resources, time and budget allocated for testing.
8. Approvals – records the parties responsible for approving the test summary report. Signatures are obtained, acknowledging understanding and approval of the test results.

Test Control

Every testing activity can be delayed by risks, e.g. a tester is unable to test the product because of access rights, or code delivery is delayed. A few measures can be taken to control the test. Test control decisions are based on test metrics and on the information in the test summary report. E.g. if the test summary report indicates a deviation from the test plan and schedule, control measures are taken to minimize the deviation.

The test manager performs the following test control activities,

1. Analyzing test-monitoring data – test control decisions are taken using test monitoring data; e.g. a high percentage of failing test cases may suggest that more stringent guidelines be adopted for preparing test cases.
2. Changing test schedule and priority – to accommodate delays, the test schedule and the priority of test activities are changed. E.g. if a crucial component is not available for testing on the scheduled date, the tester is assigned to test available components or other critical tasks.
3. Setting acceptance criteria – by setting acceptance criteria for retesting, defects can be minimized or eliminated. E.g. the tester accepts a component for retesting only if the development team has tested it thoroughly and declared it defect-free.
4. Reviewing risks – testing can be controlled by reviewing the risks. E.g. during the course of testing, the risks associated with the product may change, and the test plan should adapt. This helps eliminate unnecessary testing.
5. Adjusting the scope of testing – if new features are added or changes are made to a component at a later stage of development, the scope of testing needs to be adjusted accordingly.

Configuration management, risks and incidents in software testing


Configuration management in software testing

When software undergoes changes, due to testing, a change in requirements or a change in functionality, it is important to identify and record configuration items such as the OS used, networking elements and the source code modified. This activity is called configuration management.

Version control is used to manage configuration items as they go through iterations. Each version-controlled item has a unique name, a version number and attributes related to the configuration item. This helps identify incidents and trace them back to test cases during the actual testing.

Configuration management involves two functions,

1. Library management – library management involves administrative functions such as
a. managing and documenting the changes made in the product
b. versioning the software
c. maintaining a baseline of the software for easy tracking
2. Change control board (CCB), or bug triage committee – the CCB analyzes the incident reports raised during testing and decides how to handle them, e.g. as critical, minor or enhancement. The board comprises end users, testers, developers and stakeholders of the product. It is responsible for conflict resolution and release management.

IEEE 829 Standard: Test Item transmittal report template

1. Transmittal report identifier
2. Transmitted items
3. Location
4. Status
5. Approvals

Risks and testing

A software project can face risks because of uncertainty, making risk management an integral part of the project. You can minimize potential risks and maximize opportunities by creating a risk management plan, which also allows you to monitor your project.

Software testing typically faces two types of risk.

1. Project risks – project risks are unfavorable events that may impact the achievement of one or more project objectives on time. E.g. exceeding development estimates, a change in the scope of testing, inadequate training, budget constraints or third-party software dependencies.
2. Product risks – any risk that impacts the quality of the product being delivered constitutes a product risk. E.g. an inadequate test process, inefficient testers, failure of the software to meet its specification etc.

Project risks

Managing risks (this applies to any risk, product or project) involves,

1. Ignore – ignore a risk if its probability of occurrence and impact are low. E.g. if a tester goes on vacation, other testers are used to manage the gap. Also ignore in situations where nothing can be done to resolve the risk.
2. Transfer – if a risk is highly probable, it can be transferred to a third party. This does not eliminate the risk.
3. Contingency – risk events are identified and a response plan is created that will be executed if certain predefined events occur.
4. Mitigate – some risks are mitigated to minimize their negative impact, e.g. an experienced member leaving the team. Taking early action to reduce a potential risk's impact is more effective than trying to react after the risk occurs.

Risk types that occur in a project are,

1. Supplier issues – a supplier might fail to provide, on time, critical materials required for the testing environment.
2. Organization issues – a shortage of skilled testers, inadequate training, outdated development and testing practices, and failure of testers to communicate and respond to test results.
3. Product changes – changes in the product requirements may result in additional effort and a delayed schedule. They can result in modified or new test cases.
4. Test environment issues – inadequate testing environments, such as limited network connectivity, poor system configuration or unavailability of required applications, result in a delayed project or misleading data.
5. Faulty test items – test cases/test items that cannot be executed in the test environment, e.g. test cases that do not target critical areas of the product. A smoke test needs to be performed before starting the test process.

6. Technical problems – risks arising from technical problems such as difficulty in defining test requirements, constraints that limit the effectiveness of test cases, poor product design quality or highly complex systems.

Product Risk

Instead of concentrating on all features equally (conventional testing), risk-based testing can be conducted. In this approach, features that are more likely to fail and features that have a big business impact are given high priority. These are the two classifications of risk levels.

A risk analysis table can be used to categorize the features,

1. Map the probability of failure for each feature to be tested.
2. Map the impact of the failure on the customer's business.
3. Risk priority is calculated by adding the probability of failure and the impact of failure. E.g. if
High = 3
Medium = 2
Low = 1
then Shop online has 3 + 3 = 6
and View item has 1 + 3 = 4

o When two features have the same risk priority, the "impact of failure" rating acts as the tie-breaker.
o Alternatively, the two values can be multiplied to get the risk priority value.
o Global attributes can also be included in the risk analysis table.
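The risk analysis calculation above can be sketched in a few lines. The feature names and ratings mirror the example; the sort-based tie-breaking on impact is an interpretation of the text, not a prescribed algorithm.

```python
SCORE = {"high": 3, "medium": 2, "low": 1}

def risk_priority(probability, impact, multiply=False):
    """Combine likelihood of failure and business impact into one rating."""
    p, i = SCORE[probability], SCORE[impact]
    return p * i if multiply else p + i

# (probability of failure, impact of failure) for each feature, per the example.
features = {
    "shop online": ("high", "high"),  # 3 + 3 = 6
    "view item": ("low", "high"),     # 1 + 3 = 4
}

# Rank by priority, with impact as the tie-breaker, highest risk first.
ranked = sorted(
    features,
    key=lambda f: (risk_priority(*features[f]), SCORE[features[f][1]]),
    reverse=True,
)
print(ranked)  # ['shop online', 'view item']
```

Passing `multiply=True` gives the alternative multiplicative priority mentioned in the text (e.g. high/high becomes 3 × 3 = 9).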

Basic incident report in Software testing

When software produces results other than expected, such a result is called an incident. If the incident causes the software to malfunction, it is categorized as a defect.

An incident report enables incidents to be categorized. Its purposes may include,

1. Providing feedback to developers – complete and adequate information in the report will help the developer fix the defect.
2. Tracking the quality and progress of testing – incident reports give the test manager/leads an overview of product quality. E.g. numerous bugs mean poor software quality.
3. Improving the test process – the test process can be improved by determining the phase in which each incident or defect is detected.

Logging and managing incidents in software testing.

A typical incident report has four main sections, as the IEEE 829 standard template for incident reports states,

1. Test incident report identifier – a unique ID is provided for the incident.
2. Summary – a brief description of the incident.
3. Incident description – the incident is described here. Subsections include
a. Inputs – records the input data that triggered the incident.
b. Expected results – documents how the software is expected to function for the input.
c. Actual results – the actual result observed after execution is recorded here.
d. Anomalies – how the actual result differs from the expected result; any unusual behavior should be documented.
Other subsections include,
e. Date and time of the incident – a record of when the incident occurred.
f. Procedural steps – the steps to follow to reproduce the bug.
g. Environment – the hardware/software environment used.
h. Number of attempts to repeat the incident – the consistency of the bug.
i. Comments – additional information from the tester, if any.
4. Impact – describes how the incident would impact the end user, and assigns the incident a severity from low to high impact.

Defect detection percentage (DDP) compares field defects with test defects and is an important metric of the effectiveness of the test process.

DDP = defects (testers) / (defects (testers) + defects (field))
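The DDP formula computes directly; the defect counts in the example are illustrative.

```python
def ddp(test_defects, field_defects):
    """Fraction of all known defects that testing caught before release."""
    return test_defects / (test_defects + field_defects)

print(ddp(90, 10))  # 0.9 -- testing found 90% of the known defects
```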

Stages involved in the life cycle of an incident report are shown below,

1. Reported – when the incident report is logged, it is in the reported stage. The test manager/peer reviews the incident and decides whether to promote it to the "opened" state or send it back for rewriting.
2. Rejected – if the incident is not reproducible, or the defect cannot be justified, the incident report is moved to the rejected stage. The test manager might also reject an incident report so that more information is added, when the report contains inadequate information.
3. Opened – when the incident report has passed the initial review.
4. Deferred – when the project team accepts the incident report as a defect but holds it from fixing temporarily, the incident is in the deferred state.
5. Reopened – in case of a failed confirmation test, or a problem observed after the incident is closed, it must be reopened and reassigned.
6. Assigned – when the project team approves the repair, a programmer starts to fix the defect.
7. Fixed – the programmer sends a fixed report after fixing the defect. Testers perform confirmation testing based on this report.
8. Closed – the incident report is closed when the tester confirms the defect is fixed.

At each stage of the incident life cycle a distinct owner is responsible, except in the rejected, deferred and closed states.
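The stages above behave like a small state machine. The transition map below is an assumption reconstructed from the stage descriptions, not a normative ISTQB definition.

```python
# Allowed next states for each stage of an incident report (an interpretation
# of the life cycle described above).
TRANSITIONS = {
    "reported": {"opened", "rejected"},
    "opened": {"assigned", "deferred", "rejected"},
    "deferred": {"assigned"},
    "assigned": {"fixed"},
    "fixed": {"closed", "reopened"},  # confirmation test passes or fails
    "reopened": {"assigned"},
    "closed": {"reopened"},           # problem observed after closure
    "rejected": set(),
}

def move(state, target):
    """Promote an incident, refusing moves the life cycle does not allow."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"cannot move incident from {state!r} to {target!r}")
    return target

# Walk one incident through the happy path.
state = "reported"
for step in ("opened", "assigned", "fixed", "closed"):
    state = move(state, step)
print(state)  # closed
```

Encoding the life cycle this way is what a defect tracking tool effectively does: it rejects illegal promotions (e.g. reported straight to fixed) and keeps the incident's history auditable.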

A defect tracking tool can be used to track defects, analyze them and prepare metrics. The major attributes of a defect tracking tool should include,

1. User-friendly interface – the interface should give users a quick and easy way to report information.
2. Customizable fields – caters to users of various disciplines, letting them customize fields to their needs.
3. Discrete data analysis – with this feature, managers can easily extract data in the form of graphs, charts or tables to analyze it.
4. Dynamic incident log – a dynamic log for each incident recorded in the report enables the progress of the incident to be tracked through the various stages of its life cycle.
5. Organization-wide accessibility – the tool should be accessible across the organization at any time, without users having to wait while someone else accesses it.

[Chapter 6: Tool support for testing]
Tools support for testing [Types of test tools]
There is tool support for many different aspects of software testing.

Generic reasons for using test tools might be,

1. Aiding test execution
2. Managing the testing process, e.g. test results, data, requirements, incidents etc.
3. Monitoring the activity of an application
4. Tools such as spreadsheets to indicate test progress and documentation

Objectives of using a tool could be,

1. Improving efficiency by automating repeated tasks, such as regression replay.
2. Automating tasks such as static testing of code.
3. Testing large-scale applications where manual testing is humanly impossible, e.g. load testing.
4. Increasing the reliability of testing, e.g. when comparing large volumes of data.

Probe effect is the effect on the component/system by the measurement instrument.

Test Tool classification

1. Tool support for management of testing and tests
2. Tool support for static testing
3. Tool support for test specification
4. Tool support for test execution and logging
5. Tool support for performance and monitoring

Tool support for management of testing and test

Tools used in software testing can be classified according to the activity they support. Tools designed for managing the testing process come under the management classification.

Tools for the management of tests and testing procedures include,

1. Test management tools – used for managing the testing process. These tools are used by expert testers/managers during system or acceptance testing, and they contain support for,
a. Managing testing activities and tasks
b. Managing test procedures
c. Providing management progress reports based on metrics
d. Interfacing with other tools

2. Requirement management tools – these tools are used to store and manage requirements, and to ensure their consistency and integrity during the life cycle of the testing. They are used to,
a. Capture and store requirements
b. Identify defects in requirements, e.g. ambiguous statements
c. Identify any changes to other items
d. Calculate requirements coverage metrics
3. Incident management tools – these tools manage defects and incidents, such as anomalies, enhancement requests and suggestions, recorded during testing. They are used to create incident reports. Incidents can also be searched, analyzed and presented as management information, used in planning and estimating new projects. An incident report contains information on all the stages the incident passes through (e.g. opened, rejected, duplicate, deferred, assigned, ready for confirmation testing, closed). Incidents are stored in a database with relevant fields such as severity, current status, people involved etc.
4. Configuration management tools – these tools keep track of the versions of the software being tested, along with related information on the testing setup. They are highly useful when complex systems undergo changes. Mapping this information allows traceability. These tools allow you to,
a. Store information
b. Trace testware to versions and vice versa
c. Perform other miscellaneous activities

Tool support for static testing

Tools available for static testing are,

1. Review process support tools – track the review process and log all review details.
2. Static analysis tools – help developers identify programming-language-related issues and enforce coding standards, e.g. syntax errors, invalid code structure, references to variables with null values.
3. Modeling tools – used by developers during the analysis and design stages of product development to validate models of a system or software. They are used before dynamic tests are run, to find omissions, inconsistencies and defects early in the life cycle of product development.

Tool support for test specification

Tools available for test specification are,

1. Test design tools – help in generating test inputs. When an automated oracle/test basis is available, test design tools can also help generate test cases with expected results. There are different types of test design tools with varying levels of automation, e.g. a screen scraper, which captures UI elements and generates a test design but, unless the tool has access to an oracle, may not know the expected outcome for an action. Another example is a test design tool bundled with a coverage tool. Features of test design tools include support for,
a. Generating test input values from
i. Requirements
ii. Design models
iii. Code
iv. Graphical user interfaces
v. Test conditions
b. Generating expected results, if an oracle is available to the tool.

2. Test data preparation tools – Data preparation tools help you to create data, such as
fictitious names and addresses, for use in test cases. They are useful when a large
volume of data is needed for testing. Test data preparation tools help in
a. Generating new records with random related data
b. Sorting/rearranging existing records differently
c. Extracting selected data records from files or databases
d. Using a template to construct a large number of similar records for volume data.
e. Modifying/masking data records for data protection, so that the data is not
connected to real people

A test script is essentially program code, so a programmer is required to write/modify
scripts. A test script is used to drive the software and compare the actual results to the
expected results. Some tools capture user actions and create corresponding code to form a
test script.
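
In miniature, such a script just executes a step and compares actual to expected. The function under test (`add`) is invented for illustration:

```python
def add(a, b):
    """The software under test (a hypothetical component)."""
    return a + b

def run_test(func, args, expected):
    """Execute one test step and compare the actual result to the expected result."""
    actual = func(*args)
    verdict = "PASS" if actual == expected else "FAIL"
    return f"{verdict}: {func.__name__}{args} -> {actual} (expected {expected})"

print(run_test(add, (2, 3), 5))   # PASS: ...
print(run_test(add, (2, 2), 5))   # FAIL: ...
```

Capture/playback tools generate code of roughly this shape automatically from recorded user actions.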

An unmanageable number of tests can be cut down by risk analysis; using a technique such
as an orthogonal array can also help.

Tool support for test execution and logging

Tools enabling test execution and logging are,

1. Test execution tools – Some test execution tools, also called capture/playback tools,
automatically run test scripts and record the results in a test log. These tools help you
avoid manual errors both during execution and during comparison.
2. Test harness/unit test framework tools – These two types of tool are grouped together
because variants of them are used to support development activities, e.g. testing components.
a. A test harness provides stubs and drivers, designed to call/input to, or to accept
output from, the software module being tested.
b. Unit test framework tools provide support for object-oriented and other software;
"xUnit" tools can be used to create a test harness, e.g. NUnit for .NET and
JUnit for Java.
3. Test comparators – These compare the software's actual output to the expected output.
E.g. when testing the accuracy of a data transfer from database to database, a test
comparator eases the manual comparison job. The two types of test comparison are

a. Dynamic comparison – when the comparison is performed in real time/during
execution.
b. Post-execution comparison – comparing after the test has finished. Useful when
comparing a large volume of data.
4. Coverage measurement tools – These measure the percentage of quantifiable structural
elements, or "coverage items", that are covered by a given test suite. The process of
identifying the coverage items at component level is called "instrumenting the code".
The steps used are,
a. Instrument the code
b. Test the instrumented code
c. Identify the coverage items that were exercised
d. Remove the instrumentation from the code
5. Security tools – These tools are used to test a system's resistance to security threats
such as computer viruses, worms, or denial-of-service attacks. The tools are used to,
a. Identify virus attacks
b. Identify weak passwords
c. Identify open ports and points of attack
d. Simulate different types of external attacks.
e. Detect intrusions such as denial-of-service attacks.
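
Python's built-in unittest module is itself an xUnit-style framework, so a tiny example shows what such tools provide: test case organization, assertions, execution and reporting. The component under test (`reverse_words`) is invented for illustration:

```python
import unittest

def reverse_words(text: str) -> str:
    """Component under test (hypothetical): reverse the word order."""
    return " ".join(reversed(text.split()))

class ReverseWordsTest(unittest.TestCase):
    """xUnit-style test case: the framework discovers and runs test_* methods."""
    def test_two_words(self):
        self.assertEqual(reverse_words("hello world"), "world hello")

    def test_empty(self):
        self.assertEqual(reverse_words(""), "")

# Run the suite programmatically; the framework handles collection and reporting.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ReverseWordsTest)
result = unittest.TestResult()
suite.run(result)
print(result.wasSuccessful())  # True
```

NUnit and JUnit follow the same pattern in their respective languages, which is why the family is collectively called "xUnit".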

Tool support for performance and monitoring

1. Dynamic analysis tools – These tools analyze the software code while it is running.
Developers use these tools during component and integration testing. Such tools can,
a. Identify memory leaks
b. Identify pointer arithmetic errors such as null pointers.
c. Detect time dependencies
d. Report the status of the software during execution
e. Monitor the allocation/use/de-allocation of memory
E.g. a web spider tool helps to identify dead links in a website.

2. Performance testing tools – These tools test the performance of a system under
different load or usage patterns. They measure system characteristics such as response
time and mean time between failures. They perform,
a. Load testing – tests whether the system can cope with the expected number of
transactions/load. Logs are used to generate reports/graphs.
b. Volume testing – tests whether the system can handle a large volume of data.
E.g. if a file contains many records, testing whether the system can open and
display all the records.
c. Stress testing – tests whether the system can handle more transactions than it is
designed to handle, by combining load and volume testing.

3. Monitoring tools – These tools monitor the status and performance of systems in use.
They are used after deployment, in order to give the earliest possible warning of problems
and to improve service. They cover IT infrastructure such as servers, databases, networks,
internet usage and applications.
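
The response-time measurement that performance and monitoring tools automate can be illustrated with a toy loop. The workload function is invented; real tools drive many concurrent virtual users against a live system:

```python
import time
import statistics

def workload():
    """Stand-in for one transaction under test (hypothetical); ~1 ms of work."""
    time.sleep(0.001)

def measure(n: int) -> dict:
    """Run the workload n times and summarize response times in milliseconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        workload()
        samples.append((time.perf_counter() - start) * 1000)
    return {"mean_ms": statistics.mean(samples), "max_ms": max(samples)}

stats = measure(50)
print(stats["mean_ms"] >= 1.0)  # True: each call sleeps at least ~1 ms
```

Real tools add concurrency, ramp-up profiles and resource monitoring on top of this basic sample-and-summarize loop.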

Benefits and risks of tools in software testing

A good software testing tool can potentially (benefits):

1. Reduce time and effort for repetitive work
2. Provide more predictable and consistent results
3. Access and present accurate test management information
4. Ensure reports or findings are assessed objectively.

There are also risks involved in using tools, such as

1. Underestimating the time, cost and effort when first introducing a tool
2. Expecting too much from the tool
3. Underestimating the time and effort needed to derive benefits from the tool
4. Over-reliance on the tool
5. Underestimating the effort required to maintain the test assets generated by the tool
6. Interoperability issues between tools
7. Underestimating the skill needed to create good tests with a given type of tool

Some of the test tools are,

1. Test execution tools
2. Performance testing tools
3. Static analysis tools
4. Test management tools
Scripts used by testing tools include the following (1 and 2 are advanced-level scripts):

1. Data-driven scripts – a control script reads test data stored in a file/spreadsheet
2. Keyword-driven scripts – the file/spreadsheet holds keywords describing the actions to
be taken as well as the test data; a control script interprets the keywords and invokes
the corresponding actions
3. Linear scripts – capture manual test actions and then replay them.
4. Shared scripts – reusable scripts called by other scripts.
5. Structured scripts – use selection and iteration programming structures.
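
A data-driven script in miniature: the control script below reads test data rows (inlined here instead of a real spreadsheet file) and applies the same steps to each row. The function under test (`to_upper`) is invented for illustration:

```python
import csv
import io

def to_upper(s: str) -> str:
    """Function under test (hypothetical)."""
    return s.upper()

# Test data that would normally live in a separate file/spreadsheet.
DATA = "input,expected\nabc,ABC\nHello,HELLO\n"

def run_data_driven(data: str) -> list[str]:
    """Control script: one test execution per data row."""
    results = []
    for row in csv.DictReader(io.StringIO(data)):
        actual = to_upper(row["input"])
        results.append("PASS" if actual == row["expected"] else "FAIL")
    return results

print(run_data_driven(DATA))  # ['PASS', 'PASS']
```

Adding new test cases then means adding rows to the data file, not writing new code, which is the main attraction of the data-driven approach.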

Introducing a tool to an organization

Before purchasing a tool, its advantages and disadvantages need to be analyzed.

Before buying a tool, things to keep in mind are,

1. The maturity of the internal test process
2. Availability of alternative solutions
3. Constraints faced and requirements specified
4. Availability of tools that meet these requirements
5. A detailed evaluation/proof of concept
6. Identifying and planning the internal implementation
7. Cost of the selected tool and estimation of ROI (return on investment).

To evaluate the organization's maturity for deployment of a tool, you can use,

1. The Test Process Improvement (TPI) model
2. The Capability Maturity Model Integration (CMMI) model

A pilot project should be conducted to try out the tool, to help

1. Gain knowledge
2. Assess compatibility
3. Decide on process modifications
4. Decide on how to ensure people make optimum use of the tool
5. Evaluate the benefits of the tool
6. Determine other details for using the tool

After buying the tool, to implement it, one should

1. Adopt an incremental approach
2. Adapt the tool to existing processes
3. Use the pilot project data to create guidelines for working with the tool.
4. Train employees
5. Create a database of lessons learned with the new tool

Foundation Level professionals should be able to:

 Use a common language for efficient and effective communication with other testers and project
stakeholders.
 Understand established testing concepts, the fundamental test process, test approaches, and
principles to support test objectives.
 Design and prioritize tests by using established techniques; analyze both functional and non-
functional specifications (such as performance and usability) at all test levels for systems with a low
to medium level of complexity.
 Execute tests according to agreed test plans, and analyze and report on the results of tests.
 Write clear and understandable incident reports.
 Effectively participate in reviews of small to medium-sized projects.
 Be familiar with different types of testing tools and their uses; assist in the selection and
implementation process.
