
Manual Testing Guide

Software Testing: Testing is the process of executing a program with the intent of finding errors.

Software testing comprises two types:


1) Manual Testing
2) Automation Testing
Manual Testing:
Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user and use most of the application's features to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases.

Drawbacks of Manual Testing


(i) Time consuming
(ii) More resources required
(iii) Human errors
(iv) Repetition of the task
(v) Tiredness
(vi) Simultaneous (parallel) executions are not possible

Software Engineering: Software Engineering is the establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently on real machines.

Software engineering is based on computer science, management science, economics, communication skills and an engineering approach.

What should be done during testing?

Confirming that the product:
Has been developed according to specifications
Works perfectly
Satisfies customer requirements
Why should we do testing?
Error free superior product
Quality Assurance to the client
Competitive advantage
Cut down costs
How to test?
Testing can be done in the following ways:
Manually
Automation (by using tools like WinRunner, LoadRunner, QTP)
Combination of Manual and Automation.
Software Project: An individual or collaborative enterprise planned and designed to achieve an aim.

The phases Information Gathering, Requirements Analysis, Design, Coding, Testing and Maintenance together constitute a project.

Software Development Phases:


Information Gathering: It encompasses requirements gathering at the strategic business level.

Planning: To provide a framework that enables the management to make reasonable estimates of resources, cost, schedules and size.
Requirements Analysis: Data, Functional and Behavioral requirements are identified.

Data Modeling: Defines data objects, attributes, and relationships.


Functional Modeling: Indicates how data are transformed in the system.
Behavioral Modeling: Depicts the impact of events.
Design: Design is the engineering representation of product that is to be built.

Data Design: Transforms the information domain model into the data structures that will be
required to implement the software.
Architectural design: Relationship between major structural elements of the software. Represents
the structure of data and program components that are required to build a computer based
system.
Interface design: Creates an effective communication medium between a human and a computer.
Component level Design: Transforms structural elements of the software architecture into a
procedural description of software components.
Coding: Translation into source code (Machine readable form)
Testing: Testing is a process of executing a program with the intent of finding errors.
Unit Testing: It concentrates on each unit (Module, Component) of the software as
implemented in source code.
Integration Testing: Putting the modules together and construction of software architecture.
System and Functional Testing: The product, together with other system elements, is validated and tested as a whole.
User Acceptance Testing: Testing by the user to collect feedback.
Maintenance: Change associated with error correction, adaptation and enhancements.
Correction: Changes software to correct defects.

Adaptation: Modification to the software to accommodate changes to its external environment.


Enhancement: Extends the software beyond its original functional requirements.
Prevention: Changes software so that it can be more easily corrected, adapted and enhanced.
Business Requirements Specification (BRS): Consists of definitions of customer requirements. Also called CRS/URS (Customer Requirements Specification / User Requirements Specification).

Software Requirements Specification (S/wRS): Consists of the functional requirements to develop and the system requirements (s/w & h/w) to use.

Review: A verification method to estimate completeness and correctness of documents.

High Level Design Document (HLDD): Consists of the overall hierarchy of the system in terms
of modules.

Low Level Design Document (LLDD): Consists of every sub module in terms of structural logic (ERD) and backend logic (DFD).

Prototype: A sample model of an application without functionality (screens only) is called a prototype.

White Box Testing: A coding-level testing technique to verify completeness and correctness of the programs with respect to design. Also called Glass Box or Clear Box Testing.

Black Box Testing: An executable-level (.exe) testing technique to validate the functionality of an application with respect to customer requirements. During this test, the engineer validates internal processing based on the external interface.

Grey Box Testing: Combination of white box and black box testing.

Build: An executable (.exe) form of the integrated module set is called a build.

Verification: Are we building the system right?

Validation: Are we building the right system?

Software Quality Assurance (SQA): SQA involves monitoring and measuring the strength of the development process.
Ex: LCT (Life Cycle Testing)

Quality:
Meet customer requirements
Meet customer expectations (cost to use, speed in process or performance, security)
Possible cost
Time to market
For developing quality software we need LCD (Life Cycle Development) and LCT (Life Cycle Testing).

LCD: Development proceeds through multiple stages, and every stage is verified for completeness.

V model:

Build: When coding-level testing is over and the modules are completely integration tested, the resulting executable (.exe) is called a build. A build is produced after integration testing.

Test Management: Testers maintain documents related to every project. They refer to these documents for future modifications.

Port Testing: This is to test the installation process.

Change Request: The request made by the customer to modify the software.

Defect Removal Efficiency:

DRE = a / (a + b)

a = total number of defects found by testers during testing.
b = total number of defects found by the customer during maintenance.

DRE is also called DD (Defect Deficiency).
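
As a quick illustration, here is the formula in a small Python sketch (the defect counts are hypothetical):

def defect_removal_efficiency(a, b):
    # DRE = a / (a + b)
    # a = defects found by testers during testing
    # b = defects found by the customer during maintenance
    return a / (a + b)

# Example: testers found 90 defects, the customer later found 10.
# DRE = 90 / (90 + 10) = 0.9, i.e. testing removed 90% of all defects.
print(defect_removal_efficiency(90, 10))  # 0.9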

BBT, UAT and the test management process are where the independent testers or testing team are involved.

Refinement form of V-Model: From a cost and time point of view, the v-model is not applicable to small-scale and medium-scale companies. These organizations maintain a refinement form of the v-model.

Fig: Refinement Form of V-Model

Development starts with information gathering. After the requirements gathering, the BRS/CRS/URS is prepared. This is done by the Business Analyst.

During the requirements analysis, all the requirements are analyzed. At the end of this phase the S/wRS is prepared; it consists of the functional (customer) requirements plus the system requirements (h/w + s/w). It is prepared by the System Analyst.

During the design phase two types of designs are done. HLDD and LLDD. Tech Leads will be
involved.

During the coding phase programs are developed by programmers.

During unit testing, they conduct program level testing with the help of WBT techniques.

During the Integration Testing, the testers and programmers (or test programmers) integrate the modules to test them with respect to the HLDD.

During the system and functional testing, the actual testers are involved and conduct tests based on the S/wRS.

During the UAT, customer-site people are also involved, and they perform tests based on the BRS.

As the above model shows, small-scale and medium-scale organizations also conduct life cycle testing, but they maintain a separate team for functional and system testing.

Reviews during Analysis:

After completion of information gathering and analysis, a review meeting is conducted in which the Quality Analyst decides on the following 5 factors:

Are they complete?
Are they correct? (Are they the right requirements?)
Are they achievable?
Are they reasonable? (with respect to cost & time)
Are they testable?
Reviews during Design:
After the completion of the analysis of customer requirements and their reviews, technical support people (Tech Leads) concentrate on the logical design of the system. In this stage they develop the HLDD and LLDD.

After the completion of the above design documents, they (the tech leads) review the documents for correctness and completeness. In this review they can apply the factors below.

Is the design good? (understandable and easy to refer to)
Is it complete? (are all the customer requirements satisfied?)
Is it correct? (is the design flow correct?)
Is it followable? (is the design logic correct?)
Does it handle errors? (the design should specify the negative flow as well as the positive flow)

Unit Testing:

After the completion of design and design reviews, programmers concentrate on coding. During this stage they conduct program-level testing with the help of WBT techniques. WBT is also known as glass box or clear box testing.

WBT is based on the code. Senior programmers conduct the testing on programs; WBT is applied at the module level.

There are two types of WBT techniques:

1. Execution Testing
Basis path coverage (correctness of every statement's execution)
Loops coverage (correctness of loop termination)
Program technique coverage (fewer memory cycles and CPU cycles during execution)

2. Operations Testing: Whether the software runs under the customer-expected environment platforms (OS, compilers, browsers and other system s/w).
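
As an illustration of loops coverage, a minimal Python sketch (the function under test is hypothetical): the loop is exercised with zero, one and many iterations to verify correct termination.

import unittest

def sum_first_n(numbers, n):
    # Hypothetical module under test: sums the first n items of a list.
    total = 0
    for value in numbers[:n]:
        total += value
    return total

class LoopCoverageTest(unittest.TestCase):
    # Loops coverage: zero, one and many iterations.
    def test_zero_iterations(self):
        self.assertEqual(sum_first_n([1, 2, 3], 0), 0)

    def test_one_iteration(self):
        self.assertEqual(sum_first_n([1, 2, 3], 1), 1)

    def test_many_iterations(self):
        self.assertEqual(sum_first_n([1, 2, 3], 3), 6)

if __name__ == "__main__":
    unittest.main()
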
Integration Testing: After the completion of unit testing, development people concentrate on integration testing once the dependent modules complete unit testing. During this test, programmers verify the integration of modules with respect to the HLDD (which contains the hierarchy of modules).

There are two types of approaches to conduct Integration Testing:


Top-down Approach
Bottom-up approach.
Stub: A called program. It sends control back to the main module instead of a sub module (used in the top-down approach).
Driver: A calling program. It invokes a sub module instead of the main module (used in the bottom-up approach).

Bottom-Up: This approach starts testing from the lower-level modules; drivers are used to connect the sub modules. (Ex: for login, create a driver to supply a default uid and pwd.)
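
A minimal Python sketch of both ideas, using the login example (the modules here are hypothetical):

# Bottom-up: a driver calls the sub module under test in place of the main module.
def login(uid, pwd):
    # Sub module under test.
    return uid == "admin" and pwd == "secret"

def login_driver():
    # Driver feeding the default uid/pwd into the sub module.
    assert login("admin", "secret") is True
    assert login("guest", "wrong") is False

# Top-down: a stub stands in for a sub module that is not ready yet
# and simply returns control to the main module.
def report_stub(period):
    return "OK"  # canned response instead of real report generation

def main_module():
    # Main module under test calls the stub where the real sub module would be.
    return report_stub("monthly")

login_driver()
assert main_module() == "OK"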

Sandwich: This approach combines the top-down and bottom-up approaches of integration testing. The middle-level modules are tested using drivers and stubs.

System Testing:
Conducted by separate testing team
Follows Black Box testing techniques
Depends on S/wRS
Build-level testing to validate internal processing based on the external interface

After the completion of coding and the code-level tests (unit & integration), the development team releases a finally integrated set of all modules as a build. After receiving a stable build from the development team, the separate testing team concentrates on functional and system testing with the help of BBT.

This testing is classified into 4 divisions:

Usability Testing (Ease to use or not. Low level Priority in Testing)


Functional Testing (Functionality is correct or not. Medium Priority in Testing)
Performance Testing (Speed of Processing. Medium Priority in Testing)
Security Testing (To break the security of the system. High Priority in Testing)

Usability and Functional testing are called core testing, and the Performance and Security testing techniques are called advanced testing.

Usability Testing is static testing; Functional Testing is dynamic testing.

From the testers point of view functional and usability tests are important.

Usability Testing: User friendliness of the application or build. (WYSIWYG.)


Usability testing consists of following subtests also.

User Interface Testing

Ease of Use ( understandable to end users to operate )

Look & Feel ( Pleasantness or attractiveness of screens )

Speed in interface ( Less no. of events to complete a task.)

Manual Support Testing: In general, technical writers prepare user manuals after the completion of all possible test execution and the resulting modifications. Nowadays, help documentation is released along with the main application.

Help documentation is also called the user manual. Strictly, user manuals are prepared after the completion of all other system test techniques and after resolving all the bugs.

Functional testing: During this stage of testing, the testing team concentrates on "meet customer requirements": whether the system performs the functionality for which it was developed.

For every project, functionality testing is the most important. Most of the testing tools available in the market are of this type.

The functional testing consists of the following subtests:

Figure: System Testing -> (80%) Functional Testing -> (80%) Functionality / Requirements Testing

Functionality or Requirements Testing: During this subtest, the test engineer validates the correctness of every functionality in the application build through the coverages below. If there is too little time for full system testing, teams do functionality testing only. Functionality or Requirements Testing has the following coverages:

Behavioral Coverage ( Object Properties Checking ).


Input Domain Coverage ( Correctness of Size and Type of every i/p Object ).
Error Handling Coverage ( Preventing negative navigation ).
Calculations Coverage ( correctness of o/p values ).
Backend Coverage ( Data Validation & Data Integrity of database tables ).
URLs Coverage (Links execution in web pages)
Service Levels ( Order of functionality or services ).

Successful Functionality ( Combination of above all ).

All of the above coverages are mandatory.

Input Domain Testing: During this test, the test engineer validates the size and type of every input object. In this coverage, the test engineer prepares boundary values and equivalence classes for every input object.

Ex: A login process takes a user id and password. The user id allows alphanumerics 4-16 characters long; the password allows alphabets 4-8 characters long.

Boundary Value Analysis: Boundary values are used for testing the size and range of an object.

Equivalence Class Partitioning: Equivalence classes are used for testing the type of the object.
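
A minimal Python sketch for the login example above; the sample values are illustrative only.

def boundary_values(lo, hi):
    # BVA: test just below, at, and just above each boundary.
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

print(boundary_values(4, 16))  # user id lengths to test: [3, 4, 5, 15, 16, 17]
print(boundary_values(4, 8))   # password lengths to test: [3, 4, 5, 7, 8, 9]

# ECP: valid and invalid classes for the *type* of each field.
user_id_classes = {"valid": ["abc1", "user2024"], "invalid": ["ab@#", "    "]}
password_classes = {"valid": ["abcd", "password"], "invalid": ["pass1234", "12%$"]}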

Recovery Testing: This test is also known as reliability testing. During this test, test engineers validate whether the application build can recover from abnormal situations or not.

Ex: power failure during processing, network disconnection, server down, database disconnected, etc.

Recovery Testing is an extension of Error Handling Testing.
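
A minimal sketch of the idea, assuming a hypothetical fetch_orders() that should survive one dropped database connection by retrying:

from unittest import mock

class Database:
    def query(self):
        return ["order-1", "order-2"]

def fetch_orders(db, retries=1):
    # Recover from an abnormal situation (connection loss) by retrying once.
    for attempt in range(retries + 1):
        try:
            return db.query()
        except ConnectionError:
            if attempt == retries:
                raise

db = Database()
# Simulate "server down" on the first call, then a normal response.
db.query = mock.Mock(side_effect=[ConnectionError("server down"), ["order-1"]])
assert fetch_orders(db) == ["order-1"]  # the build recovered from the failure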


Compatibility Testing: This test is also known as portability testing. During this test, the test engineer validates the continuity of the application's execution on the customer-expected platforms (OS, compilers, browsers, etc.).

During compatibility testing two types of problems arise:
Forward compatibility
Backward compatibility

Forward compatibility: The application is ready to run, but the technology or environment (e.g., the OS) does not yet support it.

Backward compatibility: The application is not ready to run on the (older) technology or environment.

Configuration Testing: This test is also known as hardware compatibility testing. During this test, the test engineer validates whether the application build supports different hardware devices or not.
Inter Systems Testing: This test is also known as end-to-end testing. During this test, the test engineer validates whether the application build can coexist with other existing software at the customer site to share resources (h/w or s/w).
Installation Testing: Testing the applications, installation process in customer specified
environment and conditions.

The following conditions or tests are done during the installation process:

Setup program: Whether setup starts or not
Easy interface: Whether installation provides an easy interface or not
Occupied disk space: How much disk space the application occupies after installation
Sanitation Testing: This test is also known as garbage testing. During this test, the test engineer finds extra features in the application build with respect to the S/wRS. Most testers rarely encounter this type of problem.

Parallel or Comparative Testing: During this test, the test engineer compares the application build with similar applications, or with older versions of the same application, to find its competitiveness.

This comparative testing can be done in two views:

Similar types of applications in the market
The upgraded version of the application versus its older versions
Performance Testing: It is an advanced testing technique and expensive to apply. During this test, the testing team concentrates on the speed of processing.

The performance test is classified into the subtests below:

Load Testing
Stress Testing
Data Volume Testing
Storage Testing
Load Testing:
This test is also known as scalability testing. During this test, the test engineer executes the application under the customer-expected configuration and load to estimate performance.

Load: the number of users trying to access the system at a time.

This test can be done in two ways:
1. Manual testing
2. By using a tool such as LoadRunner
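
For the manual option, a minimal home-grown load-test sketch in Python (the URL and user count are hypothetical): N concurrent users hit one endpoint and the response times are summarised.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/login"  # hypothetical application under test
USERS = 100                          # customer-expected load

def one_user(_):
    start = time.time()
    urlopen(URL).read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    timings = list(pool.map(one_user, range(USERS)))

print(f"avg response: {sum(timings) / len(timings):.3f}s, worst: {max(timings):.3f}s")
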
Stress Testing:
During this test, the test engineer executes the application build under the customer-expected configuration and peak load to estimate performance.

Data Volume Testing:
A tester conducts this test to find the maximum size of data that the application build can accept and maintain.

Storage Testing:
Executing the application under huge amounts of resources, to estimate the storage limitations the application can handle, is called storage testing.

Security Testing: It is also an advanced testing technique and complex to apply. Conducting this test requires highly skilled persons with security domain knowledge.

This test is divided into three subtests:

Authorization: Verifies the user's identity to check whether he or she is an authorized user.

Access Control: Also called privileges testing; verifies the rights given to a user to perform a system task.

Encryption / Decryption:
Encryption - converting actual data into a secret code that may not be understandable to others.
Decryption - converting the secret code back into actual data.
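
A minimal encryption/decryption sketch using the third-party cryptography package (pip install cryptography); key handling is deliberately simplified here.

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # secret key; in practice it must be stored securely
cipher = Fernet(key)

token = cipher.encrypt(b"card=4111-1111")  # actual data -> secret code
print(token)                  # unreadable without the key
print(cipher.decrypt(token))  # secret code -> actual data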

User Acceptance Testing: After the completion of all possible system test execution, the organization concentrates on user acceptance testing to collect feedback. There are two approaches to conducting user acceptance tests: the Alpha test and the Beta test.

Note: Software development efforts are of two types: software applications (also called projects) and products.

Software Application (Project): Requirements are obtained from a specific client and the software is developed for that one company only; there is a specific customer. For this, an Alpha test is done.

Product: Requirements are gathered from the market and the software may be used by more than one company; there is no specific customer. For this, a Beta version (trial version) is released in the market for the Beta test.

Testing during Maintenance:

After the completion of UA testing, the organization forms a Release Team (RT). This team conducts port testing at the customer site to estimate the completeness and correctness of the application installation.

During this Port testing Release team validate below factors in customer site:

Compact Installation (Fully correctly installed or not)


On screen displays
Overall Functionality
Input device handling
Output device handling
Secondary Storage Handling
OS Error handling
Co-existence with other Software
The above tests are done by the release team. After the completion of the above testing, the release team gives training and application support at the customer site for a period.

While the customer-site people use the application, they send Change Requests (CRs) to the company. Based on its type, a CR is one of two kinds:
Enhancement
Missed defect

Testing Stages Vs Roles:

Reviews in Analysis - Business Analyst / Functional Lead
Reviews in Design - Technical Support / Technical Lead
Unit Testing - Senior Programmer
Integration Testing - Developer / Test Engineer
Functional & System Testing - Test Engineer
User Acceptance Testing - Customer-site people with involvement of the testing team
Port Testing - Release Team
Testing during Maintenance / Test Software Changes - Change Control Board

Testing Team:

Per the refinement form of the V-Model, small-scale and medium-scale companies maintain a separate testing team for some of the stages in LCT. In these teams the organisation maintains the roles below:

Quality Control: Defines the objectives of testing
Quality Assurance: Defines the approach (done with the Test Manager)
Test Manager: Schedules and plans that approach
Test Lead: Maintains the testing team with respect to the test plan
Test Engineer: Conducts testing to find defects

Testing Terminology:

Monkey / Chimpanzee Testing: Covering only the main activities of the application during testing is called monkey testing. (Takes less time.)

Guerrilla Testing: Covering a single functionality with multiple possibilities is called a guerrilla ride or guerrilla testing. (No rules or regulations for testing an issue.)

Exploratory Testing: Level-by-level coverage of the activities in the application during testing is called exploratory testing. (Covering the main activities first and other activities next.)
Sanity Testing: This test is also known as the Tester Acceptance Test (TAT). It checks whether the build delivered by the development team is stable enough for complete testing.

Smoke Testing: An extra shakeup in sanity testing is called smoke testing. The testing team rejects a build back to the development team, with reasons, before starting testing.

Bebugging: The development team releases a build with known bugs to the testing team.

Bigbang Testing: A single stage of testing after the completion of all modules' development is called bigbang testing. It is also known as informal testing.

Incremental Testing: A multi-stage testing process is called incremental testing. It is also known as formal testing.

Static Testing: Conducting a test without running the application is called static testing.
Ex: User Interface Testing

Dynamic Testing: Conducting a test by running the application is called dynamic testing.
Ex: Functional Testing, Load Testing, Compatibility Testing

Manual Vs Automation: When a tester conducts a test on an application without using any third-party testing tool, the process is called manual testing. When a tester conducts a test with the help of a software testing tool, the process is called automation.

Need for Automation:

When tools are not available, teams do manual testing only. If the company already has testing tools, they may follow automation. To verify the need for automation, two factors are considered:

Impact: indicates test repetition.
Criticality: indicates that the test is complex to apply manually (ex: load testing for 1000 users).

Retesting: Re-execution of the application to conduct the same test with multiple test data is called retesting.

Regression Testing: Re-execution of tests on a modified build, to ensure that the bug fix works and that no side effects occur, is called regression testing. Any dependent modules may also show side effects.

Selection of Automation: Before one separate testing team starts project-level testing, the corresponding project manager, test manager or quality analyst determines the need for test automation for that project based on the factors below.

Type of external interface:
GUI -> Automation
CUI -> Manual

Size of external interface:
Large -> Automation
Small -> Manual

Expected number of releases:
Several releases -> Automation
Few releases -> Manual

Maturity between expected releases:
More maturity -> Manual
Less maturity -> Automation

Tester efficiency:
Test engineers know the automation tools -> Automation
No knowledge of automation tools -> Manual

Support from senior management:
Management accepts -> Automation
Management rejects -> Manual

Testing Policy: It is a company-level document developed by the QC people. This document defines the testing objectives for developing quality software.
Test Strategy:
Scope & Objective: Definition, need and purpose of testing in the organization.
Business Issues: Budget control for testing.
Test Approach: Defines the mapping between development stages and testing factors.

TRM: The Test Responsibility Matrix (or Test Matrix) defines the mapping between test factors and development stages.
Test Environment Specifications: Required test documents developed by the testing team during testing.
Roles and Responsibilities: Defines the names of the jobs in the testing team with the required responsibilities.
Communication & Status Reporting: Required negotiation between two consecutive roles in testing.
Testing Measurements and Metrics: To estimate work completion in terms of quality assessment and test management process capability.
Test Automation: Possibilities for test automation with respect to the project requirements and the testing facilities / tools available (either complete or selective automation).
Defect Tracking System: Required negotiation between the development and testing teams to fix and resolve defects.
Change and Configuration Management: Required strategies to handle change requests from the customer site.
Risk Analysis and Mitigations: Analysis of common problems that may appear during testing and possible solutions to recover.
Training Plan: The training needed before testing can start, be conducted and applied.

Test Factor: A test factor defines a testing issue. There are 15 common test factors in s/w testing. The factors cascade through the roles as in the examples below:

QC -> Quality
PM/QA/TM -> Test Factor
TL -> Testing Techniques
TE -> Test Cases

Ex 1:
PM/QA/TM -> Ease of use
TL -> UI testing
TE -> MS 6 rules

Ex 2:
PM/QA/TM -> Portable
TL -> Compatibility Testing
TE -> Run on different OSs

Test Factors:
Authorization: Validation of users to connect to application
Security Testing
Functionality / Requirements Testing
Access Control: Permission to valid user to use specific service
Security Testing
Functionality / Requirements Testing
Audit Trail: Maintains metadata about operations
Error Handling Testing
Functionality / Requirements Testing
Correctness: Meet customer requirements in terms of functionality
All black box Testing Techniques
Continuity in Processing: Inter process communication
Execution Testing

Operations Testing
Coupling: Co existence with other application in customer site
Inter Systems Testing
Ease of Use: User friendliness
User Interface Testing
Manual Support Testing
Ease of Operation: Ease in operations
Installation testing
File Integrity: Creation of internal files or backup files
Recovery Testing
Functionality / Requirements Testing
Reliability: Whether the application recovers from abnormal situations; whether backup files are used or not
Recovery Testing
Stress Testing
Portable: Run on customer expected platforms
Compatibility Testing
Configuration Testing
Performance: Speed of processing
Load Testing
Stress Testing
Data Volume Testing
Storage Testing
Service Levels: Order of functionalities
Stress Testing
Functionality / Requirements Testing
Methodology: Whether a standard methodology is followed during testing
Compliance Testing
Maintainable: Whether the application is serviceable for customers over the long term or not
Compliance Testing (mapping between quality factors and testing)

Quality Gap: A conceptual gap between Quality Factors and Testing process is called as Quality
Gap.

Test Methodology: The test strategy defines the overall approach. To convert the overall approach into the corresponding project-level approach, the quality analyst / PM defines a test methodology.
Step 1: Collect the test strategy.
Step 2: Identify the project type: Traditional, Off-the-Shelf, or Maintenance. (A table maps each project type to the development stages it covers: information gathering & analysis, design, coding, system testing, maintenance.)

Step 3: Determine the application type: Depending on the application type and requirements, the QA decreases the number of columns in the TRM.
Step 4: Identify risks: Depending on the tactical risks, the QA decreases the number of factors (rows) in the TRM.
Step 5: Determine the scope of the application: Depending on future requirements / enhancements, the QA may add back some of the deleted factors (rows in the TRM).
Step 6: Finalize the TRM for the current project.
Step 7: Prepare the test plan for work allocation.

PET (Process Experts Tools and Technology): An advanced testing process developed by HCL, Chennai. This process is approved by the QA forum of India. It is a refinement form of the V-Model.

Test Planning: After the completion of test initiation, the test plan author concentrates on test plan writing to define what to test, how to test, when to test and who is to test.

What to test - Development Plan
How to test - S/wRS
When to test - Design Documents
Who is to test - Team Formation

1. Team Formation

In general, the test planning process starts with testing team formation, which depends on the factors below:

Availability of testers
Test duration
Availability of test environment resources

The above three are dependent factors.

Test Duration:
Common market test durations for various types of projects:

C/S, Web, ERP projects (SAP, VB, Java) - Small - 3-5 months
System software (C, C++) - Medium - 7-9 months
Machine-critical software (Prolog, LISP) - Big - 12-15 months

System Software Projects: Network, embedded, compilers
Machine Critical Software: Robotics, games, knowledge bases, satellite, air traffic

2. Identify Tactical Risks

After the completion of team formation, the test plan author concentrates on risk analysis and mitigations.

1) Lack of knowledge of the domain
2) Lack of budget
3) Lack of resources (h/w or tools)
4) Lack of test data (amount)
5) Delays in deliveries (server down)
6) Lack of development process rigor
7) Lack of communication (ego problems)

3. Prepare Test Plan

Format:

1) Test plan id: Unique number or name
2) Introduction: About the project
3) Test items: Modules
4) Features to be tested: The modules to be tested
5) Features not to be tested: Which ones and why not
6) Feature pass/fail criteria: When each of the above features passes or fails
7) Suspension criteria: Abnormal situations that may arise while testing the above features
8) Test environment specifications: Required documents to prepare during testing
9) Test environment: Required h/w and s/w
10) Testing tasks: The necessary tasks to do before starting testing
11) Approach: The list of testing techniques to apply
12) Staff and training needs: The names of the selected testing team
13) Responsibilities: Work allocation to the above selected members
14) Schedule: Dates and timings
15) Risks and mitigations: Common non-technical problems
16) Approvals: Signatures of the PM/QA and the test plan author

4. Review Test Plan

After the completion of test plan writing, the test plan author concentrates on reviewing the document for completeness and correctness. Selected testers are also involved in this review to give feedback. In this review meeting, the testing team conducts coverage analysis:

S/wRS-based coverage (what to test)
Risk-based coverage (from the risk analysis point of view)
TRM-based coverage (whether this plan covers all the tests given in the TRM)

Test Design:

After the completion of the test plan and any required training days, every selected test engineer concentrates on test design for his or her responsible modules. In this phase the test engineer prepares a list of testcases to conduct the defined testing on the responsible modules.

There are three basic methods to prepare testcases for core-level testing:

Business logic based testcase design
Input domain based testcase design
User interface based testcase design

Business logic based testcase design:

In general, test engineers write the list of testcases based on the usecases / functional specifications in the S/wRS. A usecase in the S/wRS defines how a user uses a specific functionality in the application.

To prepare testcases from usecases, the approach below can be followed:

Step 1: Collect the usecases of the responsible modules.
Step 2: Select a usecase and its dependencies (dependent & determinant).
Step 2-1: Identify the entry condition.
Step 2-2: Identify the input required.
Step 2-3: Identify the exit condition.
Step 2-4: Identify the output / outcome.
Step 2-5: Study the normal flow.
Step 2-6: Study the alternative flows and exceptions.
Step 3: Prepare the list of testcases based on the above study.
Step 4: Review the testcases for completeness and correctness.

TestCase Format:

After completing testcase selection for the responsible modules, the test engineer prepares an IEEE-format record for every test condition.

TestCase Id: Unique number or name
TestCase Name: Name of the test condition
Feature to be tested: Module / feature / service
TestSuit Id: Parent batch ids in which this case participates as a member
Priority: Importance of the testcase
P0 - basic functionality
P1 - general functionality (i/p domain, error handling)
P2 - cosmetic testcases
(Ex: P0 - OS; P1 - different OSs; P2 - look & feel)
Test Environment: Required h/w and s/w to execute the testcase
Test Effort: (person-hours) the time to execute this testcase (ex: 20 mins)
Test Duration: Date of execution
Test Setup: Necessary tasks to do before starting this case's execution
Test Procedure: Step-by-step procedure to execute this testcase
TestCase Pass/Fail Criteria: When the testcase passes and when it fails
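
A hypothetical filled-in example for the login module, following the format above:

TestCase Id: TC_LOGIN_001
TestCase Name: Verify login with a valid user id and password
Feature to be tested: Login
TestSuit Id: TS_LOGIN
Priority: P0
Test Environment: Windows, IE 6.0, application build 1.2
Test Effort: 20 mins
Test Duration: (date of execution)
Test Setup: A valid user account exists in the database
Test Procedure: 1. Open the login page 2. Enter a valid uid/pwd 3. Click Login
TestCase Pass/Fail Criteria: Pass if the home page opens; fail otherwise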

Input Domain based TestCase Design:

To prepare functionality and error-handling testcases, test engineers use the usecases or functional specifications in the S/wRS. To prepare input domain testcases, test engineers depend on the data model of the project (ERD & LLD).

Step 1: Identify the input attributes in terms of size, type and constraints (size - range; type - int, float; constraint - e.g. primary key).
Step 2: Identify the critical attributes in that list, which participate in data retrievals and manipulations.
Step 3: Identify the non-critical attributes, which are plain input/output fields.
Step 4: Prepare BVA & ECP for every attribute.

The BVA & ECP for all attributes are arranged in a data matrix with one row per input attribute:

Input Attribute - ECP (Type) Valid - ECP (Type) Invalid - BVA (Size/Range) Minimum - BVA (Size/Range) Maximum

Figure: Data Matrix
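
A hypothetical filled-in data matrix for the login example (user id: alphanumeric, 4-16 chars; password: alphabetic, 4-8 chars):

User Id - alphanumerics - specials, spaces - 4 - 16
Password - alphabets - digits, specials - 4 - 8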

User Interface based testcase design:

To conduct UI testing, the test engineer writes a list of testcases based on organization-level UI rules and global UI conventions. For preparing these UI testcases they do not study the S/wRS, LLDD, etc. (Functionality testcases source: S/wRS. Input domain testcases source: LLDD.)

Testcases applicable to all projects:

Testcase 1: Spelling check.
Testcase 2: Graphics check (alignment, font, style, text, size; Microsoft 6 rules).
Testcase 3: Are the error messages meaningful? (Related to error handling testing: not just whether a message appears, but whether it is easy to understand.)
Testcase 4: Accuracy of data displayed (WYSIWYG) (amount, date of birth).
Testcase 5: Accuracy of data in the database as a result of user input.
(TC4 is at the screen level; TC5 is at the database level.)
Testcase 6: Accuracy of data in the database as a result of external factors.
Testcase 7: Are the help messages meaningful? (The first 6 testcases are UI testing; testcase 7 is manual support testing.)

Review Testcases: After completing the testcase design, with the required (IEEE) documentation for the responsible modules, the testing team along with the test lead reviews the testcases for completeness and correctness. In this review the testing team conducts coverage analysis:

Business requirements based coverage
UseCases based coverage
Data model based coverage
User interface based coverage
TRM based coverage
Figure: Requirements Validation / Traceability Matrix - maps each business requirement, through its sources (usecases, data model), to the testcases that cover it.

Test Execution Levels Vs Test Cases:

Level 0 - P0 testcases
Level 1 - P0, P1 and P2 testcases as batches
Level 2 - Selected P0, P1 and P2 testcases with respect to modifications
Level 3 - Selected P0, P1 and P2 testcases on the final build

Test Harness = Test Environment + Test Bed

Build Version Control: A unique numbering system (builds are delivered via FTP or SMTP).

After defect reporting, the testing team may receive:

A modified build
Modified programs

To maintain the original builds and modified builds, the development team uses version control software.

Level 0 (Sanity / Smoke / TAT):

After receiving the initial build from the development team, the testing team installs it into the test environment. After the installation, the testing team checks the basic functionality of the build to decide whether complete test execution is possible.

During this testing, the testing team observes the factors below on the initial build.

Understandable: The functionality is understandable to the test engineer.
Operable: The build works without runtime errors in the test environment.
Observable: The tester can follow process completion and continuation in the build.
Controllable: Processes can be started and stopped explicitly.
Consistent: Stable navigation.
Maintainable: No reinstallations are needed.
Simplicity: Short navigation to complete a task.
Automatable: The interfaces support automation test script creation.

This level-0 testing is also called testability or octangle testing (because it is based on 8 factors).

Test Automation: After receiving a stable build from the development team, the testing team concentrates on test automation. Test automation is of two types: complete and selective.

Level-1 (Comprehensive Testing):

After receiving a stable build from the development team and completing automation, the testing team starts executing its testcases as batches. A test batch is also known as a test suite or test set. In every batch, the base state of one testcase is the end state of the previous testcase. During test batch execution, test engineers prepare a test log with three types of entries:

Passed: All expected values are equal to the actual values.
Failed: Some expected value differs from the actual value.
Blocked: The testcase could not be executed because the testcases it depends on failed.
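
A minimal sketch of such a test log in Python (the case names and notes are hypothetical):

from dataclasses import dataclass

@dataclass
class TestLogEntry:
    testcase_id: str
    status: str  # "Passed" | "Failed" | "Blocked"
    note: str = ""

test_log = [
    TestLogEntry("TC_LOGIN_001", "Passed"),
    TestLogEntry("TC_LOGIN_002", "Failed", "expected 'Welcome', actual 'Error 500'"),
    TestLogEntry("TC_PROFILE_001", "Blocked", "base state depends on TC_LOGIN_002"),
]

for entry in test_log:
    print(entry.testcase_id, entry.status, entry.note)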

Level-2 Regression Testing: Regression testing is actually part of Level-1 testing. During comprehensive test execution, the testing team reports mismatches to the development team as defects. After receiving a defect, the development team modifies the code to resolve the accepted defect. When they release the modified build, the testing team concentrates on regression testing before continuing the remaining comprehensive testing.

Severity: The seriousness of the defect, defined by the tester through its impact and criticality; it determines the importance of regression testing. Organizations typically use three severity levels: High, Medium and Low.

High: The tester cannot continue the remaining testing without this mismatch being resolved (show stopper).
Medium: The tester is able to continue testing, but the defect must be resolved.
Low: The defect may or may not be resolved.

Ex:

High: The database is not connecting.
Medium: The input domain is wrong (wrong values are also accepted).
Low: A spelling mistake.

If X, Y, Z are three dependent modules and a bug is found in Z:

Retesting Z and its dependent modules: High
Retesting the full Z module: Medium
Retesting part of the Z module: Low

Possible ways to do Regression Testing:

Case 1: If the development team resolved a bug of high severity, the testing team re-executes all P0, all P1 and carefully selected P2 testcases with respect to that modification.

Case 2: If the development team resolved a bug of medium severity, the testing team re-executes all P0, selected P1 (80-90%) and some P2 testcases with respect to that modification.

Case 3: If the development team resolved a bug of low severity, the testing team re-executes some of the P0, P1 and P2 testcases with respect to that modification.

Case 4: If the development team performs modifications due to project requirement changes, the testing team re-executes all P0 and selected testcases.

Severity vs Priority:

Severity: The seriousness of the defect; defined with respect to functionality; important from the project functionality point of view. Not all defects have the same severity.

Priority: The importance of the defect; defined with respect to the customer; important from the customer point of view. Not all defects have the same priority.

Defect Reporting and Tracking:

During comprehensive test execution, test engineers report mismatches to the development team as defect reports in IEEE format:

Defect Id: A unique number or name
Defect Description: Summary of the defect
Build Version Id: Parent build version number
Feature: Module / functionality
Testcase Name and Description: Failed testcase name with description
Reproducible: (Yes / No) If yes, attach the test procedure; if no, attach snapshots and strong reasons
Severity: High / Medium / Low
Priority
Status: New / Reopen (after 3 reopens, new programs are written)
Reported by: Name of the test engineer
Reported on: Date of submission
Suggested fix: Optional
Assigned to: Name of the PM
Fixed by: PM or team lead
Resolved by: Name of the developer
Resolved on: Date of solving
Resolution type:
Approved by: Signature of the PM

Defect Age: The time gap between 'resolved on' and 'reported on'.
Defect Submission:

Figure: Large Scale Organizations


Defect Submission:

Figure: Small Scale Organizations


Defect Status Cycle:

Bug Life Cycle:

Resolution Type:
There are 12 resolution types:
Duplicate: Rejected because the defect is the same as a previously reported defect.
Enhancement: Rejected because the defect relates to a future requirement of the customer.
H/w Limitation: Rejected because it arises from limitations of the hardware.
S/w Limitation: Rejected because of limitations of the s/w technology.
Functions as Designed: Rejected because the coding is correct with respect to the design documents.
Not Applicable: Rejected due to lack of correctness in the defect.
No Plan to Fix It: Postponed for now (neither accepted nor rejected).
Need More Information: The developers want more information to fix it (neither accepted nor rejected).
Not Reproducible: The developer wants more information because the problem is not reproducible (neither accepted nor rejected).
User Misunderstanding: Extra negotiation between tester and developer, each arguing the other is mistaken.
Fixed: The bug is opened to be resolved (accepted).
Fixed Indirectly: Deferred for resolution (accepted).

Types of Bugs:

UI bugs: (Low severity)


Spelling mistake: High Priority
Wrong alignment: Low Priority

Input Domain bugs: (Medium severity)


Object not taking Expected values: High Priority
Object taking Unexpected values: Low Priority

Error Handling bugs: (Medium severity)


Error message is not coming: High Priority
Error message is coming but not understandable: Low Priority

Calculation bugs: (High severity)


Intermediate Results Failure: High Priority
Final outputs are Wrong: Low Priority

Service Levels bugs: (High severity)


Deadlock: High Priority
Improper order of Services: Low Priority

Load condition bugs: (High severity)

Memory leakage under load: High Priority
Doesn't allow the customer-expected load: Low Priority

Hardware bugs: (High severity)


Printer not connecting: High Priority
Invalid printout: Low Priority

Boundary Related Bugs: (Medium Severity)

Id control bugs: (Medium severity) Wrong version no, Logo

Version Control bugs: (Medium severity) Difference between two consecutive versions

Source bugs: (Medium severity) Mismatch in help documents

Test Closure:
After completing all possible testcase execution, with the corresponding defect reporting and tracking, the test lead conducts a test execution closure review along with the test engineers.

In this review the test lead relies on coverage analysis:

BRS-based coverage
UseCases-based coverage (modules)
Data model based coverage (i/p and o/p)
UI-based coverage (rules and regulations)
TRM-based coverage (whether the PM-specified tests are covered or not)

Analysis of the deferred bugs: whether the deferred bugs are really postponable or not.

The testing team re-executes the high-priority testcases once again to confirm the correctness of the master build.

Final Regression Process:


Gather requirements
Effort estimation (Person/Hr)
Plan Regression
Execute Regression
Report Regression

User Acceptance Testing:

After completing the test execution closure review and the final regression, the organization concentrates on UAT to collect feedback from the customer or customer-site-like people. There are two approaches:

Alpha testing
Beta testing

SignOff:
After the completion of UAT and the consequent modifications, the test lead creates a Test Summary Report (TSR). It is part of the s/w release note. The TSR consists of:

Test Strategy / Methodology (what tests)
System Test Plan (schedule)
Traceability Matrix (mapping between requirements and testcases)
Automated Test Scripts (TSL + GUI map entries)
Final Bug Summary Report (Bug Id, Description)

Case Study (Schedule for 5 Months):

Deliverable - Responsibility - Completion Time
TestCase Selection - Test Engineer - 20-30 days
TestCase Review - Test Lead, Test Engineer - 4-5 days
RVM / RTM - Test Lead - 1 day
Sanity & Test Automation - Test Engineer - 20-30 days
Test Execution as Batches - Test Engineer - 40-60 days
Test Reporting - Test Engineer & Test Lead - Ongoing during test execution
Communication and Status Reporting - Everyone in the testing team - Twice weekly
Final Regression Testing & Closure Review - Test Engineer and Test Lead - 4-5 days
User Acceptance Testing - Customer-site people (with involvement of the testing team) - 5-10 days
Test Summary Report (Sign Off) - Test Lead - 1-2 days

Auditing:
During testing and maintenance, the testing team conducts audit meetings to estimate status and required improvements. In this auditing process they can use three types of measurements and metrics.

Quality Measurement Metrics:

These measurements are used by the QA or PM to estimate the achievement of quality in the current project's testing [monthly once].

Product Stability
Sufficiency:
Requirements Coverage
Type-Trigger Analysis (mapping between covered requirements and applied tests)
Defect Severity Distribution: organization trend limit check

Test Management Measurements:

These measurements are used by the test lead during the test execution of the current project [twice weekly].

Test Status:
Executed tests
In progress
Yet to execute

Delays in Delivery
Defect Arrival Rate
Defect Resolution Rate
Defect Aging
Test Effort: cost of finding a defect (ex: 4 defects / person-day)

Process Capability Measurements:

These measurements are used by the quality analyst and test management to improve the capability of the testing process for upcoming projects. (They depend on maintenance-level feedback from old projects.)

Test Efficiency:
Type-Trigger Analysis
Requirements Coverage

Defect Escapes:
Type-Phase Analysis (what types of defects the testing team missed, and in which phase of testing)

Test Effort: cost of finding a defect (ex: 4 defects / person-day)
This topic looks at static testing techniques. These techniques are referred to as "static" because the software is not executed; rather, the specifications, documentation and source code that comprise the software are examined in varying degrees of detail.

There are two basic types of static testing. One of these is people-based and the other is tool-based. People-based techniques are generally known as reviews, but there is a variety of different ways in which reviews can be performed. The tool-based techniques examine source code and are known as "static analysis". Both of these basic types are described in separate sections below.

What are Reviews?

Reviews is the generic name given to people-based static techniques. More or less any activity that involves one or more people examining something could be called a review. There is a variety of different ways in which reviews are carried out across different organisations, and in many cases within a single organisation. Some are very formal, some are very informal, and many lie somewhere between the two. The chances are that you have been involved in reviews of one form or another.

One person can perform a review of his or her own work or of someone else's work. However, it is generally recognised that reviews performed by only one person are not as effective as reviews conducted by a group of people all examining the same document (or whatever it is that is being reviewed).

Review techniques for individuals


Desk checking and proof reading are two techniques that can be used by individuals to review a
document such as a specification or a piece of source code. They are basically the same
processes: the reviewer double-checks the document or source code on their own. Data stepping
is a slightly different process for reviewing source code: the reviewer follows a set of data values
through the source code to ensure that the values are correct at each step of the processing.

Review techniques for groups


The static techniques that involve groups of people are generically referred to as reviews.
Reviews can vary a lot from very informal to highly formal, as will be discussed in more detail
shortly. Two examples of types of review are walkthroughs and Inspection. A walkthrough is a
form of review that is typically used to educate a group of people about a technical document.
Typically the author "walks" the group through the ideas to explain them and so that the
attendees understand the content. Inspection is the most formal of all the formal review
techniques. Its main focus during the process is to find faults, and it is the most effective review
technique in finding them (although the other types of review also find some faults). Inspection
is discussed in more detail below.
Reviews and the test process

Benefits of reviews
There are many benefits from reviews in general. They can improve software development
productivity and reduce development timescales. They can also reduce testing time and cost.
They can lead to lifetime cost reductions throughout the maintenance of a system over its useful
life. All this is achieved (where it is achieved) by finding and fixing faults in the products of
development phases before they are used in subsequent phases. In other words, reviews find
faults in specifications and other documents (including source code) which can then be fixed
before those specifications are used in the next phase of development.
Reviews generally reduce fault levels and lead to increased quality. This can also result in
improved customer relations.

Reviews are cost-effective

There are a number of published figures to substantiate the cost-effectiveness of reviews.


Freedman and Weinberg quote a ten times reduction in faults that come into testing with a 50%
to 80% reduction in testing cost. Yourdon in his book on Structured Walkthroughs found that
faults were reduced by a factor of ten. Gilb and Graham give a number of documented benefits
for software Inspection, including 25% reduction in schedules, a 28 times reduction in
maintenance cost, and finding 80% of defects in a single pass (with a mature Inspection process)
and 95% in multiple passes.

What can be Inspected?


Anything written down can be Inspected. Many people have the impression that Inspection
applies mainly to code (probably because Fagan's original article was on "Design and code
inspection"). However, although Inspection can be performed on code, it gives more value if it is
performed on more "upstream" documents in the software development process. It can be
applied to contracts, budgets, and even marketing material, as well as to policies, strategies,
business plans, user manuals, procedures and training material. Inspection also applies to all
types of system development documentation, such as requirements, feasibility studies and
designs. It is also very appropriate to apply to all types of test documentation such as test plans,
test designs and test cases. In fact even with Fagan's original method, it was found to be very
effective applied to testware.

What can be reviewed?


Anything that can be Inspected can also be reviewed, but reviews can apply to more things than
just those ideas that are written down. Reviews can be done on visions, strategic plans and "big
picture" ideas. Project progress can be reviewed to assess whether work is proceeding according
to the plans. A review is also the place where major decisions may be made, for example about
whether or not to develop a given feature.
Reviews and Inspections are complementary. Inspection excludes discussion and solution
optimising, but these activities are often very important. Any type of review that tries to combine
more than one objective tends not to work as well as those with a single focus. It works better to
use Inspection to find faults and to use reviews to discuss, come to a consensus and make
decisions.

What to review / Inspect?


Looking at the V life cycle diagram that was discussed in Session 2, reviews and Inspections apply to everything on the left-hand side of the V-model. Note that the reviews apply not only to
the products of development but also to the test documentation that is produced early in the life
cycle. We have found that reviewing the business needs alongside the Acceptance Tests works
really well. It clarifies issues that might otherwise have been overlooked. This is yet another way
to find faults as early as possible in the life cycle so that they can be removed at the least cost.

Costs of reviews
You cannot gain the benefits of reviews without investing in doing them, and this does have a
cost. As a rough guide, something between 5% and 15% of project effort would typically be
spent on reviews. If Inspections are being introduced into an organisation, then 15% is a
recommended guideline. Once the Inspection process is mature, this may go down to around 5%.
Note that 10% is half a day a week.
Remember that the cost of reviews always needs to be balanced against the cost of not doing
them, and finding the faults (which are already there) much later when it will be much more
expensive to fix them.
The costs of reviews are mainly in people's time, i.e. it is an effort cost, but the cost varies
depending on the type of review. The leader or moderator of the review may need to spend time
in planning the review (this would not be done for an informal review, but is required for
Inspection). The studying of the documents to be reviewed by each participant on their own is
normally the main cost (although in practice this may not be done as thoroughly as it should). If
a meeting is held, the cost is the length of the meeting times the number of people present. The
fixing of any faults found or the resolution of issues found may or may not be followed up by the
leader. In the more formal review techniques, metrics or statistics are recorded and analysed to
ensure the continued effectiveness and efficiency of the review process. Process improvement
should also be a part of any review process, so that lessons learned in a review can be folded
back into development and testing processes. (Inspection formally includes process
improvement; most other forms of review do not.)

Types of review
We have now established that reviews are an important part of software testing. Testers should be
involved in reviewing the development documents that tests are based on, and should also review
their own test documentation.
In this section, we will look at different types of reviews, and the activities that are done to a
greater or lesser extent in all of them. We will also look at the Inspection process in a bit more
detail, as it is the most effective of all review types.

Characteristics of different review types


Informal review
As its name implies, this is very much an ad hoc process. Normally it simply consists of
someone giving their document to someone else and asking them to look it over. A document
may be distributed to a number of people, and the author of the document would hope to receive
back some helpful comments. It is a very cheap form of review because there is no monitoring of metrics, no meeting and no follow-up. It is generally perceived to be useful and, compared to not doing any reviews at all, it is. However, it is probably the least effective form of review (although no one can prove that, since no measurements are ever taken!).

Technical review or Peer review


A technical review may have varying degrees of formality. This type of review does focus on
technical issues and technical documents. A peer review would exclude managers from the
review. The success of this type of review typically depends on the individuals involved - they
can be very effective and useful, but sometimes they are very wasteful (especially if the meetings
are not well disciplined), and can be rather subjective. Often this level of review will have some
documentation, even if just a list of issues raised. Sometimes metrics will be kept. This type of
review can find important faults, but can also be used to resolve difficult technical problems, for
example deciding on the best way to implement a design.

Decision-making review
This type of review is closely related to the previous one (in fact the syllabus does not
distinguish them). In this type of review, which may be technical or managerial, the focus is on
discussing the issues, coming to a consensus and making decisions, for example about whether a
given feature should be included in the next release or not.

Walkthrough
A walkthrough is typically led by the author of a document, for the purpose of educating the
participants about the content so that everyone understands the same thing. A walkthrough may
include "dry runs" of business scenarios to show how the system would handle certain specific
situations. For technical documents, it is often a peer group technique.

Inspection

An Inspection is the most formal of the formal review techniques. There are strict entry and exit criteria to the Inspection process; it is led by a trained Leader or moderator (not the author); and there are defined roles for searching for faults, based on defined rules and checklists. Metrics are a required part of the process.

Characteristics of reviews in general

Objectives and goals


The objectives and goals of reviews in general normally include the verification and validation of
documents against specifications and standards.
Some types of review also have an objective of achieving a consensus among the attendees (but not Inspection). Some types of review have process improvement as a goal (this is formally included in Inspection).

Activities
There are a number of activities that may take place for any review.
The planning stage is part of all except informal reviews.
In Inspection (and possibly other reviews), an overview or kickoff meeting is held to put
everyone "in the picture" about what is to be reviewed and how the review is to be conducted.
This pre-meeting may be a walkthrough in its own right.

The preparation or individual checking is usually where the greatest value is gained from a
review process. Each person spends time on the review document (and related documents),
becoming familiar with it and/or looking for faults. In some reviews, this part of the process is
optional (at least in practice). In Inspection it is required.

Most reviews include a meeting of the reviewers. Informal reviews probably do not, and
Inspection does not hold a meeting if it would not add economic value to the process. Sometimes
the meeting time is the only time people actually look at the document. Sometimes the meetings
run on for hours and discuss trivial issues. The best reviews (of any level of formality) ensure
that value is gained from the meeting.
The more formal review techniques include follow-up of the faults or issues found to ensure that
action has been taken on everything raised (Inspection does, as do some forms of technical or
peer review).

The more formal review techniques collect metrics on cost (time spent) and benefits achieved.
Roles and responsibilities

For any of the formal reviews (i.e. not informal reviews), there is someone responsible for the
review of a document (the individual review cycle). This may be the author of the document
(walkthrough) or an independent Leader or moderator (formal reviews and Inspection). The
responsibility of the Leader is to ensure that the review process works. He or she may distribute
documents, choose reviewers, mentor the reviewers, call and lead the meeting, perform follow-up and record relevant metrics.

The author of the document being reviewed or Inspected is generally included in the review,
although there are some variants that exclude the author. The author actually has the most to gain
from the review in terms of learning how to do their work better (if the review is conducted in
the right spirit!).
The reviewers or Inspectors are the people who bring the added value to the process by helping
the author to improve his or her document. In some types of review, individual checkers are
given specific types of fault to look for to make the process more effective.

Managers have an important role to play in reviews. Even if they are excluded from some types
of peer review, they can (and should) review management level documents with their peers. They
also need to understand the economics of reviews and the value that they bring. They need to
ensure that the reviews are done properly, i.e. that adequate time is allowed for reviews in project
schedules.
There may be other roles in addition to these, for example an organisation-wide co-ordinator who
would keep and monitor metrics, or someone to "own" the review process itself - this person
would be responsible for updating forms, checklists, etc.
Deliverables

The main deliverable from a review is the set of changes to the document that was reviewed; the author of the document normally makes these edits. For Inspection, the changes would be limited to
faults found as violations of accepted rules. In other types of review, the reviewers suggest
improvements to the document itself. Generally the author can either accept or reject the changes
suggested.

If the author does not have the authority to change a related document (e.g. if the review found
that a correct design conflicted with an incorrect requirement specification), then a change
request may be raised to change the other document(s).

For Inspection and possibly other types of review, process improvement suggestions are a
deliverable. This includes improvements to the review or Inspection process itself and also
improvements to the development process that produced the document just reviewed. (Note that
these are improvements to processes, not to reviewed documents.)
The final deliverable (for the more formal types of review, including Inspection) is the metrics
about the costs, faults found, and benefits achieved by the review or Inspection process.

Pitfalls
Reviews are not always successful. They are sometimes not very effective, so faults that could
have been found slip through the net. They are sometimes very inefficient, so that people feel
that they are wasting their time. Often insufficient thought has gone into the definition of the
review process itself - it just evolves over time.

One of the most common causes for poor quality in the review process is lack of training, and
this is more critical the more formal the review.
Another problem with reviews is having to deal with documents that are of poor quality. Entry
criteria to the review or Inspection process can ensure that reviewers' time is not wasted on
documents that are not worthy of the review effort.

A lack of management support is a frequent problem. If managers say that they want reviews to take place but don't allow any time in the schedules for them, this is only "lip service", not commitment to quality.
Long-term, it can be disheartening to become expert at detecting faults if the same faults keep on
being injected into all newly written documents. Process improvements are the key to long-term
effectiveness and efficiency.

Static analysis
What can static analysis do?

Static analysis is a form of automated testing. It can check for violations of standards and can find things that may or may not be faults. Static analysis is descended from compiler technology; in fact, many compilers may have static analysis facilities available for developers to use if they wish. There are also a number of stand-alone static analysis tools for various computer programming languages. Like a compiler, a static analysis tool analyses the code without executing it, and can alert the developer to various things such as unreachable code, undeclared variables, etc.
Static analysis tools can also compute various metrics about code such as cyclomatic complexity.
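As a minimal sketch (the function and variable names are invented for illustration), the small Python fragment below contains two of the problems just mentioned, both of which a static analysis tool could report without ever running the code:

def apply_discount(price):
    if price > 100:
        return price * 0.9
        print("large order")    # unreachable: this line comes after a return
    return price * rate         # 'rate' is never defined anywhere: an undeclared variable

A linter-style tool would flag both lines purely by analysing the source text.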

Data flow analysis:


Data flow analysis is the study of program variables. A variable is basically a location in the
computer's memory that has a name so that the programmer can refer to it more conveniently in
the source code. When a value is put into this location, we say that the variable is "defined".
When that value is accessed, we say that it is "used".

For example, in the statement "x = y + z", the variables y and z are used because the values that
they contain are being accessed and added together. The result of this addition is then put into the
memory location called x, so x is defined.
The significance of this is that static analysis tools can perform a number of simple checks. One
of these checks is to ensure that every variable is defined before it is used. If a variable is not
defined before it is used, the value that it contains may be different every time the program is
executed and in any case is unlikely to contain the correct value. This is an example of a data
flow fault. Another check that a static analysis tool can make is to ensure that every time a variable is defined it is used somewhere later on in the program. If it isn't, then why was it defined in the first place? This is known as a data flow anomaly, and although it can be perfectly harmless, it can also indicate that something more serious is wrong.
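To make both checks concrete, here is a small Python sketch (names invented for illustration) containing a use-before-definition fault and a define-without-use anomaly:

def sum_prices(prices):
    count = len(prices)       # 'count' is defined but never used afterwards: a data flow anomaly
    for p in prices:
        total = total + p     # 'total' is used here before it has ever been defined: a data flow fault
    return total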

Control flow analysis


Control flow analysis can find infinite loops, unreachable code, and many other suspicious constructs. However, not all of the things found are necessarily faults; defensive programming may result in code that is technically unreachable.
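As a small sketch of that last point (code invented for illustration), a control flow analyser would report the final return below as unreachable, even though a defensive programmer may have written it deliberately:

def sign_label(n):
    if n < 0:
        return "negative"
    else:
        return "non-negative"
    # Unreachable: both branches above return, so control can never
    # reach this line. Defensive programming sometimes adds such
    # fallbacks anyway, which is why not every finding is a fault.
    return "unknown"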

Cyclomatic complexity:

Cyclomatic complexity is related to the number of decisions in a program or control flow graph.
The easiest way to compute it is to count the number of decisions (diamond-shaped boxes) on a
control flow graph and add 1. Working from code, count the total number of IFs and loop constructs (DO, FOR, WHILE, REPEAT) and add 1. The cyclomatic complexity does reflect to
some extent how complex a code fragment is, but it is not the whole story.
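Applying that counting rule to a small invented Python fragment:

def grade(score, attempts):
    if score >= 90:               # decision 1
        result = "distinction"
    elif score >= 50:             # decision 2 (an ELIF counts as another IF)
        result = "pass"
    else:                         # ELSE adds no decision of its own
        result = "fail"
    for i in range(attempts):     # decision 3 (a loop construct)
        print(result)
    return result

Three decisions plus 1 gives a cyclomatic complexity of 4.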

Other static metrics:


Lines of code (LOC or KLOC for 1000s of LOC) is a measure of the size of a code module.
Counting operands and operators gives a very detailed set of measurements devised by Halstead, but these are not much used now. Fan-in is related to the number of modules that call (in to) a given module. Modules with
high fan-in are found at the bottom of hierarchies, or in libraries where they are frequently called.
Modules with high fan-out are typically at the top of hierarchies, because they call out to many
modules (e.g. the main menu). Any module with both high fan-in and high fan-out probably
needs re-designing.

Nesting levels relate to how deeply nested statements are within other IF statements. This is a
good metric to have in addition to cyclomatic complexity, since highly nested code is harder to
understand than linear code, but cyclomatic complexity does not distinguish them.
Other metrics include the number of function calls, and a number of metrics specific to object-oriented code.
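To see why nesting depth adds information, compare these two invented fragments: both contain three decisions, and therefore the same cyclomatic complexity of 4, yet the nested version is noticeably harder to follow:

def all_set_nested(a, b, c):
    if a:                  # nesting level 1
        if b:              # nesting level 2
            if c:          # nesting level 3
                return True
    return False

def all_set_linear(a, b, c):
    if not a:              # all three decisions sit at nesting level 1
        return False
    if not b:
        return False
    if not c:
        return False
    return True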

Limitations and advantages:


Static analysis has its limitations. It cannot distinguish "fail-safe" code from real faults or
anomalies, and may create a lot of spurious failure messages. Static analysis tools do not execute
the code, so they are not a substitute for dynamic testing, and they are not related to real
operating conditions.

However, static analysis tools can find faults that are difficult to see, and they give objective quality information about the code. We feel that all developers should use static analysis tools, since they can find faults very early, when they are very cheap to fix.
