
Why write a good bug report?

If your bug report is effective, the chances are higher that it will get fixed. So fixing a bug
depends on how effectively you report it. Reporting a bug is a skill, and I will
tell you how to master it.

“The point of writing a problem report (bug report) is to get bugs fixed.” – Cem
Kaner. If a tester does not report a bug correctly, the programmer will most likely reject it
as irreproducible. This can hurt the tester’s morale and sometimes ego too. (I suggest you
do not keep any kind of ego, such as “I have reported the bug correctly”, “I can reproduce
it”, “Why has he/she rejected the bug?”, “It’s not my fault”, etc.)

What are the qualities of a good software bug report?


Anyone can write a bug report, but not everyone can write an effective one. You
should be able to distinguish between an average bug report and a good one. How do you
tell a good bug report from a bad one? It’s simple: apply the following characteristics and
techniques when reporting a bug.

1) A clearly specified bug number:


Always assign a unique number to each bug report. This helps identify the bug
record. If you are using an automated bug-reporting tool, this unique number is
generated automatically each time you report a bug. Note the number and a brief
description of each bug you report.

2) Reproducible:
If your bug is not reproducible, it will never get fixed. You should clearly mention the
steps to reproduce the bug. Do not assume or skip any reproduction step. A bug described
step by step is easy to reproduce and fix.

3) Be Specific:
Do not write an essay about the problem. Be specific and to the point. Try to summarize
the problem in as few words as possible, yet effectively. Do not combine multiple problems
even if they seem similar. Write a separate report for each problem.

How to Report a Bug?

Use the following simple bug report template:


This is a simple bug report format. It may vary depending on the bug-reporting tool you are
using. If you are writing the bug report manually, then some fields, such as the bug number,
need to be assigned and mentioned manually.

Reporter: Your name and email address.

Product: The product in which you found this bug.

Version: The product version if any.

Component: These are the major sub modules of the product.

Platform: Mention the hardware platform on which you found this bug, e.g. ‘PC’, ‘Mac’,
‘HP’, ‘Sun’, etc.

Operating system: Mention all operating systems on which you found the bug, e.g. Windows,
Linux, Unix, SunOS or Mac OS. Also mention the OS versions if applicable, like Windows NT,
Windows 2000, Windows XP, etc.
Priority:
When should the bug be fixed? Priority is generally set from P1 to P5, with P1 meaning “fix the
bug with highest priority” and P5 meaning “fix when time permits”.

Severity:
This describes the impact of the bug.
Types of Severity:

• Blocker: No further testing work can be done.


• Critical: Application crash, Loss of data.
• Major: Major loss of function.
• Minor: Minor loss of function.
• Trivial: Some UI enhancements.
• Enhancement: Request for new feature or some enhancement in existing one.

Status:
When you log a bug in any bug tracking system, its status is ‘New’ by default.
Later the bug goes through various stages such as Fixed, Verified, Reopen, Won’t Fix, etc.
(The detailed bug life cycle is covered later in this document.)

Assign To:
If you know which developer is responsible for the particular module in which the bug
occurred, you can specify that developer’s email address. Otherwise keep it blank; the bug
will be assigned to the module owner, or the manager will assign it to a developer. You can
also add the manager’s email address to the CC list.

URL:
The page URL on which the bug occurred.

Summary:
A brief summary of the bug, ideally in 60 words or fewer. Make sure your summary
reflects both what the problem is and where it is.

Description:
A detailed description of the bug. Use the following fields within the description:

• Reproduce steps: Clearly mention the steps to reproduce the bug.


• Expected result: How the application should behave for the above-mentioned steps.
• Actual result: The actual result of running the above steps, i.e. the buggy
behavior.

These are the important parts of a bug report. You can also add a “Report type” field
describing the type of bug.

The report types are typically:


1) Coding error
2) Design error
3) New suggestion
4) Documentation issue
5) Hardware problem
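The template fields above can be sketched as a simple record type. This is a hypothetical illustration; the field names and defaults are assumptions, not the schema of any particular bug-tracking tool:

```python
# A hypothetical bug report record matching the fields described above.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    BLOCKER = "Blocker"
    CRITICAL = "Critical"
    MAJOR = "Major"
    MINOR = "Minor"
    TRIVIAL = "Trivial"
    ENHANCEMENT = "Enhancement"

@dataclass
class BugReport:
    bug_id: int                   # unique number identifying the record
    reporter: str                 # your name and email address
    product: str
    component: str
    summary: str                  # short; ~60 words or fewer
    steps_to_reproduce: list      # numbered reproduction steps
    expected_result: str
    actual_result: str
    severity: Severity
    priority: str = "P3"          # P1 (fix first) .. P5 (when time permits)
    status: str = "New"           # default status when first logged
    assigned_to: str = ""         # blank -> module owner/manager assigns

report = BugReport(
    bug_id=101,
    reporter="tester@example.com",
    product="MyApp",
    component="Users",
    summary="Application crashes on clicking SAVE while creating a new user",
    steps_to_reproduce=["Log on", "Navigate to USERS > New User",
                        "Fill all fields", "Click SAVE"],
    expected_result="Success message: 'New User has been created successfully'",
    actual_result="Application crashes with an error page",
    severity=Severity.CRITICAL,
)
print(report.status)  # a newly logged bug starts as "New"
```

Keeping the report as structured data like this also makes it trivial to search the bug inventory by summary, severity or component, as discussed above.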

Some Bonus tips to write a good bug report:

1) Report the problem immediately: If you find a bug while testing, do not wait
to write a detailed bug report later. Write the bug report immediately. This ensures
a good and reproducible bug report. If you decide to write the report later, chances
are high that you will miss important steps.

2) Reproduce the bug three times before writing the bug report: Your bug should be
reproducible. Make sure your steps are robust enough to reproduce the bug without any
ambiguity. If your bug is not reproducible every time, you can still file it, mentioning its
intermittent nature.

3) Test for the same bug occurrence in other similar modules:

Sometimes developers use the same code for different, similar modules, so chances are high
that a bug in one module also occurs in other similar modules. You can even try to
find a more severe version of the bug you found.

4) Write a good bug summary:


The bug summary helps developers quickly analyze the nature of the bug. A poor-quality
report unnecessarily increases development and testing time. Communicate well through
your bug report summary. Keep in mind that the summary is used as a reference when
searching the bug inventory.

5) Read bug report before hitting Submit button:


Read all the sentences, wording and steps used in the bug report. Check whether any sentence
creates ambiguity that could lead to misinterpretation. Avoid misleading words or sentences
in order to keep the bug report clear.

6) Do not use abusive language:


It’s nice that you did good work and found a bug, but do not use this credit to criticize
the developer or attack any individual.

Conclusion:
No doubt your bug report should be a high-quality document. Focus on writing good
bug reports and spend some time on this task, because the bug report is the main communication
point between tester, developer and manager. Managers should make their teams aware that
writing a good bug report is a primary responsibility of any tester. Your efforts towards
writing good bug reports will not only save company resources but also create a good
relationship between you and the developers.

For better productivity, write a better bug report.

Bug life cycle

What is Bug/Defect?

A simple Wikipedia definition of a bug is: “A computer bug is an error, flaw, mistake,
failure, or fault in a computer program that prevents it from working correctly or
produces an incorrect result. Bugs arise from mistakes and errors, made by people, in
either a program’s source code or its design.”

Other definitions can be:


An unwanted and unintended property of a program or piece of hardware, especially one
that causes it to malfunction.

or
A fault in a program, which causes the program to perform in an unintended or
unanticipated manner.

Lastly, the general definition of a bug is: “failure to conform to specifications”.

If you want to detect and resolve defects in the early development stages, defect tracking
and the software development phases should start simultaneously.

Effective bug reporting was discussed above. Let’s concentrate
here on the bug/defect life cycle.

Life cycle of Bug:

1) Log new defect


When a tester logs a new bug, the mandatory fields are:
build version, submitted on, product, module, severity, synopsis and description with steps to
reproduce.

To the above list you can add some optional fields if you are using a manual bug-submission
template. These optional fields are: customer name, browser, operating system, file
attachments or screenshots.

The following fields remain either specified or blank:


If you have the authority to set the Status, Priority and ‘Assigned to’ fields, then you can
specify them. Otherwise the test manager will set the status and bug priority and assign the
bug to the respective module owner.

Look at the following Bug life cycle:


[Figure: Bugzilla bug life cycle diagram]

The figure is quite complicated, but once you consider the significant steps in the bug life
cycle you will get a quick idea of a bug’s life.

Once a bug is successfully logged, it is reviewed by the development or test manager. The test
manager can set the bug status to Open, assign the bug to a developer, or defer the bug
until the next release.

When the bug gets assigned, the developer can start working on it. The developer can set the
bug status to Won’t fix, Couldn’t reproduce, Need more information or Fixed.

If the status set by the developer is either ‘Need more info’ or ‘Fixed’, then QA responds
with a specific action. If the bug is fixed, QA verifies it and can set the status to
Verified Closed or Reopen.

Bug status descriptions:
These are the various stages of the bug life cycle. The status captions may vary depending on
the bug tracking system you are using.

1) New: When QA files a new bug.

2) Deferred: If the bug is not related to the current build, cannot be fixed in this release,
or is not important enough to fix immediately, then the project manager can set the bug
status to Deferred.

3) Assigned: The project lead or manager sets the ‘Assigned to’ field and assigns the bug to a
developer.

4) Resolved/Fixed: When the developer makes the necessary code changes and verifies the
changes, he/she can set the bug status to ‘Fixed’, and the bug is passed to the testing
team.

5) Could not reproduce: If the developer is not able to reproduce the bug by following the
steps given in the bug report, he/she can mark the bug as ‘CNR’. QA then needs to check
whether the bug still reproduces and reassign it to the developer with detailed reproduction
steps.

6) Need more information: If the developer is not clear about the reproduction steps
provided by QA, he/she can mark the bug as ‘Need more
information’. In this case QA needs to add detailed reproduction steps and assign the bug
back to the developer for a fix.

7) Reopen: If QA is not satisfied with the fix and the bug is still reproducible even after the
fix, QA can mark it as ‘Reopen’ so that the developer can take appropriate action.

8) Closed: If the bug is verified by the QA team, the fix is fine, and the problem is solved,
QA can mark the bug as ‘Closed’.

9) Rejected/Invalid: Sometimes the developer or team lead can mark the bug as Rejected
or Invalid if the system is working according to specifications and the bug is just due to a
misinterpretation.

Did you know that most bugs in software are due to incomplete or inaccurate
functional requirements? The software code, no matter how well it’s written, can’t
do the right thing if there are ambiguities in the requirements.

It’s better to catch requirement ambiguities and fix them early in the development life
cycle. The cost of fixing a bug after development is complete, or after the product is
released, is too high. So it’s important to perform requirement analysis and catch incorrect
requirements before the design specification and implementation phases of the SDLC.

How to measure functional software requirement specification (SRS) documents?

Well, we need to define some standard tests to measure the requirements. Once each
requirement has passed these tests, you can evaluate and freeze the functional
requirements.
Let’s take an example. You are working on a web-based application. The requirement is as
follows:

“Web application should be able to serve the user queries as early as possible”

How will you freeze the requirement in this case?

What will be your requirement satisfaction criterion? To get the answer, ask the
stakeholders this question: how much response time is acceptable to you?

If they say they will accept a response within 2 seconds, then this is your
requirement measure. Freeze this requirement and follow the same procedure for the next
one.

We just learned how to measure requirements and freeze them for the design,
implementation and testing phases.

Now let’s take another example. I was working on a web-based project. The client
(stakeholders) specified the project requirements for the initial phase of
development. My manager circulated all the requirements to the team for review. When
we started discussing these requirements, we were shocked! Everyone had
his or her own conception of the requirements. We found a lot of ambiguities in
the ‘terms’ specified in the requirement documents, which we later sent to the client for
review/clarification.

The client had used many ambiguous terms with several different meanings,
making it difficult to analyze the exact intent. The next version of the requirement
document from the client was clear enough to freeze for the design phase.

From this example we learned: “Requirements should be clear and consistent.”

The next criterion for testing the requirements specification is “discover missing
requirements”.

Many times project designers don’t get a clear idea about specific modules, and they
simply assume some requirements during the design phase. No requirement should be
based on assumptions. Requirements should be complete, covering every
aspect of the system under development.

The specification should state both types of requirements, i.e. what the system should do and
what it should not.
Generally I use my own method to uncover unspecified requirements. When I read
the software requirements specification (SRS) document, I note down my own
understanding of the requirements that are specified, plus the other requirements the SRS
document is supposed to cover. This helps me ask questions about
unspecified requirements, making them clearer.

To check the completeness of the requirements, divide them into three sections:
‘must implement’ requirements, requirements that are not specified but are ‘assumed’,
and a third, ‘imagination’ (speculative) type of requirements. Check that all three types
are addressed before the software design phase.

Check if the requirements are related to the project goal.

Sometimes stakeholders have their own expertise, which they expect to appear in the system
under development, without considering whether that requirement is relevant to the project
at hand. Make sure to identify such requirements. Try to avoid irrelevant requirements in the
first phase of the project development cycle. If that is not possible, ask the stakeholders:
why do you want to implement this specific requirement? This will describe the particular
requirement in detail, making it easier to design the system with future
scope in mind.

But how do you decide whether a requirement is relevant or not?

Simple answer: set the project goal and ask this question: will not implementing this
requirement cause any problem in achieving our specified goal? If not, then it is an
irrelevant requirement. Ask the stakeholders whether they really want to implement these
types of requirements.

In short, the requirements specification (SRS) document should address the following:

Project functionality (What should be done and what should not)

Software, Hardware interfaces and user interface

System Correctness, Security and performance criteria

Implementation issues (risks) if any

Conclusion:

I have covered all aspects of requirement measurement. To be specific about
requirements, I will summarize requirement testing in one sentence:

“Requirements should be clear and specific with no uncertainty; requirements should be
measurable in terms of specific values; requirements should be testable, having some
evaluation criteria for each requirement; and requirements should be complete, without
any contradictions.”

Testing should start at the requirement phase to avoid requirement-related bugs later.
Communicate as much as possible with your stakeholders to clarify all the requirements
before starting project design and implementation.

What you need to know about BVT (Build Verification Testing)

What is BVT?

A build verification test is a set of tests run on every new build to verify that the build is
testable before it is released to the test team for further testing. These test cases are core
functionality test cases that ensure the application is stable and can be tested thoroughly.
Typically the BVT process is automated. If the BVT fails, the build is assigned back to a
developer for a fix.

BVT is also called smoke testing or build acceptance testing (BAT).

A new build is checked mainly for two things:

•Build validation

•Build acceptance

Some BVT basics:

•It is a subset of tests that verify the main functionalities.

•BVTs are typically run on daily builds; if the BVT fails, the build is rejected and a
new build is released after the fixes are done.

•The advantage of BVT is that it saves the effort of the test team in setting up and testing a
build when major functionality is broken.

•Design BVTs carefully enough to cover basic functionality.

•Typically a BVT should not run for more than 30 minutes.

•BVT is a type of regression testing, done on each and every new build.

BVT primarily checks project integrity and whether all the modules are
integrated properly. Module integration testing is very important when different
teams develop project modules. I have heard of many cases of application failure due to
improper module integration. In the worst cases, an entire project gets scrapped due to
failures in module integration.

What is the main task in a build release? Obviously file ‘check-in’, i.e. including all the new
and modified project files associated with the respective build. BVT was primarily introduced
to check initial build health, i.e. to check whether all the new and modified files are
included in the release, all file formats are correct, and every file’s version, language and
flags are right.

These basic checks are worth doing before the build is released to the test team. You will
save time and money by discovering build flaws at the very beginning using BVT.

Which test cases should be included in BVT?

This is a very tricky decision to make before automating the BVT task. Keep in mind that the
success of BVT depends on which test cases you include.

Here are some simple tips to include test cases in your BVT automation suite:

•Include only critical test cases in BVT.

•All test cases included in BVT should be stable.

•All the test cases should have a known expected result.

•Make sure all the included critical-functionality test cases are sufficient for application
test coverage.

Also, do not include in BVT modules that are not yet stable. For some under-development
features you can’t predict the expected behavior, as these modules are unstable and may
have known failures even before testing. There is no point including such modules or test
cases in BVT.
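The selection rules above (critical, stable, known expected result) can be expressed as a simple filter. A sketch, where the `TestCase` fields are illustrative assumptions:

```python
# Sketch of BVT case selection: keep only critical, stable cases
# with a known expected result, per the criteria above.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    critical: bool
    stable: bool
    has_expected_result: bool

def select_for_bvt(cases):
    """Return the names of cases that qualify for the BVT suite."""
    return [c.name for c in cases
            if c.critical and c.stable and c.has_expected_result]

cases = [
    TestCase("login works", critical=True, stable=True,
             has_expected_result=True),
    TestCase("new beta widget", critical=True, stable=False,
             has_expected_result=False),   # under development: excluded
    TestCase("report export", critical=False, stable=True,
             has_expected_result=True),    # not critical: excluded
]
print(select_for_bvt(cases))  # ['login works']
```

Tagging cases with explicit flags like this makes the BVT inclusion decision reviewable by the whole team instead of living in one person's head.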

You can simplify this critical-functionality test case selection by communicating with
everyone involved in the project development and testing life cycle. Such a process should
negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality
standards; these standards can be met only by analyzing the major project features and
scenarios.

Example: Test cases to be included in the BVT for a text editor application (some sample
tests only):

1) Test case for creating a text file.

2) Test case for writing into the text editor.

3) Test case for the copy, cut and paste functionality of the text editor.

4) Test case for opening, saving and deleting a text file.

These are some sample test cases that can be marked as ‘critical’; for every minor
or major change in the application, these basic critical test cases should be executed. This
task can be easily accomplished by BVT.

BVT automation suites need to be maintained and modified from time to time, e.g. adding
test cases to the BVT as new stable project modules become available.

What happens when the BVT suite runs?

Say the build verification automation test suite is executed after a new build:

1) The result of the BVT execution is sent to all the email IDs associated with that project.

2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result.

3) If the BVT fails, the BVT owner diagnoses the cause of the failure.

4) If the failure cause is a defect in the build, all the relevant information along with the
failure logs is sent to the respective developers.

5) The developer, after initial diagnosis, replies to the team about the failure cause: is this
really a bug, and if so, what is the bug-fixing scenario?

6) Once the bug is fixed, the BVT test suite is executed again, and if the build passes the
BVT, it is passed to the test team for further detailed functionality, performance and other
tests.

This process is repeated for every new build.
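The reporting loop above can be sketched as a small routing function. `notify` is a stub standing in for real mail integration, and the result format is an assumption:

```python
# Sketch of the BVT result-handling loop described above.
def notify(recipients, message):
    # Stand-in for sending mail to everyone associated with the project.
    return f"to {', '.join(recipients)}: {message}"

def handle_bvt_result(results, team, developers):
    """results: list of (test_name, passed, log) tuples.
    Returns 'release' if the build can go to the test team, else 'reject'."""
    failures = [(name, log) for name, passed, log in results if not passed]
    if not failures:
        notify(team, "BVT passed; build released to test team")
        return "release"
    # Route failure logs to the responsible developers for diagnosis.
    for name, log in failures:
        notify(developers, f"BVT failure in {name}: {log}")
    return "reject"

results = [("login", True, ""),
           ("save user", False, "ORA1090 Exception")]
print(handle_bvt_result(results,
                        ["team@example.com"],
                        ["dev@example.com"]))  # "reject"
```

The key design point, matching step 2 above, is that the pass/fail decision and the notification both happen automatically; the BVT owner only steps in to diagnose failures.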

Why does a BVT or build fail?

BVT breaks sometimes. This doesn’t mean that there is always a bug in the build. There
are other reasons for a build to fail, such as test case coding errors, automation suite
errors, infrastructure errors, hardware failures, etc.

You need to troubleshoot the cause of the BVT break and take proper action after
diagnosis.

Tips for BVT success:


1) Spend considerable time writing BVT test case scripts.

2) Log as much detailed information as possible to diagnose the BVT pass or fail result. This
will help the developer team debug and quickly find the failure cause.

3) Select stable test cases to include in BVT. For new features, if a new critical test case
passes consistently on different configurations, promote it into your BVT suite. This will
reduce the probability of frequent build failures due to new, unstable modules and test
cases.

4) Automate the BVT process as much as possible. From the build release process to the
BVT result, automate everything.

5) Have some penalty for breaking the build. Some chocolates or a team coffee party from
the developer who broke the build will do.

Conclusion:

BVT is nothing but a set of regression test cases that are executed each time for a new
build. It is also called a smoke test. A build is not assigned to the test team unless and until
the BVT passes. BVT can be run by a developer or tester; the BVT result is communicated
throughout the team, and immediate action is taken to fix the bug if the BVT fails. The BVT
process is typically automated by writing scripts for the test cases. Only critical test cases
are included in BVT, and these test cases should ensure application test coverage. BVT is
very effective for daily as well as long-term builds. It saves significant time, cost and
resources, and spares the test team the frustration of an incomplete build.

What is the actual testing process in a practical, company environment?

Today I got an interesting question from a reader: how is testing carried out in a company,
i.e. in a practical environment? Those who are just out of college and starting to search for
jobs have this curiosity: what is the actual working environment in companies like?

Here I focus on the actual software testing process in companies. By now I have good
experience of a software testing career and day-to-day testing activities, so I will try to
share it practically rather than theoretically.

Whenever we get a new project, there is an initial project familiarization meeting. In this
meeting we basically discuss: who is the client? What is the project duration, and when is
delivery? Who is involved in the project, i.e. manager, tech leads, QA leads, developers,
testers, etc.?

From the SRS (software requirement specification), the project plan is developed. The
responsibility of the testers is to create the software test plan from this SRS and project
plan. Developers start coding from the design. The project work is divided into different
modules, and these modules are distributed among the developers. In the meantime, the
testers’ responsibility is to create test scenarios and write test cases for the assigned
modules. We try to cover almost all the functional test cases from the SRS. The data can
be maintained manually in Excel test case templates or in bug tracking tools.

When developers finish individual modules, those modules are assigned to testers.
Smoke testing is performed on these modules, and if they fail this test, the modules are
reassigned to the respective developers for fixes. For modules that pass, manual testing is
carried out from the written test cases. If any bug is found, it gets assigned to the module’s
developer and logged in the bug tracking tool. After a bug fix, the tester does bug
verification and regression testing of all related modules. If the bug passes verification, it is
marked as verified and closed. Otherwise the above-mentioned bug cycle gets repeated.
(The bug life cycle is covered earlier in this document.)
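The module workflow above (smoke test, then manual test cases, then the fix/verify loop) can be sketched as a small decision function. The inputs are stubs standing in for real test outcomes:

```python
# Sketch of the per-module workflow described above.
def process_module(name, smoke_passes, failing_cases):
    """Return the module's fate under the workflow in the text.
    smoke_passes: did the module pass the smoke test?
    failing_cases: manual test cases that found bugs."""
    if not smoke_passes:
        # Failed smoke test: module goes straight back to its developer.
        return "reassigned to developer"
    if failing_cases:
        # Each failure is logged in the bug tracker, assigned to the
        # module's developer, then verified and regression-tested on fix.
        return f"{len(failing_cases)} bug(s) logged"
    return "module passed"

print(process_module("users", True, []))             # "module passed"
print(process_module("reports", False, []))          # "reassigned to developer"
print(process_module("billing", True, ["save bug"])) # "1 bug(s) logged"
```

This is only the happy-path skeleton; in practice the "bug(s) logged" branch loops back through verification and regression until every bug is closed.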

Different tests are performed on the individual modules, and integration testing is
performed on the integrated modules. These tests include compatibility testing, i.e. testing
the application on different hardware, OS versions, software platforms, browsers, etc. Load
and stress testing are also carried out according to the SRS. Finally, system testing is
performed by creating a virtual client environment. Once all the test cases pass, a test
report is prepared and the decision is taken to release the product!

So this was a brief outline of the project life cycle process.

Here is a detailed view of what testing is carried out in each step of the software quality
and testing life cycle specified by IEEE and ISO standards:

Review of the software requirement specifications

Objectives are set for the major releases

Target Date planned for the Releases

Detailed Project Plan is built. This includes the decision on Design Specifications

Develop Test Plan based on Design Specifications

Test Plan: This includes objectives, methodology adopted while testing, features to be
tested and not to be tested, risk criteria, testing schedule, multi-platform support and the
resource allocation for testing.

Test Specifications: This document includes the technical details (software requirements)
required prior to testing.

Writing of Test Cases

Smoke(BVT) test cases

Sanity Test cases

Regression Test Cases

Negative Test Cases

Extended Test Cases

Development – Modules developed one by one

Installers Binding: Installers are built around the individual products.

Build procedure: A build includes installers of the available products for multiple platforms.

Testing

Smoke Test (BVT): a basic application test to decide whether further testing can proceed

Testing of new features

Cross-platform testing

Stress testing and memory leakage testing.

Bug Reporting

Bug report is created


Development – Code freezing

No more new features are added at this point.

Testing

Builds and regression testing.

Decision to release the product

Post-release Scenario for further objectives.

Sample bug report


The below sample bug/defect report will give you an exact idea of how to report a bug in a
bug tracking tool.

Here is the example scenario that caused a bug:

Let’s assume that in your application under test you want to create a new user with user
information. For that you need to log on to the application and navigate to the USERS menu
> New User, then enter all the details in the ‘User form’, like First Name, Last Name, Age,
Address, Phone, etc. Once you enter all this information, you need to click the ‘SAVE’
button in order to save the user. Now you can see a success message saying, “New User
has been created successfully”.

But when you logged into your application, navigated to the USERS menu > New User,
entered all the required information to create the new user, and clicked the SAVE button:
BANG! The application crashed and you got an error page on screen. (Capture this error
message window and save it as an image file.)

Now this is the bug scenario, and you would like to report it as a bug in your bug tracking
tool.

How will you report this bug effectively?

Here is the sample bug report for above mentioned example:

(Note that some ‘bug report’ fields might differ depending on your bug tracking system)
SAMPLE BUG REPORT:

Bug Name: Application crash on clicking the SAVE button while creating a new user.

Bug ID: (It will be automatically created by the BUG Tracking tool once you save this bug)

Area Path: USERS menu > New Users

Build Number: Version Number 5.0.1

Severity: HIGH (High/Medium/Low) or 1

Priority: HIGH (High/Medium/Low) or 1

Assigned to: Developer-X

Reported By: Your Name

Reported On: Date

Reason: Defect

Status: New/Open/Active (Depends on the Tool you are using)

Environment: Windows 2003/SQL Server 2005

Description:

Application crash on clicking the SAVE button while creating a new

user, hence unable to create a new user in the application.

Steps To Reproduce:

1) Log on to the application.

2) Navigate to the Users menu > New User.

3) Fill in all the user information fields.

4) Click the ‘Save’ button.

5) Observe the error page “ORA1090 Exception: Insert values Error…”.

6) See the attached logs for more information (attach any logs related to the bug).

7) Also see the attached screenshot of the error page.

Expected result: On clicking the SAVE button, the success message
“New User has been created successfully” should be displayed.
(Attach an ‘application crash’ screenshot, if any.)

Save the defect/bug in the bug tracking tool. You will get a bug ID, which you can use for
further bug reference.

A default ‘New bug’ mail will go to the respective developer and the default module owner
(team leader or manager) for further action.

Testing Checklist

Are you going to start on a new project for testing? Don’t forget to check this testing
checklist in each and every step of your project life cycle. The list is mostly equivalent to a
test plan; it covers all quality assurance and testing standards.

Testing Checklist:

1 Create System and Acceptance Tests [ ]

2 Start Acceptance test Creation [ ]

3 Identify test team [ ]

4 Create Workplan [ ]

5 Create test Approach [ ]

6 Link Acceptance Criteria and Requirements to form the basis of acceptance test [ ]

7 Use subset of system test cases to form requirements portion of acceptance test [ ]

8 Create scripts for use by the customer to demonstrate that the system meets requirements [ ]

9 Create test schedule. Include people and all other resources. [ ]

10 Conduct Acceptance Test [ ]

11 Start System Test Creation [ ]

12 Identify test team members [ ]

13 Create Workplan [ ]

14 Determine resource requirements [ ]

15 Identify productivity tools for testing [ ]

16 Determine data requirements [ ]


17 Reach agreement with data center [ ]

18 Create test Approach [ ]

19 Identify any facilities that are needed [ ]

20 Obtain and review existing test material [ ]

21 Create inventory of test items [ ]

22 Identify Design states, conditions, processes, and procedures [ ]

23 Determine the need for code-based (white box) testing. Identify conditions. [ ]

24 Identify all functional requirements [ ]

25 End inventory creation [ ]

26 Start test case creation [ ]

27 Create test cases based on inventory of test items [ ]

28 Identify logical groups of business function for new system [ ]

29 Divide test cases into functional groups traced to test item inventory [ ]

30 Design data sets to correspond to test cases [ ]

31 End test case creation [ ]

32 Review business functions, test cases, and data sets with users [ ]

33 Get signoff on test design from Project leader and QA [ ]

34 End Test Design [ ]

35 Begin test Preparation [ ]

36 Obtain test support resources [ ]

37 Outline expected results for each test case [ ]

38 Obtain test data. Validate and trace to test cases [ ]

39 Prepare detailed test scripts for each test case [ ]

40 Prepare & document environment setup procedures. Include backup and recovery plans [ ]

41 End Test Preparation phase [ ]

42 Conduct System Test [ ]

43 Execute test scripts [ ]

44 Compare actual result to expected [ ]

45 Document discrepancies and create problem report [ ]


46 Prepare maintenance phase input [ ]

47 Re-execute test group after problem repairs [ ]

48 Create final test report, include known bugs list [ ]

49 Obtain formal signoff [ ]
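
A checklist like the one above is easy to track programmatically. The sketch below is illustrative: the item texts are abbreviated from the list, and the dictionary-based structure is an assumption, not a prescribed format.

```python
# Abbreviated subset of the checklist above; keys are the item numbers.
checklist = {
    1: "Create System and Acceptance Tests",
    2: "Start Acceptance test creation",
    3: "Identify test team",
    4: "Create workplan",
    5: "Create test approach",
}

done = set()  # item numbers that have been checked off

def mark_done(item_no):
    """Check off one item; reject unknown item numbers."""
    if item_no not in checklist:
        raise KeyError(f"No such checklist item: {item_no}")
    done.add(item_no)

def progress():
    """Return completion as 'done/total (percent)'."""
    pct = 100 * len(done) // len(checklist)
    return f"{len(done)}/{len(checklist)} ({pct}%)"

mark_done(1)
mark_done(3)
print(progress())  # 2/5 (40%)
```

The same idea scales to the full 49-item list, and the progress figure gives a quick status line for project reporting.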

Test plan sample: Software Testing and Quality Assurance Templates

A test plan is in high demand. Yes, it should be! The test plan reflects your entire project testing schedule and approach. This article is in response to those who have asked for a sample test plan.

In my previous article I outlined the test plan index. In this article I will elaborate on what each index point is meant to cover. This test plan will include the purpose of the test plan, i.e., to prescribe the scope, approach, resources, and schedule of the testing activities; to identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with this plan.

Find out what you actually need to include in each index point.

I have included link to download PDF format of this test plan template at the end of this
post.

Test Plan Template:

(Name of the Product)

Prepared by:

(Names of Preparers)

(Date)

TABLE OF CONTENTS
1.0 INTRODUCTION

2.0 OBJECTIVES AND TASKS

2.1 Objectives

2.2 Tasks

3.0 SCOPE

4.0 Testing Strategy

4.1 Alpha Testing (Unit Testing)

4.2 System and Integration Testing

4.3 Performance and Stress Testing

4.4 User Acceptance Testing

4.5 Batch Testing

4.6 Automated Regression Testing

4.7 Beta Testing

5.0 Hardware Requirements

6.0 Environment Requirements

6.1 Main Frame

6.2 Workstation

7.0 Test Schedule

8.0 Control Procedures

9.0 Features to Be Tested

10.0 Features Not to Be Tested


11.0 Resources/Roles & Responsibilities

12.0 Schedules

13.0 Significantly Impacted Departments (SIDs)

14.0 Dependencies

15.0 Risks/Assumptions

16.0 Tools

17.0 Approvals

1.0 INTRODUCTION

A brief summary of the product being tested. Outline all the functions at a high level.

2.0 OBJECTIVES AND TASKS

2.1 Objectives

Describe the objectives supported by the Master Test Plan, e.g., defining tasks and responsibilities, serving as a vehicle for communication, acting as a document to be used as a service level agreement, etc.

2.2 Tasks

List all tasks identified by this Test Plan, i.e., testing, post-testing, problem reporting, etc.

3.0 SCOPE

General
This section describes what is being tested, such as all the functions of a specific
product, its existing interfaces, integration of all functions.

Tactics

List here how you will accomplish the items that you have listed in the “Scope” section.
For example, if you have mentioned that you will be testing the existing interfaces, what
would be the procedures you would follow to notify the key people to represent their
respective areas, as well as allotting time in their schedule for assisting you in
accomplishing your activity?

4.0 TESTING STRATEGY

Describe the overall approach to testing. For each major group of features or feature
combinations, specify the approach which will ensure that these feature groups are
adequately tested. Specify the major activities, techniques, and tools which are used to
test the designated groups of features.

The approach should be described in sufficient detail to permit identification of the major
testing tasks and estimation of the time required to do each one.

4.1 Unit Testing

Definition:

Specify the minimum degree of comprehensiveness desired. Identify the techniques


which will be used to judge the comprehensiveness of the testing effort (for example,
determining which statements have been executed at least once). Specify any additional
completion criteria (for example, error frequency). The techniques to be used to trace
requirements should be specified.

Participants:

List the names of individuals/departments who would be responsible for Unit Testing.

Methodology:

Describe how unit testing will be conducted. Who will write the test scripts for the unit
testing, what would be the sequence of events of Unit Testing and how will the testing
activity take place?
4.2 System and Integration Testing

Definition:

Describe your understanding of System and Integration Testing for your project.

Participants:

Who will be conducting System and Integration Testing on your project? List the
individuals that will be responsible for this activity.

Methodology:

Describe how System & Integration testing will be conducted. Who will write the test
scripts for the unit testing, what would be sequence of events of System & Integration
Testing, and how will the testing activity take place?

4.3 Performance and Stress Testing

Definition:

Describe your understanding of Performance and Stress Testing for your project.

Participants:

Who will be conducting Stress Testing on your project? List the individuals that will be
responsible for this activity.

Methodology:

Describe how Performance & Stress testing will be conducted. Who will write the test
scripts for the testing, what would be sequence of events of Performance & Stress
Testing, and how will the testing activity take place?

4.4 User Acceptance Testing

Definition:
The purpose of acceptance testing is to confirm that the system is ready for operational use. During acceptance testing, end-users (customers) of the system compare the system to its initial requirements.

Participants:

Who will be responsible for User Acceptance Testing? List the individuals’ names and
responsibility.

Methodology:

Describe how the User Acceptance testing will be conducted. Who will write the test
scripts for the testing, what would be sequence of events of User Acceptance Testing,
and how will the testing activity take place?

4.5 Batch Testing

4.6 Automated Regression Testing

Definition:

Regression testing is the selective retesting of a system or component to verify that


modifications have not caused unintended effects and that the system or component still
works as specified in the requirements.
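The word "selective" in that definition is the key: only the cases that exercise a modified component need to be rerun. A minimal sketch of such a selection, where the test-case-to-module mapping is purely illustrative:

```python
# Map each test case to the modules it exercises (illustrative data).
case_to_modules = {
    "TC-01 create user": {"users", "db"},
    "TC-02 delete user": {"users"},
    "TC-03 generate report": {"reports"},
    "TC-04 login": {"auth"},
}

def select_regression_suite(modified_modules):
    """Pick every test case that touches at least one modified module."""
    return sorted(
        case for case, mods in case_to_modules.items()
        if mods & set(modified_modules)
    )

print(select_regression_suite({"users"}))
# ['TC-01 create user', 'TC-02 delete user']
```

In practice the mapping comes from a traceability matrix or coverage data, but the selection logic is the same.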

Participants:

Methodology:

4.7 Beta Testing

Participants:

Methodology:

5.0 HARDWARE REQUIREMENTS

Computers

Modems
6.0 ENVIRONMENT REQUIREMENTS

6.1 Main Frame

Specify both the necessary and desired properties of the test environment. The
specification should contain the physical characteristics of the facilities, including the
hardware, the communications and system software, the mode of usage (for example,
stand-alone), and any other software or supplies needed to support the test. Also specify
the level of security which must be provided for the test facility, system software, and
proprietary components such as software, data, and hardware.

Identify special test tools needed. Identify any other testing needs (for example,
publications or office space). Identify the source of all needs which are not currently
available to your group.

6.2 Workstation

7.0 TEST SCHEDULE

Include test milestones identified in the Software Project Schedule as well as all item
transmittal events.

Define any additional test milestones needed. Estimate the time required to do each
testing task. Specify the schedule for each testing task and test milestone. For each
testing resource (that is, facilities, tools, and staff), specify its periods of use.

8.0 CONTROL PROCEDURES

Problem Reporting

Document the procedures to follow when an incident is encountered during the testing
process. If a standard form is going to be used, attach a blank copy as an “Appendix” to
the Test Plan. In the event you are using an automated incident logging system, write
those procedures in this section.

Change Requests
Document the process of modifications to the software. Identify who will sign off on the
changes and what would be the criteria for including the changes to the current product.
If the changes will affect existing programs, these modules need to be identified.

9.0 FEATURES TO BE TESTED

Identify all software features and combinations of software features that will be tested.

10.0 FEATURES NOT TO BE TESTED

Identify all features and significant combinations of features which will not be tested and
the reasons.

11.0 RESOURCES/ROLES & RESPONSIBILITIES

Specify the staff members who are involved in the test project and what their roles are going to be (for example, Mary Brown (User) will compile Test Cases for Acceptance Testing).
Identify groups responsible for managing, designing, preparing, executing, and resolving
the test activities as well as related issues. Also identify groups responsible for providing
the test environment. These groups may include developers, testers, operations staff,
testing services, etc.

12.0 SCHEDULES

Major Deliverables

Identify the deliverable documents. You can list the following documents:

- Test Plan

- Test Cases

- Test Incident Reports

- Test Summary Reports

13.0 SIGNIFICANTLY IMPACTED DEPARTMENTS (SIDs)


Department/Business Area Bus. Manager Tester(s)

14.0 DEPENDENCIES

Identify significant constraints on testing, such as test-item availability, testing-resource


availability, and deadlines.

15.0 RISKS/ASSUMPTIONS

Identify the high-risk assumptions of the test plan. Specify contingency plans for each
(for example, delay in delivery of test items might require increased night shift
scheduling to meet the delivery date).

16.0 TOOLS

List the automation tools you are going to use. Also list the bug tracking tool here.

17.0 APPROVALS

Specify the names and titles of all persons who must approve this plan. Provide space for
the signatures and dates.

Name (In Capital Letters) Signature Date

1.

2.

3.

4.

http://www.softwaretestinghelp.com/
End.
Ask me your Software Testing, Job, Interview queries at
www.softwaretestinghelp.com
Passion For Testing, Passion For Quality!