
MANUAL TESTING

Why did you choose Testing?

The scope of getting a job is very high.
There is no need to depend on any other technology.
Testing will be there forever.
I want to be consistent throughout my career; that is why I chose testing.

Software applications are of two types:

1) Project (developed by a service-based company)
2) Product

Project:

A software application which is developed for a particular customer is called a project.
A company that develops projects is called a services-based company.

Product:

A software application which is developed to release a quality product in the market is called a product.

Quality is nothing but meeting the customer's expectations.

Company-side quality attributes:
MCR (Meet Customer Requirements)
MCE (Meet Customer Expectations)
CP (Cost to Purchase)
TR (Time to Release)

Customer-side quality attributes:

Safety
Security
Usability
Understandability
Modularity
Compatibility
Reliability
Testability
Adaptability
Performance

Software Development Life Cycle (SDLC): It is a framework that describes the activities performed during the development of a software application.

SDLC contains six phases. They are:


1. Initial Phase or Requirements Phase
2. Analysis Phase
3. Design Phase
4. Coding Phase
5. Testing Phase
6. Delivery & Maintenance Phase
I. Initial Phase or Requirements Phase:
(a) Tasks: Interaction with the customer and gathering the requirements.
(b) Roles: Business Analyst (BA), Engagement Manager (EM)
Process: First of all, the business analyst takes an appointment with the customer, collects the templates from the company, meets the customer on the appointed day, gathers the requirements with the help of the template, and comes back to the company with the requirements documents.
Once the requirements document reaches the company, the engagement manager checks whether the customer has given any extra requirements or confused requirements. In case of extra requirements he negotiates the excess cost of the project. In case of confused requirements he is responsible for prototype demonstration and gathering the clear requirements.
Proof: The proof document of this phase is the Requirements Document. It is called by different names in different companies.
1. FRS (Functional Requirements Specification)
2. CRS (Customer Requirement Specification)
3. URS (User Requirement Specification)
4. BDD (Business Design Document)
5. BD (Business Document)
6. BRS (Business Requirement Specification)
Some companies may maintain the overall business flow information in one document and the detailed functional requirement information in another document.
Templates: A template is a predefined format, containing predefined fields, which is used for preparing a document in an easy, comfortable and consistent manner.

Prototype: A prototype is a roughly and rapidly developed model which is used for demonstration to the client, in order to gather the clear requirements and to win the confidence of the customer.
II. Analysis Phase:

(a) Tasks:
1. Feasibility Study
2. Tentative Planning
3. Technology Selection
4. Requirement Analysis

(b) Roles: System Analyst, Project Manager, and Team Manager.


Process:
1. Feasibility Study: It is a detailed study of the requirements in order to check whether the requirements are feasible or not.
2. Tentative Planning: In this step the resource planning and the time planning (scheduling) are done temporarily.
3. Technology Selection: The list of all the technologies required to accomplish this project successfully is analyzed and listed out in this step.
4. Requirement Analysis: The list of all the requirements required to accomplish this project successfully is analyzed and listed out in this step.

Proof: The proof of this phase is the System Requirement Specification (SRS).


III. Design Phase:

Tasks:
1. High level designing
2. Low level designing
Roles: High-level designing is done by the Chief Architect; low-level designing is done by the Technical Lead.
Process: The chief architect draws diagrams using the Unified Modeling Language (UML) in order to divide the whole project into modules.
The technical lead also draws diagrams in order to divide the modules into sub-modules.

The technical lead also develops the pseudo code in order to make the developers comfortable while developing the actual code.
Proof: The proof document of this phase is Technical Design Document.
IV. Coding Phase:

(a) Task: Developing or Programming


(b) Roles: Developers or Programmers
Process: Developers develop the actual code by using the technical design document and by following the coding standards, such as proper indentation, color coding, proper commenting, etc.
Proof: The proof document of this phase is the Source Code Document.
E.g.: Every programmer may write his program in a different style, but software companies ask the developers to develop the program according to the company standards, using proper color coding and commenting, so that it can be understood easily.

V. Testing Phase:
(a) Task: Testing
(b) Roles: Test engineers
Process:
1. First of all, the test engineers collect the requirements document and try to understand all the requirements.
2. If at all they get any doubts while understanding the requirements, they list them out in a review report.
3. They send the review report to the author of the requirements document for clarifications.
4. Once the clarifications are given and all the requirements are clearly understood, they take the test case template and write the test cases.
5. Once the first build is released, they execute the test cases.
6. If at all any defects are found, they list them out in a defect profile template.
7. They send the defect profile document to the development department and wait for the next build to be released.
8. Once the next build is released, they re-execute the test cases.
9. If at all any defects are found, they update the defect profile document, send it to the development department, and wait for the next build to be released.
10. This process continues till the product attains quality.
Proof: The proof of the testing phase is a Quality Product.

Test Case: A test case is an idea of a test engineer, based on the customer's requirements, to test a particular feature or function.
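As a minimal sketch, a written test case maps to a set of steps with expected results that can be checked against the application. The feature under test below (validate_login) and its credentials are hypothetical, not from any real application.

```python
def validate_login(username, password):
    """Toy implementation standing in for the feature under test."""
    return username == "admin" and password == "secret123"

# Test case: "Verify login with valid and invalid credentials"
test_case = [
    # (step description,           username, password,    expected result)
    ("valid user and password",    "admin",  "secret123", True),
    ("valid user, wrong password", "admin",  "wrong",     False),
    ("unknown user",               "guest",  "secret123", False),
]

results = []
for step, user, pwd, expected in test_case:
    actual = validate_login(user, pwd)
    status = "PASS" if actual == expected else "FAIL"
    results.append((step, status))
    print(f"{step}: {status}")
```

Each row is one step of the test case; a defect would show up as a FAIL status and would then be logged in the defect profile.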
VI. Delivery & Maintenance Phase:


Delivery:
(a) Task: Installing the application into the client's environment.
(b) Roles: Deployment engineer or senior test engineers.

Process: The senior test engineer or deployment engineer goes to the client's place and installs the application in their environment with the help of the guidelines provided in the deployment document.
Maintenance:
After delivering the software, if at all any problem occurs while using it, then that problem becomes a task; based on that problem the corresponding roles are appointed, and those roles define the process and solve that problem.
Some clients may request continuous maintenance; in such situations a group of people from the software company continuously works at the client's place and takes care of the software.

SOFTWARE ARCHITECTURE
There are 4 types of architecture:
1) One-tier architecture (standalone environment)
2) Two-tier architecture (client-server)
3) Three-tier architecture (web based)
4) N-tier architecture (distributed)

Every application has three layers.

Presentation layer: This layer is the mediator between the user and the application.
Business layer: Whatever code is mandatory to complete our business requirement belongs to this layer, called the business layer.
Database layer: The layer where the data which is mandatory for the application (for login, for storage) is stored is called the database layer.
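The three layers above can be sketched in a few lines of Python. This is only an illustration of the separation of concerns; the names (UserStore, UserService, cli_lookup) and the in-memory dict standing in for a real database are hypothetical.

```python
class UserStore:                      # database layer: where the data lives
    def __init__(self):
        self._rows = {"u1": "Alice"}  # stand-in for a real database table

    def get(self, user_id):
        return self._rows.get(user_id)


class UserService:                    # business layer: rules and logic
    def __init__(self, store):
        self.store = store

    def display_name(self, user_id):
        name = self.store.get(user_id)
        return name.upper() if name else "UNKNOWN USER"


def cli_lookup(service, user_id):     # presentation layer: talks to the user
    print(service.display_name(user_id))


service = UserService(UserStore())
cli_lookup(service, "u1")   # prints ALICE
cli_lookup(service, "u2")   # prints UNKNOWN USER
```

In a one-tier application all three classes run on the same machine; in two-tier and three-tier architectures the store and the service move to separate server machines.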
ONE-TIER ARCHITECTURE:
In this architecture the presentation layer, business layer and database layer are all available in one tier.
This architecture is best suitable for a single user.
Ex: applications installed on a laptop or desktop.

TWO-TIER ARCHITECTURE:

In this architecture, the presentation layer and business layer are at the client side and the database layer is available at the server side.
Whenever we want to retrieve the information, we always need to interact with the database.
This architecture is best suitable for college, school, and small-organization applications.

THREE-TIER ARCHITECTURE:

In this architecture, the presentation layer is available at the client side, the business layer is available at the application server side, and the database layer is available at the database server side.
Whenever a limited number of users want to access the application from anywhere in the world, this architecture, called three-tier architecture, is used.
Ex: Banking applications.

N-TIER ARCHITECTURE:
In this architecture, the presentation layer is available at the client side; the application layer, business layer and database layer are available at the server side. The main difference between three-tier architecture and the distributed environment is that here multiple database layers are maintained for fast accessing purposes.

SOFTWARE DEVELOPMENT PROCESS MODEL (SDPM)

OR
SOFTWARE DEVELOPMENT LIFE CYCLE MODEL (SDLC):
Difference between SDPM and SDLC:
SDPM describes how exactly the SDLC is implemented while developing the application.

Basically SDPM has been divided into two types:

1) Sequential models
2) Incremental models

SEQUENTIAL MODELS:

These models are best suitable for small application development.
All requirements must be known before developing the application.
There are two types:
1. Waterfall model
2. V model

WATERFALL MODEL:

REQUIREMENT:
Defines the needed information, function, behavior, performance and installation.
DESIGN:
Data structures, software architecture, interface representation, algorithm details.
IMPLEMENTATION:
Source code, database, user documentation, testing.
ADVANTAGES OF THE WATERFALL MODEL:

Easy to understand, easy to use.
Provides structure to inexperienced staff.
Milestones are well understood.
Sets requirement stability.
Good for management control (plan, staff, track).

Works when quality is more important than cost or schedule.

DISADVANTAGES:

All requirements must be known upfront.
Deliverables created for each phase are considered frozen.
(In simple words, every output must be finalized in this model.)
Can give a false impression of progress.
Does not reflect the problem-solving nature of software development (iteration of phases).
Integration is one big bang at the end.
Little opportunity for the customer to preview the system (until it may be too late).

WHEN TO USE THE WATERFALL MODEL:

Requirements are very well known.
Product definition is stable.
Technology is understood.
When creating a new version of an existing product.
When porting an existing product to a new platform.

V MODEL:
Architecture of the V model:

Process of the V model:
1) Project and requirement planning:
Resource allocation is done, such as project manager, team leader, team manager.
2) Product requirement and specification analysis:
In this step the entire system is analyzed.
3) Architecture and high-level design:
Design the application architecture (1, 2, 3, N tier) and define the functionality of the application.
4) Low-level design:
In this step, design the sub-modules and algorithms for the entire architecture of the system (high and low level).
5) Coding: Developers transform the algorithms into code.
6) Unit testing: Checking or testing each and every developed unit.
7) Integration testing: Establishing connections between units and checking the relations among them.
8) System and acceptance testing: Check the entire software system in the company environment and in the customer environment.
9) Production and maintenance: Deploy the application into the customer environment and provide support to the customer, gathering new requirements for enhancement.

ADVANTAGES OF THE V MODEL:

It emphasizes both verification and validation throughout development.
Every deliverable must be tested.
The project manager tracks progress by milestones.
Easy to use.

DISADVANTAGES:

Does not handle parallel events well in some cases.
Cannot accommodate dynamic requirements.
Does not include risk analysis.

WHEN CAN WE GO FOR THE V MODEL:

When high reliability of the application is required (ex: hospital management application).
All requirements must be known upfront.
Technology is understood.

INCREMENTAL MODELS:

These models are best suitable for big application development.
There is no need to know all the requirements before developing.
Requirements may not be stable.

Some of the incremental models:

1. Prototype model (rapid application development)
2. Spiral model
3. Agile model

PROTOTYPE MODEL:

Process:

Company people develop a prototype during the requirements phase.
The prototype is evaluated by the user.
The user gives corrective feedback.
Developers refine the prototype.
When the user is satisfied, the corresponding requirements become the base for the original application.

INTERNAL PROCESS:

A preliminary model is developed.
The model is the partial requirement specification for the prototype.
Designers build the database, UI and algorithms.
Designers demonstrate the prototype to the customer for getting requirement suggestions and early feedback.
This loop continues until the user is satisfied.

ADVANTAGES OF THE PROTOTYPE MODEL:

The customer can see the system requirements as they are being gathered.
The developer learns from the customer.
A more accurate end product.
Unexpected requirements are accommodated (dynamic).
Allows flexibility in designing and development, and awareness of additionally needed functionality.

DISADVANTAGES:

Bad reputation with the customer for quick-and-dirty methods.
Overall maintenance may be an overhead.
The customer may want the prototype itself delivered.
The process may continue forever.

WHEN CAN WE GO FOR THE PROTOTYPE MODEL:

When the customer is not sure about his requirements.
Once the requirements are clarified.
If the application has a user interface.
When the customer needs a short and live demonstration.
When the application is object oriented.

RAD MODEL:
Flow:

PROCESS:

1) Requirements planning phase:
A structured workshop is utilized to discuss the business problems.
In this phase the customer and the customer's representatives, and the company's business analysts, subject matter experts, project manager and team manager are available.
2) User description phase:
At the time of gathering requirements, some company employees work at the customer side and gather the requirements from the customer.
All requirements are stored in a modularized tool; this tool captures the requirements as well as generates screens (GUI).
Whatever information is gathered by the tool is sent to the company at particular periods of time.
3) Construction phase:
From the information gathered by the tools, the developers generate the code.
4) Cutover phase:
The application is deployed in the customer environment.
Training is provided by the test engineers to the customer or the customer's representatives.

Acceptance testing is carried out in this phase only.

ADVANTAGES OF THE RAD MODEL:

Reduces the cycle time.
The entire work is time-boxed.
The customer is involved throughout the life cycle.
Focus moves from documents to code (WYSIWYG).
Modularized tools capture the information about the requirements: data, behavior, and navigation.

DISADVANTAGES:

The accelerated process can give a false impression.
Risks may never achieve closure.
Hard to use with legacy systems.
We need to have a modularized tool to implement this process.
Developers and customers must be committed to the rapid activities.

WHEN CAN WE GO FOR THE RAD MODEL:

If we know the basic requirements.
Whenever the client wants to see all the SDLC activities.
The application must be time-boxed.
Whenever functionality needs to be added as new requirements.
When high performance is not required.
When low technical risk is involved.

SPIRAL MODEL:
Flow:

ARCHITECTURE:

This model adds 4GL RAD prototyping into the waterfall model.
Each cycle involves the same sequence of steps as the waterfall model.

OBJECTIVES OF THE SPIRAL MODEL:

1) Functionality
2) S/W and H/W application
3) Critical success factor areas
4) GUI, etc.

ALTERNATIVES:

Build the s/w application.
Renew the SLA (service level agreement).
Reusability, etc.

CONSTRAINTS:

Cost
Time schedule
Interface (GUI)

PROCESS:

Gather the new requirements.
Develop the prototype and demonstrate it to the customer.
Identify and analyze the risks.
Conduct verification on the application.
Deliver the application to the customer.

ADVANTAGES:

Provides easy identification of risks without much cost.
Users see the system early because of RAD prototyping tools.
The design does not have to be perfect.
Users can be closely tied to all development activities.
Early and frequent feedback from the users.

DISADVANTAGES:

Time is spent evaluating the risks.
Risk analysis is mandatory for both low- and high-level projects.
Expert people are required for risk assessment.
It is a complex model.

WHEN TO USE THE SPIRAL MODEL:

For maintenance projects.
When the user is unsure of the requirements.
For high-level projects.
When a prototype is available.
When the requirements are complex.

Software testing:

It is a process of both verification and validation.

VERIFICATION:
It is a process of verifying the documents (by team members) and the process (by managers) to check whether we are developing the right system or not.
VALIDATION:

It is a process of checking conducted on the code (by developers) and the functionality (by test engineers) of the application to check whether we have developed the right system or not.

Architecture:

The left side is always the base for the right-side activity: the system requirements are the baseline for system testing, and the FRS document is the baseline for acceptance testing.
Testing is conducted properly to identify the defects as soon as possible.

VERIFICATION:
Quality Assurance:
Monitoring and measuring the strength of the development process is called quality assurance.
Unconventional testing:
It is a process of testing conducted on the documents and the company process; this testing is conducted by quality assurance people (PM, TM).

VALIDATION:
Quality Control:
The verification of the s/w product (code-level validity and functional-level validity) is called QC.
Conventional testing:
It is a process of testing conducted by the test engineers and developers to check whether the application is working properly or not.

QA and QC:

1) Code-level validation:
Checking the code, whether it is working according to the developer's expectations or not.
2) Functionality-level validation:
Checking the functionality, whether it is working according to the customer's expectations or not.
Testing:

Testing is a process in which defects are identified,
the defects are isolated and reported to the developers,
testing is done again on the affected areas,
in order to make quality software and release it into the market.

Testing Principles:
1) Early testing
2) Exhaustive testing
3) Pesticide paradox
4) Testing is context dependent
5) Presence of errors
6) Absence of errors
7) Defect clustering

Early testing:
Testing conducted in the initial stages of software development is called early testing. The main intention of early testing is to reduce the cost of fixing defects.

Exhaustive testing:

If we test a functionality in the system with all possible valid and invalid inputs, then that is exhaustive testing.
Ex: a first name edit box accepting 4 to 20 characters.
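For the edit-box example above, truly exhaustive testing (every possible string) is impractical, so in practice values around the boundaries are tested. The validator below is a hypothetical stand-in for the edit box, used only to sketch the idea.

```python
MIN_LEN, MAX_LEN = 4, 20  # the stated rule: 4 to 20 characters

def accepts_first_name(text):
    """Toy validator standing in for the first-name edit box under test."""
    return MIN_LEN <= len(text) <= MAX_LEN

# lengths around both boundaries: 3, 4, 5 and 19, 20, 21
cases = {
    "abc":    False,  # 3 chars: just below the minimum
    "abcd":   True,   # 4 chars: minimum boundary
    "abcde":  True,   # 5 chars: just above the minimum
    "a" * 19: True,   # just below the maximum
    "a" * 20: True,   # maximum boundary
    "a" * 21: False,  # just above the maximum
}

for value, expected in cases.items():
    assert accepts_first_name(value) == expected
```

Six representative inputs cover the same defects that checking every possible length would, which is why boundary values replace exhaustive input in practice.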

Testing is context dependent:

Testing is context dependent; that is, we cannot implement the same testing activities for all applications.
According to the application type, the implementation of testing will change.

Ex: testing a banking application is different from testing a gaming application.


Pesticide paradox:

We write some test cases to check the application functionality.
At the time of executing the test cases we can identify some defects.
If we execute the same test cases every time, we are not able to identify more defects in the same functionality. To overcome this problem we add some new steps (test cases) to the existing test cases to identify more defects in the application. This process is called the pesticide paradox.

Presence of errors:

As test engineers we need to have a test-to-break attitude; that is, even if an application is already in production (live environment), if the application is given to us again we should be ready to find defects in it.
As test engineers we always need to have both a positive approach and a negative approach; only then can we identify n number of defects.
If we identify n number of defects, automatically that application becomes a quality product.

Absence of errors:
Whenever there are unused areas in the application, even if those areas have defects, we need not concentrate on those areas; this is called absence of errors.
Defect clustering:
Whenever we identify a defect, it may point to a group of defects covering more functionality (defects tend to cluster).
Software testing techniques / methodology:
1) Static testing
2) Dynamic testing:
White box testing
- Unit testing
- Integration testing
Black box testing
- System testing
- User acceptance testing (UAT)
Grey box testing (combination of WBT and BBT)
Why does a software application have defects?

Incorrect requirements
Wrong design
Poor coding
Complex business logic and complex technology

What are the most common defects we can identify in an application?

Incorrect functionality
Incorrect data edits
Poor usability
Poor performance
Incompatibility

Architecture of software testing:

Static testing:

It is a process in which we understand the company process and guidelines.
At the time of conducting static testing we can identify which model the company is using.

In static testing, developers will not execute the system.

Static testing is carried out with the help of reviews and walkthroughs.

Reviews:

Examining process-related work and document-related work is called a review.
Reviews are of different types. They are:
Management review
Technical review
Formal review
Informal review

Management review:

This review is conducted by top-level or middle-level management to monitor the project status.
These reviews are helpful for the management to take the necessary corrective actions if there are any slippages.

Corrective action:
If at all roles commit a repairable mistake, then correcting these mistakes is called corrective action.
Preventive action:
If at all roles commit a mistake which is not possible to repair, then preventing such mistakes at least in the future is called preventive action.
Slippages:
The deviation between the planned effort and the actual effort is called slippage.
Note: daily or weekly project status meetings are called management reviews.
Formal review:
If a review meeting is conducted with a plan, documentation and procedure, then those meetings are called formal review meetings.
Architecture:

Author: writes the document (BA).
Moderator / Inspection leader: the main person who leads the review activity is called the moderator (team manager).
Reviewer / Inspector: a participant of the review process (testers).
Scribe / Recorder: a person who is involved in recording defects during the review meeting is called the scribe.

Phases of formal reviews:

1) Planning
2) Kick-off meeting (any start-up meeting)
3) Preparation
4) Review meeting
5) Rework
6) Follow-up

1. Planning: The first phase of the formal review is the Planning phase. In this phase the
review process begins with a request for review by the author to the moderator (or inspection
leader). A moderator has to take care of the scheduling like date, time, place and invitation of
the review.
The documents should not reveal a large number of major defects.

The documents to be reviewed should be with line numbers.

The documents should be cleaned up by running any automated checks that apply.

The author should feel confident about the quality of the document so that he can join the
review team with that document.

2. Kick-off: This kick-off meeting is an optional step in a review procedure. The goal of this
step is to give a short introduction on the objectives of the review and the documents to
everyone in the meeting.
3. Preparation: In this step the reviewers review the document individually using the related
documents, procedures, rules and checklists provided. Each participant while reviewing

individually identifies the defects, questions and comments according to their understanding
of the document and role.
Usually the checking rate is in the range of 5 to 10 pages per hour.
4. Review meeting: The review meeting consists of three phases:

Logging phase: In this phase the issues and the defects that have been identified during the preparation step are logged page by page. The logging is basically done by the author or by a scribe. A scribe is a separate person who does the logging and is especially useful for formal review types such as an inspection. Every defect and its severity should be logged in one of the three severity classes given below:
Critical: The defect will cause downstream damage.
Major: The defect could cause downstream damage.
Minor: The defect is highly unlikely to cause downstream damage.

Discussion phase: If any issue needs discussion then the item is logged and then handled in the
discussion phase. As chairman of the discussion meeting, the moderator takes care of the people
issues and prevents discussion from getting too personal and calls for a break to cool down the
heated discussion. The outcome of the discussions is documented for the future reference.
Decision phase: At the end of the meeting a decision on the document under review has to be
made by the participants, sometimes based on formal exit criteria. Exit criteria are the average
number of critical and/or major defects found.
5. Rework: In this step, if the number of defects found per page exceeds a certain level, then the document has to be reworked. Not every defect that is found leads to rework.
It is the author's responsibility to judge whether a defect has to be fixed.
If nothing can be done about an issue, then at least it should be indicated that the author has considered the issue.
6. Follow-up: In this step the moderator checks to make sure that the author has taken action on all known defects. If it is decided that all participants will check the updated documents, then the moderator takes care of the distribution and collects the feedback.
It is the responsibility of the moderator to ensure that the information is correct and stored for future analysis.
Inspections and audits are examples of formal reviews.
Inspection: If a formal review is conducted while executing a task, then it is called an inspection.
Audit: If a formal review is conducted after completion of a task, it is called an audit.
Technical review:

It is a less formal review.

It is led by a trained moderator but can also be led by a technical expert.

It is often performed as a peer review without management participation.

Defects are found by experts (such as architects, designers, key users) who focus on the content of the document.

In practice, technical reviews vary from quite informal to very formal.

The goals of the technical review are:

1. To ensure that at an early stage the technical concepts are used correctly
2. To assess the value of technical concepts and alternatives in the product
3. To have consistency in the use and representation of technical concepts
4. To inform participants about the technical content of the document

Informal reviews:
If a review is conducted without following any procedure or documentation, then these reviews are called informal reviews.

Peer reviews:
A review conducted among colleagues is called a peer review.

Objectives of reviews:

To find defects in the requirements
To find defects in the design
To identify deviations in any process
To provide valuable suggestions to improve the process

Walkthroughs:

A step-by-step presentation given by a business analyst, domain expert or subject matter expert.
Ex: KT (knowledge transfer) is the best example of a walkthrough.
Difference between static and dynamic testing:

Static testing:
1) Testing is done without executing the program and functionality.
2) This testing does verification.
3) This testing aims to prevent defects.
4) This testing gives an assessment of the code and documents.
5) It involves checklists and documents to be followed.
6) This testing is performed before completion of the system.
7) It covers structural and statement testing.
8) The cost of fixing defects is less.
9) More reviews and comments are highly recommended for good quality.
10) It requires lots of meetings.

Dynamic testing:
1) Testing is done by executing the program and functionality.
2) This testing does validation.
3) This testing finds and fixes defects.
4) It gives the bugs / bottlenecks of the s/w system.
5) It involves test cases for execution.
6) This testing is performed after completion of the system.
7) It covers the executable code.
8) The cost of fixing defects is high.
9) Finding more defects is highly recommended for good quality.
10) Comparatively fewer meetings are required.

Dynamic testing:
WHITE BOX TESTING

It is conducted by developers.
It is conducted on the source code of the application.
The main intention of WBT is to ensure 100% code coverage.
It checks whether the written code is working according to the developer's expectations or not.
At the time of conducting WBT, developers conduct unit and integration testing.
This testing is also called clear box testing, structural testing and glass box testing.

Need of WBT:

Identifying defects is easy because the code is visible.
To ensure 100% code coverage.
To reduce the time for fixing the defects identified by testers.

Unit testing:

A unit is nothing but a small testable program in the source code of the application, such as a class, function, object, method, procedure, etc.

If all these units are working according to the design expectations, then that testing is called unit testing.
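A minimal unit-test sketch using Python's built-in unittest module; the unit under test (discount) is a hypothetical function, not from any real application.

```python
import unittest

def discount(price, percent):
    """Unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

# run the unit tests programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method checks the unit in isolation: normal behavior, a boundary (zero discount), and rejection of invalid input.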
WBT techniques:
1) Basis path testing
2) Programming technique testing
3) Control structure testing
4) Mutation testing
Basis path testing:

In this technique developers follow the steps below:

Identify the design in the technical document.
Identify all the independent paths and prepare scripts for them (cyclomatic complexity).
Execute all the written code at least once.
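The steps above can be sketched on a small hypothetical function. Treating each compound condition as a single decision, grade() below has two decision points, so its cyclomatic complexity is 2 + 1 = 3; three independent paths must each be executed at least once.

```python
def grade(score):
    """Illustrative unit with three independent paths."""
    if score < 0 or score > 100:   # decision 1
        return "invalid"
    if score >= 60:                # decision 2
        return "pass"
    return "fail"

# one input per independent path
path_cases = [
    (150, "invalid"),  # path 1: decision 1 true
    (75,  "pass"),     # path 2: decision 1 false, decision 2 true
    (30,  "fail"),     # path 3: both decisions false
]

for score, expected in path_cases:
    assert grade(score) == expected
```

With these three inputs every statement in grade() is executed at least once, which is exactly the coverage goal of basis path testing.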

Control structure testing:

In this technique developers check the completeness and correctness of the program by implementing loops and conditions.

Programming technique testing:

In this technique developers monitor the execution time.
If the code is not executing according to the developer's expected time, they change properties and values in the code to make the execution faster.

Mutation testing:
At the time of changing the program, checking whether that change is impacting the unchanged areas is the technique called mutation testing.
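The mutation idea can be sketched as follows: deliberately change one operator in the program (a "mutant") and rerun the same tests; if the tests still pass, they are too weak to notice the change. The functions below are illustrative only.

```python
def original_max(a, b):
    """Original program: returns the larger of two values."""
    return a if a > b else b

def mutant_max(a, b):
    """Mutant: the '>' operator has been deliberately changed to '<'."""
    return a if a < b else b

def run_tests(fn):
    """Run the same small test suite against any candidate function."""
    cases = [((3, 5), 5), ((9, 2), 9), ((4, 4), 4)]
    return all(fn(*args) == expected for args, expected in cases)

assert run_tests(original_max)    # the tests pass on the original
assert not run_tests(mutant_max)  # the tests "kill" the mutant
```

A mutant that the suite fails to kill reveals a gap in the test cases, which is the purpose of this technique.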
Integration testing:

After completion of unit testing, developers concentrate on the integration areas.
In integration testing developers establish the relations between related units, and they check whether the interconnections of the modules are working as expected or not.
At the time of conducting integration testing, developers follow one of these approaches:
1) Top-down approach
2) Bottom-up approach
3) Big bang approach
4) Sandwich approach

Top-down approach:

This approach is recommended when there are incomplete programs at the bottom level. In this approach integration testing is carried out level by level; the incomplete programs at the bottom level are replaced with stubs.

STUB:
A simulated program that replaces a called program is called a stub.
Bottom-up approach:
This approach is recommended when there are incomplete programs at the top level. In this approach the incomplete programs at the top level are replaced with drivers.

Driver:
A simulated program that replaces a calling program is called a driver.
Big bang testing:
This approach is recommended when all source code units are available and unit tested. In this approach all source code units are combined together as one large system, and then the integration among all of them is validated. It takes very little time to conduct integration testing, but if any defects are encountered, finding the root cause of a defect becomes a difficult task.

Sandwich approach:
This approach combines the top-down and bottom-up approaches of integration testing. In this approach the middle-level modules are tested using drivers and stubs.

Called: when a dummy program is being called by a developed program, that dummy program is called a stub.
Calling: when a dummy program calls a developed program, that dummy program is called a driver.
Note:
What is unit testing?
Checking whether a piece of work functions by itself is called unit testing.
What is integration testing?
Checking whether the different pieces work together is called integration testing.
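The stub and driver ideas above can be sketched in code; all names here are illustrative, not from a real application:

```python
# Top-down integration: the called module (interest calculation) is
# unfinished, so a stub stands in. Bottom-up integration: the calling
# module is missing, so a driver exercises the finished unit.

def interest_stub(amount):
    """Stub: simulated replacement for the unfinished called module."""
    return 100  # fixed, simulated response

def loan_summary(amount, interest_fn=interest_stub):
    """Top-level (calling) program integrated against the stub."""
    return amount + interest_fn(amount)

def driver():
    """Driver: temporary caller that exercises loan_summary."""
    return loan_summary(1000)
```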

Difference between white box and black box testing:

White Box Testing
1) Testing is conducted by developers.
2) It checks the internal source code based on the design.
3) Unit and integration testing are carried out as white box testing.
4) It is used to ensure 100% code coverage.
5) i.e. it checks whether the code is working as per the developers' expectations or not.
6) Programming knowledge is required.
7) It is the most exhaustive, expensive and time consuming.
8) WBT is also called structural testing, clear box testing (CBT), glass box testing (GBT).

Black Box Testing
1) Testing is conducted by testers, domain experts, end users.
2) It tests the external behavior against the requirements, without looking at the internal structure of the system.
3) System testing and user acceptance testing are carried out as black box testing.
4) It is used to ensure 100% customer requirement coverage.
5) It is used to check whether the application is working according to customer expectations or not.
6) Functional knowledge is required.
7) It is less exhaustive and less time consuming.
8) BBT is also called closed box testing, specification-based testing.

Black box testing:

It is the process of checking whether the system is working according to the customer requirements or not.

This testing is initiated after integration testing.
While conducting system testing we need to check functional and non-functional requirements.
While conducting system testing we can apply manual and automation testing.
Here we need to verify whether the given input is producing the expected output or not.
In this testing we need to conduct the below types of testing, such as:
1) Usability testing
2) Regression testing
3) Functional testing
4) Recovery testing etc.

System testing is broadly classified into:

1) Functional system testing
2) Non-functional system testing
Functional system testing:
Validating the functional business requirements of the system is called functional system testing.
Non-functional system testing:
Validating the non-functional requirements of the system, such as performance, load, compatibility, user interface, usability etc., is called non-functional system testing.
System testing approach:
As system testing should be carried out from the end user perspective, we need to cover all possible actions carried out by the end user. To cover all possible operations we have to conduct both positive and negative testing.
Positive testing:

Testing conducted on the application with a positive perspective, to check what the system is supposed to do, is called positive testing.
Example: entering a valid username and a valid password and clicking on the submit button, to determine what login is supposed to do, is positive testing.
The objective of positive testing is conformance to requirements.

Negative testing:

Testing conducted on the application with a negative perspective, to determine what the system is not supposed to do, is called negative testing.
Example: entering a valid username and an invalid password and clicking on the submit button, to determine what login is not supposed to do, is negative testing.
The objective of negative testing is finding defects.
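The positive and negative login checks described above can be sketched as tests against a hypothetical login() function; the user data and rules below are assumptions for illustration:

```python
# Positive vs negative testing sketch (illustrative data and API).

VALID_USERS = {"ramesh": "secret1"}

def login(username, password):
    """Return True only for a registered username/password pair."""
    return VALID_USERS.get(username) == password

def test_valid_login():
    """Positive test: what the system IS supposed to do."""
    assert login("ramesh", "secret1") is True

def test_invalid_password():
    """Negative test: what the system is NOT supposed to do."""
    assert login("ramesh", "wrong") is False

def test_blank_username():
    """Negative test: blank username must not log in."""
    assert login("", "secret1") is False
```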

Difference between positive and negative testing:

Positive testing:
1) It means giving valid data to the required inputs.
2) In this testing testers always check only valid sets of values.
3) It is done from a positive perspective. Ex: checking mobile number functionality with 9440247045.
4) It is always done to verify known test conditions.
5) This testing checks how the project or product behaves with a valid set of data.
6) The main aim of positive testing is to check whether the system is working according to the customer requirements or not.
7) We perform positive testing to prove the application is working properly.

Negative testing:
1) It means testing the application by giving invalid data.
2) In this testing testers always check invalid sets of values.
3) It is done from a negative perspective. Ex: checking mobile number functionality with invalid input 9440456abc.
4) It is always done to try to break the project or product with unknown sets of test conditions.
5) This testing checks how the product or project does not behave (defects) when provided invalid data.
6) The main aim of negative testing is to try to break the application and find out whether it has defects or not.
7) We conduct negative testing to identify defects.

Black Box Testing Techniques

Exhaustive testing is not possible, but at the same time we need to ensure 100% customer requirement coverage. That is where we use the BBT techniques:
1) Equivalence class partition (ECP)
2) Boundary value analysis (BVA)
3) Decision table testing (DTT)
4) State transition testing (STT)
5) Use case testing (UCT)

Equivalence class partition:

According to ECP, first analyze all possible valid and invalid inputs and divide them into groups (classes). While making groups, make sure that every input that belongs to a group produces the same output.
As every input in a group produces the same output, every input carries equal priority for testing. So there is no need to test with every input; consider one input from each class, preferably a middle value, for testing.

Ex: prepare input data using the ECP technique to check whether the system displays the appropriate message or not based on the type of character entered.

Ex2: In a shopping mall application the employee salary validations are given below.
1. Salary: minimum 5000 to maximum 50000
2. Numeric only, mandatory. Prepare the test data for the given requirement.

VALID
- Salary between 5000 and 50000: 5000, 25000, 50000

INVALID
- Salary < 5000: 4999, 4998, -1, -2, -3, -Infinity
- Salary > 50000: 50001, Infinity
- NULL: <BLANK>
- Non-numeric: abc, abc123
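The salary classes above can be expressed as one representative value per equivalence class; the validator below is an illustrative sketch, not the real application:

```python
# ECP sketch for the salary rule: numeric, mandatory, 5000-50000.

def is_valid_salary(value):
    """Accept only numeric salaries in the inclusive range 5000-50000."""
    if value is None or isinstance(value, bool):
        return False                      # NULL class
    if not isinstance(value, (int, float)):
        return False                      # non-numeric class
    return 5000 <= value <= 50000         # range classes

# One representative per equivalence class is enough under ECP.
representatives = {
    "valid range": (25000, True),
    "below range": (4999, False),
    "above range": (50001, False),
    "null":        (None, False),
    "non-numeric": ("abc", False),
}
```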

Ex 3: In a bank application the service charges for fund transfers are given below.
1) 1000 to 10000 Rs: 500 Rs service charge
2) 10001 to 50000 Rs: 700 Rs service charge
3) 50001 to 100000 Rs: 1000 Rs service charge
4) Less than 1000 or greater than 100000 is not transferable.
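The slab rules above can be sketched as a function whose slab boundaries then become natural test targets; the function name is illustrative:

```python
# Service charge slabs from the requirement above.

def service_charge(amount):
    """Return the service charge in Rs, or None if not transferable."""
    if 1000 <= amount <= 10000:
        return 500
    if 10001 <= amount <= 50000:
        return 700
    if 50001 <= amount <= 100000:
        return 1000
    return None  # below 1000 or above 100000: not transferable
```

Testing the exact slab edges (999, 1000, 10000, 10001, 100000, 100001) is where defects in the comparison operators are most likely to hide.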

Boundary value analysis:

It has been observed that most of the time programmers commit mistakes while implementing boundary conditions such as (>, <, >=, <=). To identify this kind of defect, BVA was introduced into BBT.
According to BVA, for the portions where there are ranges, determine the outer boundaries and inner boundaries, if any. Consider the lower boundary value and upper boundary value of every boundary as valid inputs, and consider LBV-1 and UBV+1 of the outer boundaries as invalid inputs.
Evident advantages of BVA are improving code robustness and preparing the system for worst-case scenarios. Robustness improves because both clean and dirty test cases are utilized in testing: clean cases represent values within the allowable range, and dirty cases exercise the capability to handle worst-case conditions.
Note: ECP and BVA together ensure 100% requirement coverage.
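The BVA rule above (boundaries themselves valid, one step outside invalid) can be sketched as a small helper, here applied to an example 6-18 range; the helper name and the choice of a middle value are illustrative:

```python
# BVA sketch: boundary test values for an inclusive [low, high] range.

def boundary_values(low, high):
    """Return (valid, invalid) boundary test values for an inclusive range."""
    valid = [low, low + 1, (low + high) // 2, high - 1, high]
    invalid = [low - 1, high + 1]
    return valid, invalid

# Example: a field that allows 6-18 characters.
valid_lengths, invalid_lengths = boundary_values(6, 18)
```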
Ex: In a login processing page, user id and password fields are there. The user id allows 6-18 uppercase characters; the password allows 5-17 alphanumeric and special characters. Prepare test data.

User Name Table
BVA valid: 6, 7, 13, 17, 18
BVA invalid: 5, 19
ECP valid: A-Z
ECP invalid: 0-9, lower case, alphanumeric, special characters

Password Table
BVA valid: 5, 6, 11, 16, 17
BVA invalid: 4, 18
ECP valid: A-Z, lower case, alphanumeric, special characters
ECP invalid: null

Ex: In an insurance application, visitors are able to see insurance policy types on entering their age (16 to 80 years). Prepare test data.
BVA valid (length in digits): 2
BVA invalid (length in digits): 1, 3
ECP valid: 0-9
ECP invalid: null, A-Z, lower case, alphanumeric, special characters

Ex: In a shopping mall application, users are able to see the bill amount on entering a quantity of up to 10. Prepare test data.
BVA valid: 1, 2, 5, 9, 10
BVA invalid: 0, 11
ECP valid: 0-9
ECP invalid: null, A-Z, lower case, alphanumeric, special characters

Ex: In an e-bank application, customers are able to log in by entering a password (6-digit numeric), an area code (3-digit number), a prefix (4-character alphanumeric) and a suffix (5-digit number that should start with 0 or 1). Prepare test data.

Password Table
BVA valid: 6
BVA invalid: 5, 7
ECP valid: 0-9
ECP invalid: UC, LC, special characters, alphanumeric

Area Code (3-digit number) Table
BVA valid: 3
BVA invalid: 2, 4
ECP valid: 0-9
ECP invalid: UC, LC, special characters, alphanumeric

Prefix (4-character alphanumeric) Table
BVA valid: 4
BVA invalid: 3, 5
ECP valid: 0-9, UC, LC
ECP invalid: special characters

Suffix (5-digit number) Table
BVA valid: 5
BVA invalid: 4, 6
ECP valid: 0-9, starting with 0 or 1
ECP invalid: UC, LC, special characters, alphanumeric

Decision table testing:

Most of the time test engineers have to give multiple sets of inputs to execute a functionality, and so have to write a number of test cases for it.
Before writing the test cases, we build a decision table so that the test cases can be written in a simple format. That is where decision tables are used.

Ex: write a decision table for login functionality

Test case id | Test case description | Expected value
TC 01 | Enter valid UN and PSWD, click on OK button | System should display the FR window
TC 02 | Enter invalid UN (<4 chars), valid PSWD, click on OK button | System should display a message: UN must be at least 4 chars
TC 03 | Enter invalid UN (blank), valid PSWD, click on OK button | System should display a message: please enter UN
TC 04 | Enter valid UN, invalid PSWD, click on OK button | System should display a message: please enter valid PSWD
TC 05 | Enter valid UN, blank PSWD, click on OK button | System should display a message: please enter password
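A decision table like the one above maps naturally onto data-driven test cases; the validate_login() below and its messages are assumptions sketched for illustration, not a real API:

```python
# Decision table testing sketch: one row per test case.

def validate_login(username, password):
    """Illustrative login rules matching the decision table."""
    if not username:
        return "Please enter username"
    if len(username) < 4:
        return "Username must be at least 4 characters"
    if not password:
        return "Please enter password"
    if (username, password) != ("admin", "secret"):
        return "Please enter a valid password"
    return "FR window displayed"

decision_table = [
    ("admin", "secret", "FR window displayed"),                     # TC 01
    ("ab",    "secret", "Username must be at least 4 characters"),  # TC 02
    ("",      "secret", "Please enter username"),                   # TC 03
    ("admin", "wrong",  "Please enter a valid password"),           # TC 04
    ("admin", "",       "Please enter password"),                   # TC 05
]
```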

State transition testing:

It is a type of dynamic testing.

In this testing we check the state transitions which are defined at the code level, or which affect the functionality.

This is one type of check, often security related, to verify whether the system enforces a finite set of states and transitions.

Ex: in a login window, the user is allowed to perform the login operation up to 5 times; after the 5th failed attempt, the corresponding application is terminated.
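The 5-attempt login example can be modelled as a small state machine; the class and state names below are assumptions for illustration:

```python
# State transition sketch: active -> logged_in or active -> locked.

class LoginSession:
    MAX_ATTEMPTS = 5

    def __init__(self):
        self.attempts = 0
        self.state = "active"

    def try_login(self, ok):
        """One transition per attempt; lock after 5 failed attempts."""
        if self.state == "locked":
            return "locked"               # no transitions out of locked
        if ok:
            self.state = "logged_in"
        else:
            self.attempts += 1
            if self.attempts >= self.MAX_ATTEMPTS:
                self.state = "locked"     # application terminates session
        return self.state
```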

Use case testing:

Most of the time, offshore testing department people depend upon the use case document for writing test cases.
At the time of execution we check whether the application has been developed according to the use cases or not.
A use case is nothing but a user action and the corresponding system response.

User acceptance testing:

It is the process of testing conducted on an application to determine whether the application is ready for use or not. User acceptance testing is initiated after system testing and is performed by domain experts or end users. UAT can be done at 2 levels:
1) Alpha testing
2) Beta testing
Alpha testing:
This is the first level of acceptance testing, conducted at the development premises. In this type of testing the users are invited to the development center, where they use the application and the developers note every input or action carried out by the users. Any abnormal behavior of the system is noted and rectified by the developers.
Beta testing:
This is the second level of acceptance testing, conducted at the customer premises. In this type of testing the software is distributed as a beta version to the users, and the users test the application at their own sites. As the users explore the software, any exceptions or defects are reported to the developers.

Difference between alpha and beta testing:

Alpha testing:
1) It is performed by testers who are usually internal resources of the organization.
2) It is performed at the development site (software company) in a staging environment.
3) Reliability and security testing are not performed in alpha testing.
4) It involves both WBT and BBT.
5) It requires a separate environment.
6) A long execution cycle may be required for alpha testing.
7) Critical issues are fixed by developers immediately.
8) It ensures the quality of the product before moving to beta testing.

Beta testing:
1) It is conducted by the client or end user.
2) It is performed at the client or end user location.
3) Reliability, security and robustness are checked during beta testing.
4) It typically involves BBT only.
5) It does not require any separate environment. The software is made available on web sites; users can download and execute the files.
6) A few weeks of execution will be required for beta testing.
7) Any defects or feedback received from the end users are fixed in the next version.
8) Beta testing also concentrates on the quality of the product and gathers user input on it, to ensure the product is ready for use or not.

Gray Box Testing:

It is a combination of both BBT and WBT.

Usually this testing is conducted by testing department people or development people.
As test engineers we perform some operations at the front end of the application and check that the front-end functionality is working as expected.
While conducting this testing, if we do not consult the back-end database, we have no confirmation whether the data was actually inserted or not.

In the above example, to check whether the login displays the right module to the right user or not, we need to interact with both the database and the application, which is called GBT.
Note:
Database testing is the best example of GBT.

Database testing:
Validating at the back end the various operations performed at the front end, validating at the front end the various operations performed at the back end, validating the database design (field data types, field sizes, constraints), and also validating SQL scripts such as stored procedures and triggers, is collectively called database testing.
Need for database testing: in general a test engineer confirms a functionality by seeing the appropriate response in the application.
Ex: to check the employee registration functionality, a test engineer inputs a valid employee number, employee name, designation and salary and clicks on the submit button. If the application displays a message "employee created successfully", he assumes that the functionality is justified. But here the message box is a programming technique, not a confirmation from the database.
So it is not guaranteed that the data is really stored in the database. In order to confirm that, database testing is required.
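The point above can be sketched with an in-memory SQLite table: the success message alone proves nothing, so the tester queries the database directly. The table and function names are illustrative:

```python
# Database testing sketch: verify the row at the back end,
# not just the front-end success message.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_no INTEGER, name TEXT, salary REAL)")

def create_employee(emp_no, name, salary):
    """Stand-in for the front-end submit action."""
    conn.execute("INSERT INTO employee VALUES (?, ?, ?)",
                 (emp_no, name, salary))
    conn.commit()
    return "employee created successfully"  # message alone proves nothing

msg = create_employee(101, "Ravi", 25000)

# Back-end check: query the table directly to confirm the insert.
row = conn.execute(
    "SELECT name, salary FROM employee WHERE emp_no = 101").fetchone()
```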
Difference between user interface testing and database testing:

User interface testing:
1) It is also called front-end testing.
2) This testing mainly deals with objects (edit boxes, dropdowns, tables, images, links, etc.) which are created by writing programs (Java, .NET, PHP, Delphi, etc.).
3) This type of testing includes validating the data at the front end, in controls such as edit boxes, dropdowns, tables, calendars, buttons, etc.
4) Testers must have knowledge about the functionality (business process). It can be conducted manually or with automation.

Database testing:
1) It is also called back-end testing.
2) This testing mainly deals with all the table items which are hidden in the database, created by the developer (SQL Server, Oracle, MySQL, DB2, Sybase, etc.).
3) This type of testing involves validating the schema, table columns, keys, indexes, constraints, triggers, stored procedures, data duplication, etc.
4) In order to be able to perform back-end testing, the tester must have strong knowledge of databases.

Software Testing Life Cycle Practical Session

Role | Phase | Deliverable
1) Project / test manager | Test planning | Test strategy, test plan
2) Test engineer | Test analysis | BRS/SRS study, RCN preparation
3) Test engineer | Test design | Test scenarios, test cases / input data, traceability matrices
4) Test engineer | Test execution | Execute test cases, bug reporting, bug tracking, retest
5) Team / project manager | Test closure | Test summary report

1) Test plan:

It is a document which describes how to perform testing in an efficient, effective and optimized way.
It is derived from the test strategy.

Test strategy:
It is a high-level document which describes the testing approaches we need to implement while conducting testing in the organization.
The test plan document differs from application to application.
Test plan template:

Phase: Test plan

Entry criteria:
1) Requirement documents
2) Requirement traceability matrix
3) Test automation feasibility document

Activity criteria:
1) Identify the various testing approaches
2) Select the best suitable approach for the project
3) Prepare the test plan for the various types of testing
4) Tool selection
5) Effort estimation
6) Resource allocation

Exit criteria:
1) Test plan document approved by the project manager, with client-side approvals
2) Approved effort estimation document

Deliverable: Test plan document

Test plan content:

Section 1: Document revision


1.1 Company document approval
1.2 Customer document approval

Section 2: Introduction
2.1Purpose
2.2 Scope
2.3 Over view
2.4 Definition, Acronyms and abbreviation (terminology)
2.5 Reference

Section 3: Scope of testing


3.1 Product overview
3.2 Scope of testing
3.2.1 Within scope of testing
3.2.2Out of scope of testing
3.3 Requirement criticality classification guideline (requirement priority)
3.4 Functional requirements
3.5 Non functional requirements

Section 4: Assumption and risk


4.1 Assumption and schedule
4.2 Risk management

Section 5: Deliverable and schedule


5.1 Test design
5.2 Reporting

5.3 Escalations
5.3.1 Configuration management
5.3.2 Test phase
5.3.3 Test activity

Section 6: Testing activities


6.1 Testing activity
6.2 Smoke testing
6.2.1 Objective
6.2.2 Entry criteria
6.2.3 Exit criteria
6.2.4 Test suspension resumption
6.3 User interface testing
6.3.1Objective
6.3.2 Entry criteria
6.3.3 Exit criteria
6.3.4 Test suspension resumption
6.3.5 Special consideration
6.4 Functional testing
6.4.1Objective
6.4.2 Entry criteria
6.4.3 Exit criteria
6.4.4 Test suspension resumption
6.4.5 Special consideration
6.5 Regression testing
6.5.1Objective
6.5.2 Entry criteria

6.5.3 Exit criteria


6.5.4 Test suspension resumption
6.5.5 Special consideration
6.6 Test automation
6.6.1Objective
6.6.2 Critical for inclusion of test cases in automation
6.6.3 Entry criteria
6.6.4 Test suspension resumption
6.6.5 Automation areas
6.7 Performance testing
6.7.1Objective
6.7.2 Entry criteria
6.7.3 Exit criteria
6.7.4 Test suspension resumption
6.7.5 Special consideration

Section 7: Test team


7.1 Roles and responsibility
7.2 Training needs
7.3 Acceptance
Section 8: Test environment
8.1 Tools
8.2 Test requirement
Section 9: Test execution
9.1 Test execution
9.2 Quality gates
9.3 Test metrics

Section 10: Defect management


10.1 Priority and severity guidelines
10.2 Remark status

2) Test analysis or Requirement analysis

Phase: Test analysis or requirement analysis

Entry criteria:
1) Requirement documents must be available
2) Acceptance criteria defined
3) Application documents must be available

Activity criteria:
1) Understand the functional and non-functional requirements of the system
2) Identify navigation in the modules and user properties
3) Gather task flow diagram information
4) Identify the tests to be performed
5) Gather details about requirement priority
6) Identify test environment details
7) Identify areas to automate

Exit criteria:
1) RTM is signed off
2) Automation areas approved and signed by the client

Deliverable: RTM and automation feasibility report (if applicable)

FRS document

1.1.0 Overview: it describes the overview of the corresponding module.
1.1.1 Prototype: it is a snapshot of the corresponding application.
1.1.2 Page elements: it describes the corresponding objects and object types, ex: buttons, images, links etc.
1.1.3 Input validation and error status: it describes what inputs we need to give in the corresponding functionality, and the data structure also.
1.1.4 Task flow diagram: it describes the corresponding user flow in the application.
1.1.5 Use case diagram: it describes the user actions and system responses.

RCN (requirement clarification note)

We use this template when we get any doubts in the FRS document.
We report / escalate the doubts using this template for clarification; that document is called the RCN.

Requirement Clarification Note

RTM (Requirement traceability matrix):


What is a Traceability Matrix?
The focus of any testing engagement is and should be maximum test coverage. By coverage,
it simply means that we need to test everything there is to be tested. The aim of any testing
project should be 100% test coverage.
Requirements Traceability Matrix to begin with, establishes a way to make sure we place
checks on the coverage aspect. It helps in creating a snap shot to identify coverage gaps.
How to Create a Traceability Matrix?
To begin with, we need to know exactly what it is that needs to be tracked or traced.
Testers start writing their test scenarios/objectives, and eventually the test cases, based on
input documents: the Business Requirements document, the Functional Specifications
document and the Technical design document (optional).
Important Points to Note About Traceability Matrix
The following are the important points to note about this version of the Traceability Matrix:

1) The execution status is also displayed. During execution, it gives a consolidated snapshot
of how work is progressing.
2) Defects: When this column is used to establish the backward traceability we can tell that
the New user functionality is the most flawed. Instead of reporting that so and so test cases
failed, TM provides a transparency back to the business requirement that has most defects
thus show casing the Quality in terms of what the client desires.
3) As a further step, you can color code the defect ID to represent their states. For example,
defect ID in red can mean it is still Open, in green can mean it is closed. When this is done,
the TM works as a health check report displaying the status of the defects corresponding to a
certain BRD or FSD functionality is being open or closed.
4) If there is a technical design document or use cases or any other artifacts that you would
like to track you can always expand the above created document to suit your needs by adding
additional columns.
Advantages of a Traceability Matrix:
1. Ensuring 100% test coverage
2. Showing requirement/document inconsistencies
3. Displaying the overall defect/execution status with focus on business requirements
4. If a certain business and/or functional requirement were to change, a TM helps estimate
or analyze the impact on the QA team's work in terms of revisiting/reworking the test
cases.
Additionally,
1. A TM is not a manual testing specific tool, it can be used for automation projects as well.
For an automation project, the test case ID can indicate the automation test script name.
2. It is also not a tool that can be used just by the QAs. The development team can use the
same to map BRD/FSD requirements to blocks/units/conditions of code created to make
sure all the requirements are developed.
3. Test management tools like HP ALM come with the inbuilt traceability feature.
An important point to note is that the way you maintain and update your Traceability
Matrix determines the effectiveness of its use. If not updated often, or updated incorrectly, the tool
is a burden instead of a help, and creates the impression that the tool by itself is not worth
using.
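A minimal sketch of a traceability matrix as data, with the coverage-gap check described above; the requirement and test case IDs are illustrative:

```python
# Traceability matrix sketch: requirements mapped to test cases.

rtm = {
    "BR-001 login":          ["TC-01", "TC-02"],
    "BR-002 funds transfer": ["TC-05"],
    "BR-003 mini statement": [],   # coverage gap: no test case yet
}

def coverage_gaps(matrix):
    """Return requirements that no test case traces back to."""
    return [req for req, tcs in matrix.items() if not tcs]
```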

RTM Template:

3) Test designing:
Test scenario:
A scenario is nothing but a situation or a plan of what is to be tested in the application.
(or)
A scenario is nothing but an item or functionality to be tested in an application.
While writing scenarios we need to use the words "verify" or "check" only; we should not use "enter", "click", "select" etc.
A scenario is nothing but a response from the application.

Phase: Test design

Entry criteria:
1) Requirement documents must be available
2) RTM and test plan
3) Automation analysis report

Activity criteria:
1) Prepare test scenarios
2) Prepare test cases
3) List out the information in the RTM
4) Review test cases (peer review)
5) Prepare automation test scripts
6) Prepare test data

Exit criteria:
1) Test cases approved by the test manager

Deliverable: Test cases

EXAMPLES OF TEST SCENARIOS

TEST SCENARIOS FOR WATER BOTTLE:


1. Check the dimension of the bottle. See if it actually looks like a water bottle or a
cylinder, a bowl, a cup, a flower vase, a pen stand or a dustbin! [Build Verification
Testing!]
2. See if the cap fits well with the bottle. [Install ability Testing!]

3. Test if the mouth of the bottle is not too small to pour water. [Usability Testing!]
4. Fill the bottle with water and keep it on a smooth dry surface. See if it leaks.
[Usability Testing!]
5. Fill the bottle with water, seal it with the cap and see if water leaks when the
bottle is tilted, inverted, squeezed (in case of plastic made bottle)! [Usability
Testing!]
6. Take water in the bottle and keep it in the refrigerator for cooling. See what
happens. [Usability Testing!]
7. Keep a water-filled bottle in the refrigerator for a very long time (say a week). See
what happens to the water and/or bottle. [Stress Testing!]
8. Keep a water-filled bottle under freezing condition. See if the bottle expands (if
plastic made) or breaks (if glass made). [Stress Testing!]
9. Try to heat (boil!) water by keeping the bottle in a microwave oven! [Stress
Testing!]
10. Pour some hot (boiling!) water into the bottle and see the effect. [Stress
Testing!]
11. Keep a dry bottle for a very long time. See what happens. See if any physical or
chemical deformation occurs to the bottle.
12. Test the water after keeping it in the bottle and see if there is any chemical
change. See if it is safe to be consumed as drinking water.
13. Keep water in the bottle for some time. And see if the smell of water changes.
14. Try using the bottle with different types of water (like hard and soft water).
[Compatibility Testing!]
15. Try to drink water directly from the bottle and see if it is comfortable to use. Or
water gets spilled while doing so. [Usability Testing!]
16. Test if the bottle is ergonomically designed and if it is comfortable to hold. Also
see if the center of gravity of the bottle stays low (both when empty and when filled
with water) and it does not topple down easily.
17. Drop the bottle from a reasonable height (may be height of a dining table) and
see if it breaks (both with plastic and glass model). If it is a glass bottle then in most
cases it may break. See if it breaks into tiny little pieces (which are often difficult to
clean) or breaks into nice large pieces (which could be cleaned without much
difficulty). [Stress Testing!] [Usability Testing!]

18. Test the above test idea with empty bottles and bottles filled with water. [Stress
Testing!]
19. Test if the bottle is made up of material, which is recyclable. In case of plastic
made bottle test if it is easily crushable.
20. Test if the bottle can also be used to hold other common household things like
honey, fruit juice, fuel, paint, turpentine, liquid wax etc. [Capability Testing!]

TEST SCENARIOS OF PEN:


1. Verify the type of pen- whether it is ball point pen, ink pen or gel pen.
2. Verify the outer body of the pen- whether it should be metallic, plastic or any
other material as per the specification.
3. Verify that length, breadth and other size specifications of the pen.
4. Verify the weight of the pen.
5. Verify if the pen is with cap or without cap.
6. Verify if the pen has rubber grip or not.
7. Verify the color of the ink of the pen.
8. Verify the odor of the pen.
9. Verify the size of the tip of the pen.
10.Verify the company name or logo of the maker is correct and at desired place.
11.Verify if the pen is smooth.
12.Verify if the pen's ink gets leaked in case it is tilted upside down.
13.Verify if the pen's gets leaked at higher altitude.
14.Verify the type of surfaces the pen can write at.
15.Verify if the text written by pen is erasable or not.
16.Verify pen's and its ink condition at extreme temperature is as per the
specification.
17.Verify the pressure up to which the pen's tip can resist and work correctly.
18.Verify the pen is breakable or not at a certain height as the specification.
19.Verify text written by pen doesn't get faded before a certain time as per the
specification.
20.Verify the effect of water, oil and other liquid on the text written by pen.
21.Verify the condition of ink after long period of time is as per permissible
specification or not.
22.Verify the total amount of text that can be written by the pen at one go.
23.Verify the pen's ink is waterproof or not.
24.Verify if the pen is able to write when used against the gravity- upside down.
25.Verify that in case of ink pen, the pen's ink can be refilled again.

TEST SCENARIOS OF ATM MACHINE:


1. Verify the slot for ATM Card insertion is as per the specification.
2. Verify that user is presented with options when card is inserted from proper
side.

3. Verify that no option to continue and enter credentials is displayed to the user
when the card is inserted incorrectly.
4. Verify that font of the text displayed in ATM screen is as per the
specifications.
5. Verify that touch of the ATM screen is smooth and operational.
6. Verify that user is presented with option to choose language for further
operations.
7. Verify that user asked to enter pin number before displaying any card/bank
account detail.
8. Verify that there are limited number of attempts up to which user is allowed
to enter pin code.
9. Verify that if total number of incorrect pin attempts gets surpassed then user
is not allowed to continue further- operations like blocking of card etc gets
initiated.
10.Verify that pin is encrypted and when entered.
11.Verify that user is presented with different account type options like- saving,
current etc.
12.Verify that user is allowed to get account details like available balance.
13.Verify that the same amount of money gets dispensed as entered by the user
for cash withdrawal.
14.Verify that user is only allowed to enter amount in multiples of denominations
as per the specifications.
15.Verify that user is prompted to enter the amount again in case amount
entered is not as per the specification and proper message should be
displayed for the same.
16.Verify that user cannot fetch more amount than the total available balance.
17.Verify that user is provided the option to print the transaction/enquiry.
18.Verify that the user's session timeout is maintained and is as per the
specifications.
19.Verify that user is not allowed to exceed one transaction limit amount.
20.Verify that user is not allowed to exceed one day transaction limit amount.
21.Verify that user is allowed to do only one transaction per pin request.
22.Verify that user is not allowed to proceed with expired ATM card.
23.Verify that in case ATM machine runs out of money, proper message is
displayed to user.
24.Verify that in case sudden electricity loss in between the operation, the
transaction is marked as null and amount is not withdrawn from user's
account.

TEST SCENARIOS LOGIN PAGE:


1. Verify that the login screen is having option to enter username and password
with submit button and option of forgot password.
2. Verify that user is able to login with valid username and password.
3. Verify that user is not able to login with invalid username and password.

4. Verify that validation message gets displayed in case user leaves username
or password field as blank.
5. Verify that validation message is displayed in case user exceeds the
character limit of the user name and password fields.
6. Verify that there is reset button to clear the field's text.
7. Verify if there is checkbox with label "remember password" in the login page.
8. Verify that the password is in encrypted form when entered.
9. Verify that there is limit on the total number of unsuccessful attempts.
10.From a security point of view, in case of incorrect credentials the user is
displayed a message like "incorrect username or password" instead of an exact
message pointing at the field that is incorrect, as a message like "incorrect
username" will aid a hacker in brute forcing the fields one by one.
11.Verify the timeout of the login session.
12.Verify if the password can be copy-pasted or not.
13.Verify that once logged in, clicking back button doesn't logout user.
14.Verify if SQL Injection attacks works on login page.
15.Verify if XSS vulnerability work on login page.
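Scenarios 2-4, 9 and 10 above can be modeled with a small toy class that mimics the login logic. This is a sketch under assumed limits (3 failed attempts lock the account, 20-character field limit; both are hypothetical, not from the scenarios):

```python
class LoginPage:
    """Toy model of the login checks above (hypothetical limits:
    3 failed attempts lock the account, 20-char field limit)."""

    MAX_ATTEMPTS = 3
    MAX_LEN = 20

    def __init__(self, users):
        self.users = users   # {username: password}
        self.failed = {}     # username -> failed attempt count

    def login(self, username, password):
        if not username or not password:
            return "Username and password are required"
        if len(username) > self.MAX_LEN or len(password) > self.MAX_LEN:
            return "Field exceeds character limit"
        if self.failed.get(username, 0) >= self.MAX_ATTEMPTS:
            return "Account locked"
        if self.users.get(username) == password:
            self.failed[username] = 0
            return "Welcome"
        # Deliberately vague message (scenario 10): don't reveal which field is wrong
        self.failed[username] = self.failed.get(username, 0) + 1
        return "Incorrect username or password"

page = LoginPage({"alice": "s3cret"})
assert page.login("alice", "s3cret") == "Welcome"
assert page.login("", "x") == "Username and password are required"
for _ in range(3):
    page.login("alice", "wrong")
assert page.login("alice", "s3cret") == "Account locked"
```

Note how the lockout check (scenario 9) runs before the password comparison, so even the correct password is rejected once the attempt limit is reached.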

TEST SCENARIOS OF LIFT:


1. Verify the dimensions of the lift.
2. Verify the type of door of the lift is as per the specification.
3. Verify the type of metal used in the lift interior and exterior.
4. Verify the capacity of the lift in terms of total weight.
5. Verify the buttons in the lift to close and open the door and numbers as per
the number of floors.
6. Verify that lift moves to the particular floor as the button of the floor is
clicked.
7. Verify that lift stops when up/down buttons at particular floor are pressed.
8. Verify if there is any emergency button to contact officials in case of any
mischief.
9. Verify the performance of the lift - the time taken to go to a floor.
10.Verify that in case of power failure, lift doesn't free-fall and get halted in the
particular floor.
11.Verify lifts working in case button to open the door is pressed before reaching
the destination floor.
12.Verify that in case door is about to close and an object is placed between the
doors, whether the doors senses the object and again open or not.
13.Verify the time duration for which door remain open by default.
14.Verify if lift interior is having proper air ventilation.
15.Verify lighting in the lift.
16.Verify that at no point lifts door should open while in motion.
17.Verify that in case of power loss, there should be a backup mechanism to
safely get into a floor or a backup power supply.
18.Verify that in case multiple floor number buttons are clicked, the lift stops
at each floor.

19.Verify that in case capacity limit is reached users are prompted with warning
alert- audio/visual.
20.Verify that inside lift users are prompted with current floor and direction
information the lift is moving towards- audio/visual prompt.

TEST CASES OF TV REMOTE CONTROL:


1. Verify that all the buttons are present- 0 to 9, volume, channel up-down and
other audio-video functionality etc buttons.
2. Verify the functionality of power ON-OFF button.
3. Verify that the Remote Control should work for a particular TV set model
numbers only.
4. Verify that user can navigate to different single digit and multi digit channels.
5. Verify that user can increase or decrease the volume.
6. Verify that user can navigate up and down the channel using channel up and
down buttons.
7. Verify that functioning of audio-video and other auxiliary buttons.
8. Verify the maximum distance from the Television set up to which the remote
works smoothly.
9. Verify the button press event that triggers the functionality i.e. an event gets
triggered on button down press, button release etc.
10.Verify the arc/different directions the remote control works correctly.
11.Verify the battery requirement of the remote control.
12.Verify the material of the remote's body and its button.
13.Verify the weight of the remote control.
14.Verify the dimensions of remote control.
15.Verify the spacing between two buttons, the spacing between the two buttons
should be optimum distance apart so that user can press a button
comfortably.
16.Verify that there should be contrast between button's color and remote's
outer body color.
17.Verify the remote's functioning on pressing more than one button
simultaneously.
18.Verify that the font - style and size of the numbers and other information
should be readable.
19.Verify that on battery discharge, the remote should work normally on
inserting new batteries.
20.Verify the pressure required to press the button.
21.Verify the strength of the remote's outer body, if it works normally on
dropping from a certain height.
22.Verify that any operation performed on the remote control while the TV is
switched off should not make any difference to TV's functioning when
switched on.
23.Verify if the remote control is water proof or not, if its water proof, check if it
works normally after immersing it in water.

TEST SCENARIOS OF CAR:


Positive Test Cases of Car
1. Verify that car should get unlocked and start smoothly on unlocking with its
key.
2. Verify that car gets driven smoothly at normal speed on road and under
normal climatic condition.
3. Verify that clutch, break and accelerator functions are working correctly.
4. Verify the engine type of car - whether it is Petrol, Diesel or CNG engine.
5. Verify the car's performance on different types of roads- charcoal, cement
etc.
6. Verify car's performance and fuel consumption on plains, hills and slops.
7. Verify that the mileage of the car is as per the specification.
8. Verify that the dimensions of the car are as per the specification.
9. Check if the car is sports car or luxury car.
10.Check that the fuel capacity is as per the specification.
11.Check if the steering is power steering or not.
12.Check if gears are automatic or manual.
13.Verify if the reverse gear of the car works correctly.
14.Check if the height of the car's floor is at an optimum distance from road.
15.Verify the top speed of the car under normal conditions.
16.Verify the maximum acceleration of the car.
17.Verify the car's outer body material.
18.Check if the car's panes are made of tempered glass or not.
19.Check the number of seats in the car.
20.Check if the hand brakes are functional or not.
21.Verify that brakes work correctly and gets applied in a timely manner or not.
22.Verify the type and power of battery.
23.Check if the headlights are working fine and give proper lighting when
applied at night/dark.
24.Verify the shock absorber of the car.
25.Verify if the air bags are present or not and are functional if present.
26.Check if centre locking is present or not and is functional if present.
27.Check if the seat belts are present and are functioning correctly.
28.Verify car's interior- spacing, material, quality etc.
29.Verify if the speedometer, fuel meter and other indicators are working fine or
not.
30.Verify the car's performance and tire grip when driving on a rainy day.
31.Verify that car should get started and run smoothly on using it after several
days.
32.Check the automatic car lock functionality.
33.Verify that car's back light should get lightened on reversing the car.
34.Verify that left and right indicators should function correctly.
35.Check if anti-theft alarm is working correctly or not.
Negative Test Cases of Car
1. Verify the car's functioning on filling it with a non-prescribed fuel type.
2. Drive the car at high speed on first gear only.
3. Keep the air pressure different on all four tires and then drive the car.
4. Use the hand brake while driving the car.
5. Try to start the car with some other key.
6. Check the condition of the tires on filling them at a pressure higher than
prescribed.
7. Check the condition, speed and fuel consumption of the car on filling the
tires with pressure less than prescribed.
8. Check the car's speed, performance and fuel consumption on driving the car
on roads not conducive for driving.

TEST CASES OF GOOGLE SEARCH:


1. Verify that the response fetched for a particular keyword is correct and
related to the keyword, containing links to the particular webpage.
2. Verify that the response are sorted by relevancy in descending order i.e. most
relevant result for the keyword are displayed on top.
3. Verify that response for multi word keyword is correct.
4. Verify that response for keywords containing alphanumeric and special
characters is correct.
5. Verify that the link title, URL and description have the keyword highlighted in
the response.
6. Verify auto-suggestion in Google e.g. providing input as 'fac' should give
suggestions like 'face book', 'face book messenger', 'face book chat' etc.
7. Verify that response fetched on selecting the suggested keyword and on
providing the keyword directly should be same.
8. Verify that the suggestion provided by Google are sorted by most
popular/relevant suggestions.
9. Verify that user can make search corresponding to different categories - web,
images, videos, news, books etc and response should correspond to the
keyword in that category only.
10.Verify that misspelled keyword should get corrected and response
corresponding to the correct keyword should get displayed.
11.Verify that multi word misspelled keywords also get corrected.
12.Verify the performance of search- check if the time taken to fetch the
response is within the ballpark.
13.Verify that total number of results fetched for a keyword.
14.Verify that the search response should be localized that is response should be
more relevant to the country/area from which the search request is initiated.
15.Verify Google calculator service- make any arithmetic request, calculator
should get displayed with correct result.
16.Verify Google converter service- make request like- 10USD in INR and check if
the result is correct.
17.Verify search response for large but valid strings.
18.Verify that incorrect keywords - keywords not having related result should
lead to "did not match any documents" response.
19.Verify that user can make search using different languages.

20.Verify that for a keyword, some related search terms are also displayed to
aid the user's search.
21.Verify that for number of results more than the limit on a single page,
pagination should be present, clicking on which user can navigate to
subsequent page's result.
22.Verify Google's advanced search options like- searching within a website,
searching for files of specific extension.
23.Verify if the search is case-insensitive or not.
24.Verify the functionality of "I'm feeling Lucky" search- the top most search
result should get directly returned (but as of now Google doodle page link is
displayed).
Front End - UI Test Cases of Google Search
1. Verify that the Google logo is present and centre aligned.
2. Verify that the search textbox is centre aligned and editable.
3. Verify that the search request gets triggered by clicking the search button
or hitting Enter after typing the search term.
4. Verify that in the search result the webpage's title, URL and description
are present.
5. Verify that clicking a search result leads to the corresponding web page.
6. Verify that pagination is present in case the number of results is greater
than the maximum results allowed on a page.
7. Verify that the user can navigate to a page number directly or move to the
previous or next page using the links present.
8. Verify that different language links are present and get applied on clicking
them.
9. Verify that the total number of results for the keyword is displayed.
10.Verify that the time taken to fetch the result is displayed.
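The auto-suggestion behavior (scenario 6) and case-insensitivity (scenario 23) can be modeled as simple prefix matching over a popularity-ranked term list. A sketch, where the term-to-popularity map is a made-up stand-in for real search data:

```python
def suggest(prefix, terms, limit=5):
    """Return up to `limit` suggestions starting with `prefix`, ordered by
    popularity (descending). Matching is case-insensitive, as scenario 23
    expects search to be. `terms` is a hypothetical {term: popularity} map."""
    p = prefix.lower()
    hits = [t for t in terms if t.lower().startswith(p)]
    return sorted(hits, key=lambda t: -terms[t])[:limit]

# Providing 'fac' yields facebook-related suggestions, most popular first
terms = {"facebook": 100, "facebook messenger": 60,
         "facebook chat": 40, "factory": 10}
assert suggest("fac", terms, limit=3) == [
    "facebook", "facebook messenger", "facebook chat"]
assert suggest("FAC", terms, limit=1) == ["facebook"]   # case-insensitive
```

Real search engines use far more elaborate ranking, but the test idea is the same: assert that suggestions match the prefix and arrive sorted by relevance.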

TEST SCENARIOS OF COMPUTER MOUSE:


1. Check if the mouse is an optical mouse or not.
2. Verify that left click and right click buttons are working fine.
3. Check if double click is working fine.
4. Verify the time duration between two left clicks, in order to consider it as
double click.
5. Check if the scroll wheel is present at the top or not.
6. Verify the speed of mouse pointer.
7. Check the pressure required for clicking the mouse buttons.
8. Verify the acceleration of mouse pointer.
9. Verify that clicking the button and dragging the mouse operation is working
fine(drag and drop functionality).
10.Check the dimension of the mouse, if its suitable to grip and work.
11.Verify that mouse works in all the allowed surfaces.
12.Check if the mouse is wireless mouse or corded mouse.
13.In case of wireless mouse, check the range up to which the mouse remains
operational.
14.In case of wireless mouse, check the battery requirement of the mouse.

15.Check if there is an option to switch the mouse on or off.

TEST SCENARIOS OF KEYBOARD:


1. Check if all the keys- characters, numeric, function, special characters and
arrow keys are present.
2. Verify the ordering of the keys is as per the QWERTY standard.
3. Check the functioning of each type of key-characters, numeric, function,
special characters and arrow keys.
4. Verify the working of the keys that work in combination like- shift+{other
keys}.
5. Check if the dimension of the key is as per the specification.
6. Check the color of both keyboard body as well as the text written over the
buttons.
7. Check if the font type and size is as per the specification and legible.
8. Check if the pressure required to press a key is not too high.
9. Check the spacing between two keys, keys should not be congested and at
the same time not too widely placed.
10.Verify that in case of caps lock and other similar keys- an indicator lights
glows.
11.Check that the keys don't make too much noise when pressed.
12.Verify if the keyboard is wireless or wired keyboard.
13.In case the keyboard is wireless, verify the range of keyboard.
14.In case of a wired keyboard, check the length of the cord.
15.Verify if the keyboard contains multimedia functions as well.

Test scenario template:

TEST CASES:

A test case is a detailed description of what to test and how to test it. In
other words, a test case is a set of preconditions, input values, steps and
expected results.
Types of test cases:
There are 3 types of test cases:
1) Positive test case:
If we write test cases to check the business flow of the application, those
test cases are called positive test cases.
In these test cases we prepare valid data only, by following BVA and ECP
conditions.
While writing these test cases we need to have an end-user mindset.
2) Negative test case:
If we write a test case to identify defects in the application, then we are
writing a negative test case.
3) GUI test case:
If we prepare test cases to check the look and feel of the application, those
test cases are called GUI test cases.
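The valid and invalid data mentioned above is usually derived with Boundary Value Analysis (BVA) and Equivalence Class Partitioning (ECP). A sketch for a hypothetical age field that accepts 18-60 (the range is illustrative, not from the notes):

```python
def is_valid_age(age):
    """Hypothetical rule: the field accepts ages 18-60 inclusive."""
    return 18 <= age <= 60

# ECP: one representative value per equivalence class
assert is_valid_age(35)        # valid class
assert not is_valid_age(5)     # invalid class (below range)
assert not is_valid_age(70)    # invalid class (above range)

# BVA: values at and around each boundary
for age, expected in [(17, False), (18, True), (19, True),
                      (59, True), (60, True), (61, False)]:
    assert is_valid_age(age) == expected
```

ECP keeps the positive test cases small (one value per class), while BVA supplies the negative cases that most often expose off-by-one defects at the edges.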
Principles of test cases:
1) Should be clear and transparent
2) Don't assume
3) Don't repeat
4) Conduct peer reviews
5) Ensure 100% requirement coverage
6) Use black box testing (BBT) techniques
7) Write the test cases from the end user's perspective

Test case format:


Test case_&lt;project name&gt;_&lt;module name&gt;_&lt;sub module name (if required)&gt;_&lt;id&gt;

Test case template:

Test execution:
In this phase we concentrate on the test execution process, the test
environment setup and the deliverables that need to be checked.

Test environment setup:

Phase: Test environment setup

Entry criteria:
1) System design and architecture documents are available
2) Environment plan is available

Activities:
1) Understand the required architecture to set up the environment
2) Prepare the hardware and software requirement list
3) Finalize all environment and setup requirements
4) Set up the environment and test data
5) Perform smoke testing
6) Accept or reject the build depending on the smoke test result
7) Identify SRN, QRN, BRN documents

Exit criteria:
1) Environment setup is working as per the plan and guidelines
2) Smoke test completed successfully

Deliverables:
1) Environment deployment document
2) Smoke test results

Smoke Testing:
The main intention of conducting smoke testing is to check whether the released
build is eligible for detailed testing or not.
Smoke testing is used to check the below conditions:
1) Check whether the application is properly deployed or not
2) Check for database connectivity
3) Check whether guidelines are available to deploy the application into the
environment
4) Check whether the major functionalities are working properly or not
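The four checks above can be scripted so they run against every new build, with the build accepted only if all of them pass. A minimal sketch that takes the checks as callables; the lambda stubs here stand in for real probes (a deployment ping, a database connection attempt, and so on):

```python
def run_smoke_tests(checks):
    """Run named check functions; return (all_passed, failures).
    Accept the build only if every check passes."""
    failures = [name for name, check in checks.items() if not check()]
    return len(failures) == 0, failures

# Stub checks standing in for real probes (deployment ping, DB connect, ...)
checks = {
    "application deployed": lambda: True,
    "database connectivity": lambda: True,
    "deployment guidelines available": lambda: True,
    "major functionality working": lambda: False,  # simulate a broken build
}
accepted, failed = run_smoke_tests(checks)
assert not accepted and failed == ["major functionality working"]
```

Keeping the checks as named callables makes the accept/reject decision (activity 6 in the table above) a single function call in a CI pipeline.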

Test execution:

Phase: Test execution

Entry criteria:
1) Baselined RTM, test plan, test cases and test scripts are available
2) Test environment is ready
3) Test data setup is done
4) Unit and integration testing reports for the build to be tested are
available
5) Smoke testing is successful

Activities:
1) Execute test cases based on the plan
2) Arrange test cases in priority order
3) List out the required test cases to be executed
4) Identify the SRN document
5) Execute test cases in step-by-step order
6) Identify the actual value and document it in the test case template
7) Compare the actual value with the expected value and mark the status as
passed or failed
8) Report failed test cases to the developers as defects
9) Report the defects in a defect tracking template or a bug tracking tool
10) Conduct re-testing on the fixed defects
11) If re-testing passes, give the defect the status closed

Exit criteria:
1) All test cases must be executed
2) All defects must be fixed and closed

Deliverables:
1) Completed RTM with execution status
2) Test cases updated with results
3) Defect report

Software / Build / Assurance Release Note:

Before executing test cases, the testing department gets a software release
note from the development team. This release note contains the below
information:
1) The release note document provides information about the features
implemented in the specific release/version of the system.
2) In addition, it specifies how the current release varies from the previous
one with respect to functionality.
3) It also lists relevant information such as defects fixed since the last
release and known defects in the current release.
4) The environment used to conduct testing on the release, configuration, etc.
Content of s/w release note:
1) Introduction
   1.1 Changes to the release notes
   1.2 Scope
2) System requirements
   2.1 Operating systems supported
   2.2 Hardware requirements
3) New features
4) Dropped features
5) Fixed issues/defects
   5.1 Hot fixes included
   5.2 Maintenance fixes included
6) Known issues, limitations and regressions
   6.1 Known issue information
   6.2 Limitation information
   6.3 Regression information
7) Deployment instructions
   7.1 Item 1 deployment instructions
   7.2 Item 2 deployment instructions
   7.3 Item 3 deployment instructions
8) Caveats (warnings)
9) Related documents
   9.1 Updating
10) Approvals
How to use release notes:
1.1) How to use the template
1.2) Opening log files
1.3) Verify change requirements
1.4) Observe headers
1.5) Verify the updates in the left file
1.6) Observe notes
1.7) Observe warnings
1.8) Add comments
1.9) Insert list numbers

Sanity testing:

It is a detailed testing conducted on each and every functionality to check
whether the functionality justifies all customer requirements or not.
In this type of testing we test the application functionality in a detailed
manner by providing different inputs to the functionality.

Difference between sanity and smoke testing:

Smoke testing:
1) It is an overall testing conducted to check the stability of the build
2) It is conducted on every new release
3) It is performed by developers and testers
4) It is usually documented (deployment document, FRS, test plan document etc.)
5) It is a subset of regression testing
6) It exercises the entire system
7) It is like a quick test drive

Sanity testing:
1) It is a detailed testing to check the stability of the functionality in a
detailed manner
2) It is conducted on new features and after bug fixes
3) It is performed by testers
4) It is usually not documented; we conduct this testing based on test cases
5) It is a subset of acceptance testing
6) It exercises only particular functionality in the system
7) It is like a detailed test drive (mileage, engine checkup, brakes etc.)

Re testing:

Testing the functionality again and again (with multiple sets of test data) is
called re testing.
Re testing is conducted on bug-fixed areas to check whether the bug is properly
fixed or not.

Regression testing:

Regression testing is conducted on already working functionality to know
whether the changed areas are impacting the unchanged areas, i.e. whether the
already working functionality got affected or not. This testing is carried out
on new or modified functionality.

Re testing:
1) Re testing is done to make sure that the test cases which failed earlier
pass after modification
2) It is carried out based on defect fixes
3) Defect verification comes under this testing
4) In this testing we include failed test cases
5) Re testing is unplanned testing
6) Re testing cannot be automated
7) Re testing has higher priority than regression testing

Regression testing:
1) In this type of testing, when the software functionalities are enhanced or
modified, we check whether the change has any effect on already working
functionality
2) It is carried out based on defect fixes or enhancements
3) Defect verification is not a part of regression testing
4) In this we include passed test cases
5) Regression testing is planned testing; to conduct it we need to understand
all functionality requirements and the related modules, and it is carried
out with passed test cases
6) Automation testing tools are designed mainly for regression testing purposes
7) Regression testing is conducted based on the availability of resources;
re testing takes precedence

Adhoc testing:
It is also called random testing. In this type of testing we test the
application as per our own interest; the main intention of adhoc testing is to
find tricky defects in the application. This testing is conducted due to lack
of time or lack of resources, and it has different types:
1) Monkey testing
2) Buddy testing
3) Pair testing
4) Exploratory testing

1) Monkey testing:
In this type of testing we concentrate only on the main functionality; when
there is a lack of time we perform this testing.
2) Buddy testing:
In this type of testing developers and test engineers are grouped together as
buddies and test the application jointly. This testing is mostly recommended in
incremental models when there is less time.
3) Pair testing:
In this, junior test engineers are paired with seniors and test the application
together when there is a lack of knowledge of the application.
4) Exploratory testing:
This testing is conducted by domain-experienced people, that is, persons who
have relevant domain knowledge or knowledge of the entire application.
Explore meaning: having some basic knowledge, performing some operations on the
application and learning about the application while testing.
Note: In situations where there is a lack of documentation we conduct
exploratory testing.
End to End testing:
This testing is conducted before releasing the application to the customer.
In this type of testing a simulated environment (similar to the exact customer
environment) is created in the organization, and end-to-end test cases are
executed to check the entire functionality of the system and its sub-systems.
Difference between System and End to End Testing:

System testing:
1) It is conducted to check all functional requirements of the system
2) Both functional and non-functional testing are carried out
3) It is used to check whether the functionality is working according to the
customer requirements or not (MCR)
4) System testing is conducted after integration testing
5) Both manual and automation testing can be performed in system testing

End to End testing:
1) It is conducted to check the system and system components (sub-systems)
2) All interfaces and back-end systems are considered for this testing
3) It is used to check whether the system and system components are working
according to the customer expectations or not
4) This testing is conducted after system testing
5) It is conducted to check external interfaces of the system, which can be
hard to automate; hence manual testing is performed

Non functional testing types:

Usability testing:
In this type of testing we check the user friendliness of the application:
whether it is useful, accessible, usable, findable and describable.

User interface testing:

In this testing we check the look and feel of the application, such as:
1) Check for the alignment
2) Check for the consistency
3) Check for spelling mistakes
4) Check for the background color

COMPATIBILITY TESTING: Validating whether the application is compatible with
various hardware and software environments (operating system compatibility,
browser compatibility).
RECOVERY TESTING: Checking whether the system has a provision for backup and
restore options or not, and also how the system handles unpredictable
situations such as power failures and system crashes.
INSTALLATION TESTING or DOCUMENTATION TESTING or DEPLOYMENT TESTING: Validating
whether the application is successfully installable or not as per the
guidelines provided in the installation document.
UNINSTALLATION TESTING: Checking whether we are able to uninstall the product
successfully from the system or not.
GLOBALISATION TESTING: Validating whether the application has a provision for
changing language, currency, date/time format etc. if it is designed for global
users.
LOCALISATION TESTING: Validating the default language, currency, date/time
format etc. when an application is designed for a particular locality of users.
Performance Testing:Software performance testing is a means of quality assurance (QA). It involves testing software
applications to ensure they will perform well under their expected workload.
Features and Functionality supported by a software system is not the only concern. A software
application's performance like its response time, do matter. The goal of performance testing is
not to find bugs but to eliminate performance bottlenecks.
The focus of Performance testing is checking a software program's

Speed - Determines whether the application responds quickly.

Scalability - Determines maximum user load the software application can handle.

Stability - Determines if the application is stable under varying loads.


Types of performance testing:

Load testing - checks the application's ability to perform under anticipated user loads.
The objective is to identify performance bottlenecks before the software application goes
live.

Stress testing - involves testing an application under extreme workloads to see how it
handles high traffic or data processing .The objective is to identify breaking point of an
application.

Endurance testing - is done to make sure the software can handle the expected load over a
long period of time.

Spike testing - tests the software's reaction to sudden large spikes in the load generated by
users.

Volume testing - under volume testing a large amount of data is populated in
the database and the overall software system's behavior is monitored. The
objective is to check the software application's performance under varying
database volumes.

Scalability testing - The objective of scalability testing is to determine the software


application's effectiveness in "scaling up" to support an increase in user load. It helps plan
capacity addition to your software system.
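Speed and scalability checks ultimately come down to measuring response time as load increases. A toy sketch that times a local function at different "user" counts; a real load test would instead drive concurrent virtual users against the system under test, and the one-second threshold here is a made-up budget:

```python
import time

def measure(operation, requests):
    """Return average seconds per request for `requests` sequential calls."""
    start = time.perf_counter()
    for _ in range(requests):
        operation()
    return (time.perf_counter() - start) / requests

def sample_operation():
    sum(range(1000))  # stand-in for a request to the system under test

# Crude "load steps": assert response time stays within a (hypothetical) budget
for load in (10, 100):
    avg = measure(sample_operation, load)
    assert avg < 1.0, f"response too slow at load {load}"
```

Tools like JMeter or Locust automate exactly this pattern at scale: ramp up load in steps and assert that latency stays within an agreed budget at each step.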

Defect Reporting

Status: New: - Whenever the defect is found for the first time the test engineer will set the status as
New.
Open: -Whenever the developer accepts the raised defect then he will set the status as Open.
Fixed (for verification) or Rectified: - Whenever the developer rectifies the raised defect
then he will change the status to Fixed.
Re open and Closed: -Whenever the defects are rectified, next build is released to the testing dept
then the test engineers will check whether the defects are rectified properly or not. If the defect is
rectified properly then the test engineer will set the status as Closed. If the defect is not
rectified properly then the test engineer will set the status as Re open.
Hold: - Whenever the developer is confused to accept or reject the defect then he will set the
status of the defect as Hold.
Testers Error or Testers Mistake or Rejected: - Whenever the developer is confirmed it is not at
all a defect then he will set the status of the defect as Rejected.
As Per Design: - Whenever the test engineer is not aware of new requirements and if he raises
defects related to the new features then the developer will set the status As Per Design.

Note: This is a rare case not usually Occurs.


Severity: How serious the defect is defined in terms of severity, Severity is classified in to four
types:
1. Fatal ------- (Sev1) or S1 or 1
2. Major ------- (Sev2) or S2 or 2
3. Minor ------- (Sev3) or S3 or 3
4. Suggestion -- (Sev4) or S4 or 4
1. Fatal: - All Run time errors, Show Stopper Defects

In the above example there is no FUNCTIONALITY


2. Major: -

Non Conformance to requirements

In the above example FUNCTIONALITY IS NOT JUSTIFYING Customers Requirement


3. Minor: -Requirement is justified still there is a minor deviation.

4. Suggestion: - User interface / usability issues.

Priority: Priority defines the sequence in which the defects has to be rectified. It is classified in to
four types:
1. Critical (Pri1) or P1 or 1
2. High     (Pri2) or P2 or 2
3. Medium   (Pri3) or P3 or 3
4. Low      (Pri4) or P4 or 4

Usually the Fatal defects are given Critical priority, Major defects are given
High priority, Minor defects are given Medium priority and Suggestions are
given Low priority, but depending upon the situation the priority will change.
Case I: Low severity - High priority
Upon a customer visit to the company, all the look-and-feel defects are given
the highest priority.
Case II: High severity - Low priority
Whenever 80% of the application is released to the testing department and 20%
is missing, the test engineers will treat the missing features as Fatal
defects, but the development lead will give least priority to those defects as
the features are still under development.
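The status, severity and priority fields described above are what a bug tracker records for every defect. A small sketch of such a record, using the classification values from this section (the field names and validation are illustrative, not tied to any particular tool):

```python
from dataclasses import dataclass

SEVERITIES = ("Fatal", "Major", "Minor", "Suggestion")
PRIORITIES = ("Critical", "High", "Medium", "Low")
STATUSES = ("New", "Open", "Fixed", "Reopen", "Closed",
            "Hold", "Rejected", "As Per Design")

@dataclass
class Defect:
    title: str
    severity: str
    priority: str
    status: str = "New"   # a newly found defect always starts as New

    def __post_init__(self):
        # Reject values outside the classification used in this section
        assert self.severity in SEVERITIES
        assert self.priority in PRIORITIES
        assert self.status in STATUSES

# Case I above: low-severity look-and-feel defect given high priority
d = Defect("Logo misaligned on home page", severity="Minor", priority="High")
assert d.status == "New"
d.status = "Open"   # developer accepts the defect
```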

TEST CLOSURE
After completion of reasonable cycles of test execution, the test lead
concentrates on test closure to estimate the completeness and correctness of
test execution and bug resolution. In the review meeting, the test lead
considers some factors to review the testing team's responsibility.
1. Coverage analysis: requirement coverage or module coverage, and testing
technique coverage.

2. Defect density: the number of defects found per module, for example:

   Module name    No. of defects
   (module 1)     20%
   (module 2)     20%
   (module 3)     40% - needs regression
   (module 4)     20%

3. Analysis of deferred (postponed) defects: whether the deferred defects can
really be postponed or not.
After completion, the closure review by the testing team concentrates on
postmortem testing or final regression testing or pre-acceptance testing, if
required: select the high defect density module, estimate the effort, plan
regression, conduct regression testing and do test reporting.

4. User acceptance testing: After completion of testing and the reviews,
project management concentrates on user acceptance testing to collect feedback
from real customers or model customers. There are two ways to conduct UAT:
alpha-testing and beta-testing.
5. Sign off: After completion of user acceptance testing and modifications,
project management declares a release team and CCB. In both teams a few
developers and test engineers are involved along with the project manager. In
the sign-off stage the testing team submits all prepared testing documents to
the project manager:

Test strategy
Test plan
Test case titles / test scenarios
Test case document
Test log
Test defect reports

The combination of all the above documents is also known as the Final Test
Summary Report (FTSR).

Informal Review:
1) Conducted on an as-needed basis, i.e. informal agenda
2) The date and time of the agenda for the informal review will not be
addressed in the project plan
3) The developer chooses a review panel and provides and/or presents the
material to be reviewed
4) The material may be as informal as a computer listing or hand-written
documentation

Formal Review:
1) Conducted at the end of each life cycle phase, i.e. formal agenda
2) The agenda for the formal review must be addressed in the project plan
3) The acquirer of the software appoints the formal review panel or board, who
may make or affect a go/no-go decision to proceed to the next step of the
life cycle
4) The material must be well prepared
5) Examples of formal reviews include the Software Requirements Review, the
Software Preliminary Design Review, the Software Critical Design Review, and
the Software Test Readiness Review
