Software Quality:
Technical:
Meeting Customer Requirements
Meeting Customer Expectations (User friendly, Performance, Privacy)
Non-Technical:
Cost of Product
Time to Market
Software Project:
A software project solves a customer's software-related problems through a software engineering process carried out by software engineers.
Analysis
Design
Coding
Testing
Maintenance
Testing:
Verification & Validation of software is called testing.
Fish Model of Software Development:
[Figure: the fish model — development stages Information Gathering (BRS), Analysis (S/W RS = FRS + SRS), Design (HLDs, LLDs), Coding (Programs), System Testing and Maintenance run along the spine; verification activities (reviews, prototype testing) sit above, and validation activities (white box testing, black box testing, testing S/W changes) sit below.]
BRS (Business Requirements Specification) defines the requirements of the customer to be developed as software. This type of document is developed by business-analyst category people.
Reviews:
It is a static testing technique used to estimate the completeness and correctness of a document.
Design
High Level Design Document (HLD):
This document is also known as the external design. It defines the hierarchy of all possible functionalities as modules.
Low Level Design Document (LLD):
This document is also known as the internal design. It defines the structural logic of every sub module.
Coding:
White Box Testing:
It is a coding-level testing technique. During this test, test engineers verify the completeness and correctness of every program.
This testing is also known as glass box testing or clear box testing.
System Testing:
V-Model:
V stands for Verification & Validation. This model defines the mapping between development stages and testing stages.
Development                          Testing
Development Plan                     -- Assessment of development plan
Information gathering & Analysis     -- Prepare test plan
                                     -- Requirements phase testing
                                     -- Test documentation
                                     -- Port testing
Maintenance                          -- Test S/W changes
                                     -- Test efficiency (DRE)
Defect Removal Efficiency (DRE):
It is also known as defect deficiency.
DRE = A / (A + B)
where
A = number of defects found by the testing team during the testing process, and
B = number of defects found by the customer during maintenance.
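As a worked example of the formula above (the defect counts are hypothetical, chosen only to illustrate the arithmetic):

```python
def dre(found_by_testing: int, found_by_customer: int) -> float:
    """Defect Removal Efficiency: A / (A + B)."""
    return found_by_testing / (found_by_testing + found_by_customer)

# Hypothetical numbers: testing finds 90 defects, the customer reports 10 more
# during maintenance, so DRE = 90 / (90 + 10) = 0.9.
print(dre(90, 10))
```

A DRE close to 1 means the testing process caught almost all defects before release.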
[Figure: refined V-Model — reviews verify each document; the S/W RS maps to functional & system testing (BB testing), the HLDs map to integration testing, and coding maps to program-level testing.]
From the above refined form of the V-Model, small and medium scale organisations maintain a separate testing team for the functional & system testing stage only, to decrease the cost of testing.
In the design stage they conduct reviews for the completeness and correctness of the design documents. In these reviews they use the below factors:
Are they understandable?
Do they meet the right requirements?
Are they complete?
Are they followable?
Do they handle errors?
2. Operations Testing:
Run the build on customer-expected platforms (OS, browser, compiler etc.).
3. Mutation Testing:
It means a change in a program. White box testers perform this change in the
program to estimate test coverage on the program.
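A minimal illustration of the idea (the function and the deliberate change are invented for this sketch): a "mutant" is a copy of the program with a small seeded change; if the existing tests still pass against the mutant, the test coverage is insufficient.

```python
def add(a, b):           # original program
    return a + b

def add_mutant(a, b):    # mutant: operator deliberately changed from + to -
    return a - b

# A weak test using only (0, 0) cannot distinguish original from mutant:
print(add(0, 0) == add_mutant(0, 0))   # mutant survives

# A stronger test distinguishes them, i.e. "kills" the mutant:
print(add(2, 3) == add_mutant(2, 3))   # mutant killed
```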
[Figure: top-down — the main module calls Sub 1, Sub 2 and a stub in place of the under-construction sub module.]
From the above model, a stub is a temporary program used in place of an under-construction sub module. It is also known as the called program.
2. Bottom Up Approach:
Conducting testing on the sub modules without coming from the main module is called the Bottom Up approach.
From the above model, a driver is a temporary program used in place of the main module. This program is also known as the calling program.
[Figure: bottom-up — a driver in place of Main calls Sub 1 and Sub 2.]
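The stub and driver ideas above can be sketched in Python (the module and function names are invented purely for illustration): in top-down testing a stub stands in for an unfinished sub module that main calls; in bottom-up testing a driver stands in for the unfinished main module and calls the finished sub module directly.

```python
# Top-down: the real discount sub module is under construction,
# so main calls a stub (the temporary "called program").
def discount_stub(amount):
    return 0.0  # fixed value, just enough to let main_total() be tested

def main_total(amount, discount=discount_stub):
    return amount - discount(amount)

# Bottom-up: the real main module is under construction, so a driver
# (the temporary "calling program") invokes the finished sub module.
def discount(amount):
    return amount * 0.1

def driver():
    # Exercises the sub module with sample inputs in place of main().
    return [discount(a) for a in (100, 200)]

print(main_total(100))  # main tested against the stub
print(driver())         # sub module tested through the driver
```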
3. Sandwich Approach:
The combination of the Top Down and Bottom Up approaches is called the Sandwich approach.
[Figure: sandwich — Main with a driver, and Sub 1 with a stub above Sub 2 and Sub 3.]
BUILD:
A finally integrated set of all modules in executable (.EXE) form is called a build.
1. Usability Testing:
In general, the testing team starts test execution with usability testing. During this test, the testing team validates the user friendliness of the screens of the build. During usability testing, the testing team applies two types of sub tests.
a) User Interface Test (UI):
Ease of use (understandable screens)
Look & feel (attractive or pleasant)
Speed in interface (fewer events to complete a task)
b) Manuals Support Testing:
Context sensitiveness of user manuals.
[Flow: receive build from developers → UI testing → manuals support testing (together: usability testing) → remaining system tests.]
2) Functional Testing:
A major part of BB testing is functional testing. During this test, the testing team concentrates on meeting customer requirements. Functional testing is classified into the below tests.
Example 1:
A login process takes a user ID and password to validate users. The user ID allows alphanumerics in lower case, 4 to 16 characters long. The password allows alphabets in lower case, 4 to 8 characters long. Prepare BVA and ECP for user ID and password.

User ID
BVA:                          ECP:
min = 4        pass           Valid: a to z, 0 to 9
min - 1 = 3    fail           Invalid: A to Z, special characters, blank
min + 1 = 5    pass
max = 16       pass
max - 1 = 15   pass
max + 1 = 17   fail

Password
BVA:                          ECP:
min = 4        pass           Valid: a to z
min - 1 = 3    fail           Invalid: A to Z, 0 to 9, special characters, blank
min + 1 = 5    pass
max = 8        pass
max - 1 = 7    pass
max + 1 = 9    fail
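The rules in Example 1 can be expressed as a small validator to exercise the BVA boundaries and ECP classes (a sketch; these functions are not part of the original notes):

```python
import re

def valid_user_id(s: str) -> bool:
    # ECP: lower-case alphanumerics only; BVA: 4 to 16 characters.
    return bool(re.fullmatch(r"[a-z0-9]{4,16}", s))

def valid_password(s: str) -> bool:
    # ECP: lower-case alphabets only; BVA: 4 to 8 characters.
    return bool(re.fullmatch(r"[a-z]{4,8}", s))

# Boundary checks straight from the BVA tables:
print(valid_user_id("abcd"))     # min = 4       -> pass
print(valid_user_id("abc"))      # min - 1 = 3   -> fail
print(valid_user_id("a" * 17))   # max + 1 = 17  -> fail
print(valid_password("a" * 9))   # max + 1 = 9   -> fail
```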
Example 2:
Prepare BVA & ECP for the following text box. A text box allows 12-digit numbers along with * as mandatory, and sometimes it allows - also.

BVA:                          ECP:
min = max = 12   pass         Valid: 0 to 9 with *, 0 to 9 with * and -
11               fail         Invalid: A to Z, a to z, 0 to 9 alone,
13               fail                  special characters other than * and -, blank
c) Recovery Testing:
It is also known as reliability testing. During this test, test engineers validate whether the application changes back from an abnormal state to its normal state.
[Flow: normal state → abnormal state → back to normal using backup & recovery.]
d) Compatibility Testing:
It is also known as portability testing. During this test, the testing team validates whether our application build runs on the customer-expected platforms (OS, compiler, browser and other system software) or not.
e) Configuration Testing:
It is also known as hardware compatibility testing. During this test, the testing team validates whether our application build supports different technology hardware devices or not.
EX: different types of LANs, different topologies, different technology printers etc.
[Figure: existing applications (EBD, WBA, TBA, ITA) sharing a local DB through a server, with a new server and a new application added.]
g) Installation Testing:
During this test, the testing team validates installation of our application build along with supporting software onto customer-site-like configured systems. During this test, the testing team observes the below factors:
Setup program execution to start installation.
Ease of interface during installation.
Disk space occupied after installation.
h) Parallel Testing:
It is also known as comparative testing and is applicable to software products only. During this test, the testing team compares our application build with competitors' products in the market.
i) Sanitation Testing:
It is also known as garbage testing. During this test, the testing team tries to find extra features in our application build w.r.t. the customer requirements.
Defect:
During testing, the testing team reports defects to developers in terms of the below categories:
Mismatch between expected and actual.
Missing functionality.
Extra functionality w.r.t. CRS.
When defects are accepted by the development team to solve, they are called bugs. Sometimes defects are also known as issues. Defects arise in an application due to errors in coding.
3) Performance Testing:
It is an advanced testing technique and expensive to apply, because the testing team has to create a huge environment to conduct it. During this test, the testing team validates the speed of processing. During performance testing, the testing team conducts the below sub tests.
a) Load Testing:
The execution of our application under the customer-expected configuration and customer-expected load to estimate performance is called load testing.
b) Stress Testing:
The execution of our application under the customer-expected configuration and continuous (un-interval) loads to estimate performance is called stress testing.
c) Storage Testing:
The execution of the application under huge amounts of resources to estimate storage limitations is called storage testing.
[Figure: break-even analysis of storage vs. resources.]
EX: MS-Access allows a 2 GB database as maximum.
d) Data Volume Testing:
The execution of our application under customer expected configuration to estimate peak limits of
data is called data volume testing.
4) Security Testing:
It is also an advanced testing technique and complex to conduct. During security testing, the testing team validates privacy of user operations. During this test, the testing team applies the below sub tests:
a) Authorization (whether the user is authorised or not)
b) Access Control (whether a valid user has permission for a specific service or not)
c) Encryption/Decryption (data conversion between the client process and the server process)
Note: In small and medium scale organisations, test engineers cover authorization and access control during functional testing. The encryption and decryption process is covered by development people.
VI) User Acceptance Testing (UAT):
After completion of functional & system testing, the organization invites customer-site people to collect feedback. There are two methods to conduct UAT: the α-test and the β-test.

α-TEST                          β-TEST
Software applications           Software products
By real customers               By customer-site-like people
In the development site         In customer-site-like environments
Collect feedback                Collect feedback

After completion of port testing, the release team provides training sessions to customer-site people and comes back.
During software maintenance, customer-site people send change requests (CRs) to the organization.
2. Exploratory Testing:
3. Sanity Testing:
It is also known as Tester Acceptance Testing (TAT) or Build Verification Test (BVT). After receiving a build from the development team, the testing team estimates the stability of that build before starting testing.
4. Smoke Testing:
It is an extra shakeup in the sanity process. In this test, the tester tries to troubleshoot the build when it is not working, before starting testing.
5. Big Bang Testing:
A testing team conducts a single stage of testing after completion of the entire system development, instead of multiple stages.
6. Incremental Testing:
A multiple-stage testing process from unit level to system level is called incremental testing. It is also known as formal testing.
7. Manual Vs Automation:
A tester conducting any test on the application build without using any testing tool/software is called manual testing. A tester conducting a test on the application build with the help of a testing tool/software is called automation testing.
In the common testing process, test engineers use test automation w.r.t. test impact and criticality. Impact means test repetition, and criticality means the test is complex to apply manually. Due to these two reasons, testing people use test automation.
8. Re-Testing:
The re-execution of a test with multiple test data to validate a function is called re-testing.
Ex: To validate multiplication, test engineers use different combinations of inputs in terms of minimum, maximum, integer, float, +ve and -ve, etc.
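The multiplication example above can be sketched as one test run against a table of input combinations (the data values are illustrative):

```python
def multiply(a, b):
    return a * b

# One test, many data combinations: minimum, integer, float, negative.
test_data = [
    (0, 0, 0),        # minimum values
    (2, 3, 6),        # integers
    (1.5, 2, 3.0),    # float
    (-4, 5, -20),     # negative input
]

for a, b, expected in test_data:
    result = multiply(a, b)
    status = "pass" if result == expected else "fail"
    print(f"{a} * {b} = {result}: {status}")
```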
9. Regression Testing:
The re-execution of tests on a modified build to ensure that the bug fix works and that no side effects occur is called regression testing (the previously failed test and the previously passed related tests).
Note:
1) Re-testing is on the same build and regression testing is on a modified build, but both indicate re-execution.
2) From the definitions of re-testing and regression testing, test repetition is mandatory in a test engineer's job. Due to this reason, test engineers concentrate on test automation.
A mistake in code is called an error. Due to errors in coding, test engineers get mismatches in the application, called defects. If a defect is accepted by development to solve, it is called a bug.
WINRUNNER 7.0
TEST PROCESS:
Learning
Record Script
Edit Script
Run Script
Analyze Results
1. Learning:
2. Record Script:
The test engineer creates an automated test script by recording our business operations. WinRunner records the manual test operations in TSL (Test Script Language), which is similar to C.
3. Edit Script:
Test engineers insert the required checkpoints into the recorded script.
4. Run Script:
During test execution, test engineers run the script instead of manual testing.
5. Analyze Results:
During automation script execution on the application build, WinRunner returns results in terms of passed & failed. Depending on those results, test engineers concentrate on defect tracking.
Note: WinRunner runs only on Windows family operating systems. If we want to conduct functionality testing on an application build on Unix or Linux platforms, we can use XRunner.
CASE STUDY:
Login window (UID, PWD, OK):
Focus to login  →  OK disabled
Enter UID       →  OK disabled
Enter PWD       →  OK enabled
Automation Process:
set_window("Login", 5);
button_check_info("OK", "enabled", 0);
edit_set("UID", "xxxx");
button_check_info("OK", "enabled", 0);
password_edit_set("PWD", "encrypted pwd");
button_check_info("OK", "enabled", 1);
button_press("OK");
Test Script:
An automated manual test program is called a test script. This program consists of two types of statements: navigational statements to operate the project, and checkpoints to conduct testing.
Add-In Manager (Window):
1. Start Recording
2. Run from top
4. Pause
Recording Modes:
WinRunner records manual operations in two types of modes: Context Sensitive mode and Analog mode.
a) Context Sensitive Mode:
In this mode WinRunner records mouse and keyboard operations w.r.t. objects and windows in the application build. It is the default mode in WinRunner.
Note: TSL is a case sensitive language; it allows entire scripting in lower case but maintains flags in upper case.
b) Analog Mode:
To record mouse pointer movements w.r.t. desktop coordinates, we can use this mode in WinRunner.
Analog Recording:
In Analog mode WinRunner maintains the below TSL statements.
1. move_locator_track( ):
WinRunner uses this function to record mouse pointer movements on the desktop in one unit (one second) of time.
Syntax:
move_locator_track(track no);
By default the track number starts with 1.
2. mtype( ):
We can use this function to record mouse button operations.
Syntax:
mtype("<T track no><kLeft / kRight>+/-");
3. type( ):
We can use this function to record keyboard operations.
Syntax:
type("typed text / ASCII notation");
CHECK POINTS:
After completion of the required navigation recording, test engineers insert checkpoints into the script to cover the below sub tests:
1. Behavioral coverage
2. Input domain coverage
3. Error handling coverage
4. Calculation coverage
5. Backend coverage
6. Service level coverage
To automate the above sub tests, we can use four types of checkpoints in WinRunner.
Example:
Object: Update
Navigation:
Select position in script → Create menu → GUI check point → For single property → select testable object → select required property with expected value → click Paste.
Test Script
Example :
Sample window (Input, OK). Expected:
Focus to window → Input is focused; OK disabled
Fill Input → OK enabled
Create the script.
Script
Example 3:
Student window (Roll No, Name, OK). Expected:
Focus to window → Roll No focused; OK disabled
Select Roll No → Name focused; OK disabled
Enter Name → OK enabled
Script
set_window("Student", 5);
edit_check_info("Roll No", "focused", 1);
button_check_info("OK", "enabled", 0);
list_select_item("Roll No", "xxxx");
edit_check_info("Name", "focused", 1);
button_check_info("OK", "enabled", 0);
edit_set("Name", "xxxx");
button_check_info("OK", "enabled", 1);
button_press("OK");
Case Study:
Object Type           Testable Properties
Push Button           Enabled, Focused
Radio Button          Enabled, Status
Check Box             Enabled, Status
List / Combo Box      Enabled, Focused, Count, Value
Menu                  Enabled, Count
Table Grid            Rows Count, Columns Count, Table Content
Edit Box / Text Box   Enabled, Focused, Value, Range, Regular Expression, Date Format, Time Format
Example 4:
Journey window (Fly From, Fly To list boxes). Expected: the number of items in Fly To is equal to the number of items in Fly From minus 1, when you select one item in Fly From.
set_window("Journey", 5);
list_select_item("Fly From", "xxxx");
list_get_info("Fly From", "count", n);
list_check_info("Fly To", "count", n-1);
Example 5:
Sample 1 window (Item list box, OK) and Sample 2 window (Display button, Text box). Expected: the selected item in the list box is equal to the text box value when you click Display.
set_window("Sample 1", 5);
list_select_item("Item", "xxxx");
list_get_info("Item", "value", x);
button_press("OK");
set_window("Sample 2", 5);
button_press("Display");
edit_check_info("Text", "value", x);
Example 6:
Student window (Roll No, OK, Percentage, Grade). Expected: the grade depends on the percentage.
set_window("Student", 5);
list_select_item("Roll No", "xxx");
button_press("OK");
edit_get_info("Percentage", "value", P);
if (P >= 80)
    edit_check_info("Grade", "value", "A");
else if (P < 80 && P >= 70)
    edit_check_info("Grade", "value", "B");
else if (P < 70 && P >= 60)
    edit_check_info("Grade", "value", "C");
else
    edit_check_info("Grade", "value", "D");
Example 7:
Insurance window (Type, Age, Gender, Qualification). Expected:
If Type is A → Age is focused
If Type is B → Gender is focused
Any other type → Qualification is focused
set_window("Insurance", 5);
list_select_item("Type", "X");
list_get_info("Type", "value", x);
if (x == "A")
    edit_check_info("Age", "focused", 1);
else if (x == "B")
    list_check_info("Gender", "focused", 1);
else
    list_check_info("Qualification", "focused", 1);
To test more than one property of a single object, we can use this option.
Example 8:
Navigation:
Select position in script → Create menu → GUI check point → For Object or Window → select testable object (double click) → select required properties with expected values → click OK.
Syntax:
obj_check_gui("object name", "checklist file.ckl", "expected values file", time to create);
In the above syntax, the checklist file specifies the list of properties to be tested, and the expected values file specifies the expected values for those properties. These two files are created by WinRunner during checkpoint creation.
To verify more than one property of more than one object, we use this checkpoint in WinRunner.
Example 9:
Objects            Insert Order   Update Order       Delete Order
Focus to window    Disabled       Disabled           Disabled
Open a record      Disabled       Disabled           Enabled
Perform change     Disabled       Enabled, Focused   Enabled
Navigation:
Select position in script → Create menu → GUI check point → For multiple objects → click Add → select testable objects → right click to quit → select required properties with expected values for every object → click OK.
Syntax:
win_check_gui("window name", "checklist file.ckl", "expected values file", time to create);
Example 10:
Sample window with an Age edit box. Expected: range 16 to 80 years.
Navigation:
Create menu → GUI check point → For object or window → select Age object → select range property → enter from & to values → click OK.
set_window("Sample", 5);
obj_check_gui("Age", "list1.ckl", "gui1", 1);
Example 11:
Sample window with a Name edit box.
Navigation:
Create menu → GUI check point → For object/window → select Name object → select regular expression property → enter the expected expression ([a-z]*) → click OK.
set_window("Sample", 1);
obj_check_gui("Name", "list1.ckl", "gui1", 1);
Example 12:
Example 13:
[a-zA-Z] [a-zA-Z0-9]*
Example 14:
Example 16:
The Name object allows alphabets in lower case, and the value starts with R and ends with O.
[R][a-z]*[O]
Example 17:
Prepare a regular expression for the following text box. A text box allows 12-digit numbers along with * as mandatory, and sometimes it allows - also.
[[0-9][*]]*
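The expressions above can be exercised with Python's re module (a hedged sketch for experimentation; note that WinRunner's regular-expression dialect may differ from Python's in details):

```python
import re

# Example 13's pattern: an identifier starting with a letter,
# followed by letters or digits.
ident = re.compile(r"[a-zA-Z][a-zA-Z0-9]*$")
print(bool(ident.match("a1b2")))   # starts with a letter -> matches
print(bool(ident.match("1ab")))    # starts with a digit  -> no match

# Example 16's pattern: lower-case word starting with R and ending with O.
name = re.compile(r"R[a-z]*O$")
print(bool(name.match("RomeO")))   # matches
```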
Due to a test engineer's mistake or a requirement change, test engineers perform changes in expected values through the below navigation.
Navigation:
Run script → open result → change expected value → re-execute the test to get correct results.
Sometimes test engineers add extra properties to existing checkpoints due to a tester's mistake or requirement enhancements.
Navigation:
Create menu → Edit GUI checklist → select checklist file name → click OK → select new properties to test → click OK → click OK to overwrite → click OK after reading the suggestion → change run mode to Update → click Run → run in Verify mode to get results → open the result → analyze the result and perform changes if required.
To compare our expected image with the actual image in our application build, we can use this option.
Example 1:
[Figure: expected image vs. actual image — if they are equal (==) the test passes; if not (!=), it fails.]
Example 2:
[Figure: two graphs — expected "No of items = 10000" vs. actual "No of items = 10005"; the images differ, so == fails and != passes.]
Navigation:
Create menu → Bitmap checkpoint → For object or window → select expected image (double click).
Syntax:
obj_check_bitmap("image object name", "image file name.bmp", time to create);
Navigation:
Create menu → Bitmap checkpoint → For screen area → select required image region → right click to release.
Syntax:
obj_check_bitmap("image object name", "image file name.bmp", time to create, x, y, width, height);
Note:
1) TSL functions support a variable number of parameters in calls, like the C language (no function overloading). ARITY = the number of arguments in a function.
2) In functionality test automation the GUI checkpoint is mandatory, but the bitmap checkpoint is optional, because not all applications allow images as contents.
Back end testing is a part of functionality testing. It is also known as database testing. During this testing, test engineers validate the impact of front-end operations on back end table contents in terms of data validation and data integrity. Data validation means whether the front-end values are correctly stored into the back end tables or not. Data integrity means whether the impact of front-end operations (updating/deletion) on back end table contents is working or not.
To automate the above backend testing using WinRunner, test engineers follow the database checkpoint concept in the Create menu. In this backend test automation, test engineers collect this information from the development team.
[Figure: the Database Checkpoint Wizard connects to the back end through a DSN, runs a select query, and stores the results in an excel sheet.]
a) Default Check:
Test engineers conduct back end testing depending upon database table contents using this checkpoint.
In the above syntax, the checklist file specifies that content is the property to test, and the query result file specifies the results of the query in terms of content.
b) Custom Check:
Test engineers conduct backend testing depending on the rows count, columns count and content of database tables. But test engineers are not using this option, because the default check content also shows the number of rows and the column names.
[Figure: expected mapping between back end column values (a, b) and front-end object values, fetched through a DSN.]
To automate the above mapping testing, test engineers use the Run Time Record Checkpoint in WinRunner.
Navigation:
Create menu → Database checkpoint → Runtime record check → click Next → click Create to select DSN → write a select statement with the doubtful columns (ex: select orders.order_number, orders.customer_name from orders) → click Next → select the doubtful front-end objects for those columns → click Next → select any one of three options (exactly one matching record, one or more matching records, no matching records) → click Finish.
Syntax:
db_record_check("checklist file name.crr", DVR_ONE_MATCH / DVR_ONE_OR_MORE_MATCH / DVR_NO_MATCH, variable);
In the above syntax, the checklist file specifies the expected mapping between back end columns and front-end objects, the flag specifies the type of matching, and the variable receives the number of records matched.
To conduct calculations and other text-based tests, we can use the get_text option in WinRunner. This option consists of two sub options.
Navigation:
Create menu → Get Text → From Object/Window → select object (double click).
Syntax:
obj_get_text("name of the object", variable);
Example:
Sample window (Input, Output). Expected: Output = Input * 100.
set_window("Sample", 5);
obj_get_text("Input", x);
obj_get_text("Output", y);
if (y == x * 100)
    printf("test is pass");
else
    printf("test is fail");
To capture static text from a screen area we can use this option.
Navigation:
Create menu → Get Text → From screen area → select required region → right click to release.
Syntax:
obj_get_text("object name", variable, x1, y1, x2, y2);
Example 1:
Getting text from an object/window and using substrings to cut some part of the string.
if (tot == t * p)
    printf("test is pass");
else
    printf("test is fail");
Example 2:
Shopping window (QTY xx, Price Rs:xxx/-, Total Rs:xxx/-). Expected: Total = Price * QTY.
set_window("Shopping");
obj_get_text("QTY", q);
obj_get_text("Price", p);
p = substr(p, 4, length(p) - 5);
obj_get_text("Total", tot);
tot = substr(tot, 4, length(tot) - 5);
if (tot == q * p)
    printf("test is pass");
else
    printf("test is fail");
tl_step( ):
To create our own pass/fail result in the result window, we can use this statement.
Syntax:
tl_step("step name", 0 / 1, "description");
DDT is nothing but a re-test: executing one test more than once on the same application build with multiple test data.
From the above model, test engineers submit test data through the keyboard. To read a value from the keyboard during test execution, we can use the below TSL statement.
Syntax:
create_input_dialog("message");
Example 1:
for (i = 1; i <= 5; i++)
{
    x = create_input_dialog("Enter Order No");
    set_window ("Flight Reservation", 3);
    menu_select_item ("File;Open Order...");
    set_window ("Open Order", 1);
    button_set ("Order No.", ON);
    edit_set ("Edit_1", x);
    button_press ("OK");
}
Example 2:
Multiply window (Input 1, Input 2, OK, Result). Expected: Result = Input 1 * Input 2. Test data on paper: 10 pairs of inputs.
if (temp == X * Y)
    tl_step("step", 0, "Pass");
else
    tl_step("step", 1, "Fail");
}
Example 3:
Shopping window (Item No). Expected:
Login window (User ID, Pwd). Expected:
If Next is enabled → the user is authorised.
If Next is disabled → the user is unauthorised.
Sometimes test engineers conduct re-testing depending on multiple test data from a flat file.
[Figure: test data from a .txt file is fed into the test screen of the build.]
To prepare the above model of automated test scripts, test engineers use a few file functions in WinRunner.
1. file_open( ):
We can use this function to open a file into RAM with the required permissions.
Syntax:
file_open("file path", FO_MODE_READ / FO_MODE_WRITE / FO_MODE_APPEND);
2. file_getline( ):
We can use this function to read a line from a file opened in read mode.
Syntax:
file_getline("path of file", variable);
3. file_close( ):
We can use this function to swap out an opened file from RAM.
Syntax:
file_close("path of file");
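The three file functions above follow the usual open / read-line / close pattern. A Python equivalent of reading test data line by line from a flat file might look like this (the file name and data are invented for the sketch):

```python
# Write a small test-data file first, so the example is self-contained.
with open("data.txt", "w") as f:
    f.write("10 20\n30 40\n")

# Read it back line by line, mirroring file_open / file_getline.
with open("data.txt") as f:          # like file_open in read mode
    for line in f:                    # like file_getline
        a, b = map(int, line.split())
        print(a, b)
# Leaving the with-block closes the file, like file_close.
```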
Example1:
Example 2:
Multiply window (Input 1, Input 2, OK, Result). Expected: Result = Input 1 * Input 2. Test data in file: c:\\My Documents\\data.txt (pairs of numbers, one pair per line).
    tl_step("step", 0, "Pass");
else
    tl_step("step", 1, "Fail");
}
file_close(f);
Example 3:
Shopping window (Item No, QTY, OK, Price, Total). Expected: Total = Price * QTY. Test data in file: c:\\My Documents\\data.txt (ex: Ram purchases item 101 as 10 pieces).
Login window (User ID, Pwd, Next). Expected:
If Next is enabled → the user is authorised.
If Next is disabled → the user is unauthorised.
Test data in file: c:\\My Documents\\data.txt (ex: xxxx@xxx xx).
4. file_printf( ):
We can use this function to print specified text into a file, if the file is opened in write/append mode.
Syntax:
file_printf("path of file", "format", values);
5. file_compare( ):
Syntax:
file_compare("path of file1", "path of file2", "path of file3");
In the above syntax the third argument is optional; it specifies the concatenated content of both compared files.
Sometimes test engineers conduct re-testing depending on multiple data objects such as a list, menu, ActiveX control, table, or data window.
[Figure: test data from a data object on the test screen of the build.]
Example 1:
Journey window (Fly From, Fly To). Expected: the selected Fly From item is not available in Fly To.
set_window("Journey", 5);
list_get_info("Fly From", "count", n);
for (i = 0; i < n; i++)
{
    list_get_item("Fly From", i, x);
    list_select_item("Fly From", x);
    if (list_select_item("Fly To", x) != E_OK)
        tl_step("step", 0, "Does not appear");
    else
        tl_step("step", 1, "Appears and test is fail");
}
In WinRunner, every TSL statement returns E_OK when the statement is successfully executed on our build.
Example 2:
Sample 1 window (Name list box, OK) and Sample 2 window (Display, Text box). Expected: the selected item in the list box appears in the text box in the below model:
My Name is XXXXX.
set_window("Sample 1", 5);
list_get_info("Name", "count", n);
Example 3:
Employee window (EMP No, OK, bsal, gsal). Expected:
If bsal >= 15000 then gsal = bsal + 10% of bsal
If bsal < 15000 and >= 8000 then gsal = bsal + 5% of bsal
If bsal < 8000 then gsal = bsal + 200
set_window("Employee", 5);
list_get_info("EMP No", "count", n);
Example 4:
Insurance window (Type, Age, Gender, Qualification). Expected:
If Type is A → Age is focused
If Type is B → Gender is focused
Any other type → Qualification is focused
set_window("Insurance", 5);
list_get_info("Type", "count", n);
for (i = 0; i < n; i++)
{
    list_get_item("Type", i, x);
    list_select_item("Type", x);
    if (x == "A")
        edit_check_info("Age", "focused", 1);
    else if (x == "B")
        list_check_info("Gender", "focused", 1);
    else
        list_check_info("Qualification", "focused", 1);
}
Example 5:
AUDIT window with a file_store table grid. Expected: Total = sum of the Size column.
S.No   File Path   Type   Size
1      X           X      10kb
2      X           X      20kb
3      X           X      30kb
4      X           X      40kb
5      x           x      50kb
Total: xxx kb
sum = 0;
set_window("AUDIT", 5);
tbl_get_rows_count("file_store", n);
if (tot == sum)
    tl_step("step1", 0, "calculation is pass");
else
    tl_step("step1", 1, "calculation is fail");
6. list_get_item( ):
We can use this function to capture a specified list item through its item number. Here the item number starts with 0.
Syntax:
list_get_item("list box name", item no, variable);
7. tbl_get_rows_count( ):
We can use this function to capture the number of rows in a table grid.
Syntax:
tbl_get_rows_count("table grid name", variable);
8. tbl_get_cell_data( ):
We can use this function to capture a specified cell value into a variable through its row no & column no.
Syntax:
tbl_get_cell_data("table grid name", "#row no", "#column no", variable);
In general, test engineers conduct data driven testing using excel sheet data. This is the default method in data driven testing. To create this type of automated script, WinRunner provides a special navigation.
Navigation:
Create a test for one script → Tools menu → Data Driven Wizard → click Next → browse the path of the excel sheet (c:\PF\MI\WR\Temp\testname\default.xls) → specify a variable name to assign the path (by default, table) → select import data from database → click Next → select the type of database connection (ODBC or Data Junction) → select specify SQL statement (c:\PF\MI\WR\Temp\testname\msqrl.sql) → click Next → click Create to select the data source name → write the SQL statement (select order_number from order) → click Next → insert the excel sheet column names in the required places of the test script → select show data table now → click Finish → click Run → analyse the results manually.
Example1:
table = "default.xls";
rc = ddt_open(table, DDT_MODE_READWRITE);
if (rc!= E_OK && rc != E_FILE_OPEN)
pause("Cannot open table.");
ddt_update_from_db(table, "msqr1.sql", count);
ddt_save(table);
ddt_get_row_count(table,n);
for(i = 1; i <= n; i++)
{
ddt_set_row(table,i);
set_window ("Flight Reservation", 6);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", ddt_val(table,"order_number"));
button_press ("OK");
}
ddt_close(table);
1. ddt_open( ):
We can use this function to open a test data excel sheet into RAM with specified
permissions.
Syntax:
ddt_open(path of excel file, DDT_MODE_READ / READWRITE);
2. ddt_update_from_db( ):
We can use this function to extend excel sheet data w.r.t changes in database.
Syntax:
ddt_update_from_db(path of excel file, path of query file, variable);
3. ddt_save( ):
We can use this function to save changes into the excel sheet.
Syntax:
ddt_save(path of excel sheet);
4. ddt_get_row_count( ):
We can use this function to capture the number of rows in the excel sheet.
Syntax:
ddt_get_row_count(path of excel sheet, variable);
5. ddt_set_row( ):
We can use this function to point to a specified row in the excel sheet.
Syntax:
ddt_set_row(path of excel sheet, row no);
6. ddt_val( ):
We can use this function to capture specified column value from a pointed row.
Syntax:
ddt_val(path of excel sheet, column name);
7. ddt_set_val( ):
We can use this function to write a value into excel sheet column.
Syntax:
ddt_set_val(path of excel sheet, column name, value / variable);
8. ddt_close( ):
We can use this function to swap out an open excel sheet from RAM.
Syntax:
ddt_close(path of excel sheet);
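The ddt_* workflow above (open the sheet, walk its rows, read and write columns, save, close) can be mirrored in Python with the standard csv module as a rough analogy (the file and column names here are invented; a real excel sheet would need a spreadsheet library):

```python
import csv

# Create a small "data table" with two input columns, like default.xls.
with open("default.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["Input1", "Input2", "result"])
    w.writeheader()
    w.writerows([{"Input1": 2, "Input2": 3}, {"Input1": 4, "Input2": 5}])

# Open the table and count its rows (ddt_open + ddt_get_row_count).
with open("default.csv") as f:
    rows = list(csv.DictReader(f))

# Point to each row, read inputs, write the result back
# (ddt_set_row + ddt_val + ddt_set_val).
for row in rows:
    row["result"] = int(row["Input1"]) + int(row["Input2"])

# Save the updated table (ddt_save); closing the file is ddt_close.
with open("default.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["Input1", "Input2", "result"])
    w.writeheader()
    w.writerows(rows)

print([r["result"] for r in rows])  # [5, 9]
```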
Example 2:
table = "default.xls";
rc = ddt_open(table, DDT_MODE_READWRITE);
if (rc!= E_OK && rc != E_FILE_OPEN)
pause("Cannot open table.");
ddt_get_row_count(table,n);
for(i = 1; i <= n; i ++)
{
ddt_set_row(table,i);
a=ddt_val(table,"Input1");
b=ddt_val(table,"Input2");
c = a + b;
ddt_set_val(table,"result",c);
ddt_save(table);
}
ddt_close(table);
Example 3:
table = "default.xls";
rc = ddt_open(table, DDT_MODE_READWRITE);
if (rc!= E_OK && rc != E_FILE_OPEN)
pause("Cannot open table.");
ddt_get_row_count(table,n);
for(i = 1; i <= n; i++)
{
ddt_set_row(table,i);
x=ddt_val(table,"input");
fact=1;
for(j = x; j >= 1; j--)
    fact = fact * j;
ddt_set_val(table,"result",fact);
ddt_save(table);
}
ddt_close(table);
Example4:
Prepare a test script to print list box values into a flat file one by one.
f = "c:\\My Documents\\sm.txt";
file_open(f,FO_MODE_WRITE);
set_window ("Flight Reservation",10);
list_get_info("Fly From:", "count",n);
for(i=0; i<n; i++)
{
list_get_item("Fly From:",i,x);
file_printf(f,"%s\r\n",x);
}
file_close(f);
Example 5:
Prepare a test script to print list box values into an excel sheet one by one.
f = "c:\\My Documents\\sm.xls";
file_open(f, FO_MODE_WRITE);
set_window ("Flight Reservation", 10);
list_get_info("Fly From:", "count", n);
list_get_info("Fly From:", "count", n);
for(i=0; i<n; i++)
{
list_get_item ("Fly From:",i,x);
file_printf(f,"%s\n",x);
}
file_close(f);
Synchronization Point:
To maintain time mapping between the testing tool and the application build during test execution, we can
use the following concepts.
1. wait ( ):
WinRunner waits until specified object property is equal to our expected value.
Navigation:
Select position in script create menu synchronization point for object / window property
select indicator object (Ex: Status or progress bar) select required property with expected
(100% enabled, <100% disabled) specify maximum time to wait click paste.
Syntax:
obj_wait_info(object Name, property, Expected value, maximum time to wait);
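The idea behind obj_wait_info can be sketched as a generic polling loop (a conceptual Python illustration, not WinRunner's implementation): keep reading the property until it equals the expected value or the maximum time passes.

```python
import time

def wait_for_property(read_property, expected, max_wait, poll_interval=0.05):
    """Poll read_property() until it returns expected, or max_wait seconds pass.
    Returns True if the expected value appeared in time, else False."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        if read_property() == expected:
            return True
        time.sleep(poll_interval)
    return read_property() == expected    # one last check at the deadline
```

For example, `wait_for_property(lambda: progress_bar_value(), 100, 10)` would wait up to 10 seconds for a (hypothetical) progress bar to reach 100%.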
Sometimes test engineers define the time mapping between tool and application depending on
images also.
Navigation:
Select position in script create menu synchronization point for object/window Bitmap
select indicator image (D click).
Syntax:
obj_wait_bitmap(Image object name, Image file name.bmp, maximum time to wait);
Sometimes test engineers define the time mapping between testing tool and application depending
on parts of images also.
Navigation:
Select position in script create menu synchronization point for screen area Bitmap
select required image region right click to release.
Syntax:
obj_wait_bitmap(Image object name, Image file name.bmp, maximum time to wait, x, y,
width, height);
During test script execution, recording-time values are not useful. During running, WinRunner
depends on two runtime parameters, which test engineers change if required:
Delay for window synchronization - 1000 msec (default)
Timeout for executing context sensitive statements and checkpoints - 10000 msec (default)
Navigation:
Settings menu general options change delay and time out depends on requirements click
apply click ok.
BATCH TESTING
The sequential execution of more than one test to validate functionality is called batch testing. To
increase the chance of finding bugs during test execution, batch testing is a suitable criterion. A test
batch is also known as a test suite or test set. Every test batch consists of a set of multiple dependent
tests; in every test batch, the end state of one test is the base state of the next test.
To create this type of batches in WinRunner, we can use below statements.
a) call testname( );
b) call path of test( );
We can use the first syntax when the calling & called tests are both in the same folder.
We can use the second syntax when the calling & called tests are in different folders.
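The batch idea — each test leaving the application in exactly the base state the next test expects — can be sketched as a small runner (a conceptual Python illustration; the test names are invented, not WinRunner API):

```python
def run_batch(tests, start_state):
    """Run dependent tests in sequence; each test receives the previous
    test's end state and returns its own end state."""
    state = start_state
    results = []
    for test in tests:
        state = test(state)            # end state of one test = base state of next
        results.append((test.__name__, state))
    return results

# Hypothetical dependent tests mirroring Example 2 (registration -> login -> open -> reply)
def register(state):  return state + ["registered"]
def login(state):     return state + ["logged_in"]
def open_mail(state): return state + ["mail_open"]
def reply(state):     return state + ["replied"]
```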
Example 1:
Test case1 Successful order open
Test case2 Successful update.
Example 2:
Test case1 Successful new user registration.
Test case2 Successful login
Test case3 Successful mail open.
Test case4 Successful mail reply
Example 3:
Test case1 Successful order open
Test case2 Successful calculation.
call subtest( );
(Diagram: the main test passes value xx via call subtest(xx); the sub test receives it in parameter x
and uses it, e.g. edit_set("Edit", x);)
From the above model, the sub test maintains parameters to receive values from the main test. To create
this type of parameter, we can follow the navigation below.
Navigation:
Open sub test file menu test properties click parameter tab click add to create new
properties enter parameter name with description click ok click add to create more
parameters click ok use that parameter in required place of test script.
(Diagram: the main test reads input values from default.xls and passes each value to the sub test via
call subtest(xx); the sub test uses it in edit_set("Edit", x);)
Main Test:
table = "default.xls";
rc = ddt_open(table, DDT_MODE_READ);
if (rc!= E_OK && rc != E_FILE_OPEN)
pause("Cannot open table.");
ddt_get_row_count(table,n);
for(i = 1; i <= n; i ++)
{
ddt_set_row(table,i);
temp=ddt_val(table,"input");
call subsri(temp);
set_window("Flight Reservation",1);
obj_get_text("Tickets:",t);
obj_get_text("Price:",p);
p=substr(p,2,length(p)-1);
obj_get_text("Total:",tot);
tot=substr(tot,2,length(tot)-1);
if(tot==p*t)
tl_step("s1",0,"test is pass");
else
tl_step("s1",1,"test is fail");
}
ddt_close(table);
Sub Test:
set_window ("Flight Reservation", 2);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", x);
button_press ("OK");
set_window("Flight Reservation",1);
obj_get_text("Name:",t);
if(t=="")
pause("cannot open record");
treturn( ):
We can use this function to return a value from sub test to main test.
Syntax:
treturn( Value / Variable);
Silent Mode:
WinRunner allows you to continue test execution even when a checkpoint fails. To define this
type of situation we can follow the navigation below.
Navigation:
Settings menu general options run tab select run in batch mode click apply click ok.
Note: When WinRunner is in silent mode, tester-interactive statements do not work.
EX: create_input_dialog(xxxxx);
Public Variables:
Variables declared as public are available to all tests and compiled modules.
Syntax:
public variable;
FUNCTION GENERATOR:
It is a library of TSL functions. In this library, TSL functions are classified category-wise. To
search for a required TSL function, follow the navigation below.
Create menu insert function from function generator select required category select
required function based on description fill arguments click paste.
Example 1:
Clipboard Testing
A tester conducts a test on a selected part of an object.
set_window("Login", 5);
edit_get_selection("Agent Name", v);
printf(v);
Syntax:
edit_get_selection(Name of edit box, variable);
Example 2:
Syntax:
win_exists(window name, time);
Case Study:
(If window "sample" exists, run tests 2, 3 and 4; otherwise run only tests 3 and 4.)
call test1( );
if(win_exists("sample", 0) == E_OK)
{
call test2( );
call test3( );
call test4( );
}
else
{
call test3( );
call test4( );
}
Example 3:
Open Project:
Syntax:
invoke_application(path of .exe, command, working directory, SW_SHOW /
SW_SHOWMINIMIZED / SW_SHOWMAXIMIZED);
Example 4:
Syntax:
getvar("timeout_msec");
X = getvar("timeout_msec");
printf(X);
Example 6:
Search for a TSL function to change the timeout without using the settings menu.
Syntax:
setvar("timeout_msec", time in milliseconds);
WinRunner allows you to execute prepared queries. A prepared query contains a variable in its
structure; such a query is also known as a dynamic query.
db_connect( ):
We can use this function to connect to a database (syntax inferred from the example below).
Syntax:
db_connect(session name, connection string);
In the above syntax, session name indicates the resources allocated to the user when connected to the database.
db_execute_query( ):
We can use this function to execute specified select statement on that connected database.
Syntax:
db_execute_query(session name, select statement, variable);
In the above syntax, the variable receives the number of rows selected after execution of that statement.
db_write_records( ):
We can use this function to copy a query result into a specified file.
Syntax:
db_write_records(session name, output file, TRUE / FALSE, max rows / NO_LIMIT);
In the above syntax, TRUE indicates query result with header and FALSE indicates query result
without header.
Example:
x=create_input_dialog("enter limit");
db_connect("query1","DSN=Flight32");
db_execute_query("query1","select * from orders where
order_number<="&x,num);
db_write_records("query1","default.xls",FALSE,NO_LIMIT);
Syntax:
public / static function function_name(in / out / inout argument, ...)
{
...
return (value);
}
Note: We can use a static function to maintain the output of one execution as input to the next
execution (e.g., a static variable initialized as i = 100 retains its value across calls).
Example 1:
public function add(in x, in y, out z)    # assumed definition; the notes show only the calling test
{
z = x + y;
}
calling test:
a = 10;
b = 20;
add(a, b, c);
printf( c );
Example2:
public function add(in x, in y)
{
auto z;
z = x + y;
return(z);
}
calling test:
a= 10;
b = 20;
c = add(a, b);
printf( c );
Example 3:
public function add(in x, inout y)    # assumed definition; illustrates an inout argument
{
y = x + y;
}
calling test:
a = 10;
b = 20;
add(a, b);
printf( b );
Example 4:
To call user defined functions in required test scripts, we can make the user defined functions
available as permanent .EXE copies (compiled modules). To do this task, test engineers follow the navigation below.
Open WinRunner click new record repeatable navigations as UDFs save the module in
dat folder file menu test properties general tab change test type Compiled module
click apply click OK execute once(permanent .EXE created for that user defined functions in
hard disk) write load statement in startup script of WinRunner (c:\Program Files \ Mercury
Interactive \ WinRunner \ dat \ myinit).
load( ):
We can use this statement to load user defined .EXE from hard disk to RAM.
Syntax:
load(compiled module name, 0 / 1, 0 / 1);
unload( ):
We can use this function to unload unwanted functions from RAM. We can use this statement in
our test scripts if required.
Syntax:
unload(path of compiled module, unwanted function name);
reload( ):
Syntax:
reload(path of compiled module, unloaded function name);
OR
reload(path of compiled module, 0/1, 0/1); loads all functions
LEARNING
In general, the test automation process starts with learning to recognize objects and windows in your
application build. WinRunner 7.0 supports auto learning and pre-learning.
1. Auto Learning:
During recording time, WinRunner recognizes all objects and windows that you operate.
GUI MAP
(Diagram: recording the OK button generates the script line button_press("OK"); and the GUI Map
entry below.)
Logical Name : OK
{
class : push button
label : OK
}
Step 1: Start recording
Step 2: Recognize objects during recording
Step 3: Script generation
WinRunner maintains the recognized entries in the GUI Map. To edit these entries, we can follow
the navigation below.
In this mode (Global GUI Map File) WinRunner maintains common entries for objects and windows
in a single .gui file, which test engineers save and open explicitly.
If test engineers forget to save entries, WinRunner maintains those unsaved entries in a default buffer
(10 KB). To open the buffer, test engineers follow the navigation below.
Tools GUI Map editor view menu GUI Files(LO < temporary >).
To save / open GUI Map entries, test engineers use the file menu options in the GUI Map editor.
In this mode (GUI Map File Per Test) WinRunner maintains entries for objects & windows per every
test; each test's .gui file is saved and opened implicitly.
In general WinRunner maintains Global GUI File.
If we have to change to Per Test mode, we can use the navigation below.
Settings menu general options environment tab select GUI Map File Per Test click
apply click ok.
Note: In general test engineers are using global GUI Map file mode.
2. Pre Learning:
In general, test engineers' job starts with learning in lower versions of WinRunner (e.g., 6.0, 6.5),
because auto learning is a new concept in WinRunner 7.0.
To conduct this pre-learning before recording starts, we can use the Rapid Test Script Wizard (RTSW).
Open build & WinRunner create menu Rapid Test Script Wizard click next show
application main window click next select no test click next enter sub menu
symbol(, >>,) click next select pre-learning mode (express, comprehensive) learn
say yes / no to open project automatically during WinRunner launching click next
remember paths of startup scripts and GUI Map file click next click ok.
Sometimes test engineers perform changes in entries w.r.t. test requirements.
Sometimes our application objects' / windows' labels vary depending on multiple input
values. To create a data driven test on this type of object / window, we can modify the
corresponding object / window entry with wild card characters.
Original Entry
Modified Entry
Tools GUI Map editor select corresponding entry click modify insert wild card
changes like as above example click ok.
Sometimes our application build objects' / windows' labels vary depending on events.
To create a data driven test on this type of objects and windows, we can modify the entry
using a regular expression.
(Example: a window "Sample" contains a button whose label toggles between Start and Stop.)
Original Entry
Logical name :start
{
class: push button
label : start
}
Modified Entry
Logical name :start
{
class: push button
label : ![s][to][a-z]*
}
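The pattern in the modified entry (WinRunner's `!` prefix marks the label as a regular expression) matches both Start and Stop; this can be checked with Python's re module:

```python
import re

# WinRunner's label regex without the leading "!" marker:
# "s", then one of "t"/"o", then any run of lowercase letters
label_pattern = re.compile(r"[s][to][a-z]*")

def label_matches(label):
    """Return True when the whole label matches the entry's pattern."""
    return label_pattern.fullmatch(label) is not None
```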
Sometimes WinRunner is not able to recognize advanced technology objects in our application
build. To forcibly recognize those unrecognized objects, we can use the Virtual Object Wizard.
Navigation:
Tools menu virtual object wizard click next select expected type click next click
mark object to select non recognized area right click to release click next enter logical
name to that entry click next say yes / no to create more virtual objects click finish.
Sometimes WinRunner is not able to return all available properties of a recognized object. To get
all testable properties for that object we can follow the navigation below.
Navigation:
Tools Menu GUI Map Configuration click add Show non testable object click ok
click configuration select mapped to standard class click ok.
Sometimes more than one object in a single window has the same physical description w.r.t.
WinRunner defaults (class & label).
Navigation:
Tools Menu GUI Map Configuration select object type click configuration select
distinguishable properties into obligatory and optional list. click ok.
Note: In general, test engineers maintain MSW_id as optional for every object type,
because every object has a unique MSW_id.
(Example: a window "Sample" contains two OK buttons; adding MSW_id makes each entry unique.)
Logical Name : OK
{
class : push button
label : OK
MSW_id : XXXX
}
Navigation:
Settings General options record tab click selective recording select record only on
selected applications select record on start menu & Windows explorer if required Browse
required project path click OK.
WinRunner is a functionality testing tool, but it also provides a facility to conduct user interface
testing. In this testing WinRunner applies Microsoft's six rules to our application interface.
To apply above six rules on our application build, WinRunner uses below TSL functions.
a) load_os_api( ):
We can use this function to load application programming interface (API) system calls into RAM.
Syntax:
load_os_api( );
Note: Without loading API system calls into RAM, we are not able to conduct user interface
testing.
b) configure_chkui( ):
We can use this function to customize Microsoft's six rules to be applied on our application build.
Syntax:
configure_chkui(TRUE / FALSE, .);
c) check_ui( ):
We can use this function to apply above customized rules on specified window.
Syntax:
check_ui(Window Name);
To create a user interface test script, test engineers follow the navigation below.
Open WinRunner / Build create menu RTSW click next show application main
window click next select user interface test click next specify sub menu symbol click
next select learning mode click learn say YES / NO to open your application
automatically during WinRunner launching remember paths of start up scripts & GUI Map file
remember path of user interface test script click ok click run analyze results manually.
Note:
Sometimes RTSW does not appear in the create menu:
a) if you selected the web test option in the add-in manager;
b) if you are in per-test mode.
REGRESSION TESTING:
In general, test engineers follow the below approach after receiving a modified build from developers.
WinRunner provides a facility to automate GUI regression & bitmap regression.
We can use this option to find object properties level differences in between old build and new
build.
Navigation :
Open WinRunner / Build create menu RTSW click next show application main
window click next select use existing information click next select GUI Regression test
script click next remember path of GUI Regression test script click ok open modified
build and close old build click run analyze results manually.
We can use this option to find image-level differences between the old build and the modified build. This
regression is optional, because not all screens consist of images.
Navigation :
Open WinRunner / Build create menu RTSW click next show application main
window click next select use existing information click next select BIT Map
Regression test script click next remember path of BIT Map Regression test script click
ok open modified build and close old build click run analyze results manually.
Exception Handling:
An exception is nothing but a runtime error. To handle test execution errors in WinRunner, we can use
three types of exceptions.
a) TSL Exceptions
b) Object exceptions
c) Popup Exceptions.
a) TSL Exceptions:
We can use these exceptions to handle runtime errors depending on TSL statements' return codes.
(Example: set_window(X, 5); returns E_NOT_FOUND when window X is not open; the handler
function opens the X window.)
Navigation:
Tools exception handling select exception type as TSL click next enter exception
name enter TSL function name specify return code enter handler function name click
ok click paste click ok after reading suggestion click close record our required
navigation to recover the situation make it as compiled module write load statement of it in
start up script of WinRunner.
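The TSL-exception mechanism — when a named function returns a given code, call a handler to recover — can be sketched as a small registry in Python (the names and codes below are illustrative, not WinRunner's internals):

```python
# Map (function name, return code) -> handler that tries to recover
handlers = {}

def register_exception(func_name, return_code, handler):
    handlers[(func_name, return_code)] = handler

def run_with_exceptions(func_name, func):
    """Run a test step; if its return code has a registered handler, invoke it."""
    code = func()
    handler = handlers.get((func_name, code))
    if handler is not None:
        return handler()          # e.g. open the missing window, then retry
    return code
```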
Example:
b. Object Exceptions:
These exceptions are raised when a specified object property reaches an expected value during test
execution.
Tools exception handling select exception type as Object click new enter exception
name select traceable object select property with expected value to determine the situation enter
handler function name click ok click paste click ok after reading suggestion click close
record our required navigation to recover the situation make it as compiled module write
load statement of it in start up script of WinRunner.
Example:
c. Pop-UP Exceptions:
These exceptions are raised when a specified window comes into focus. We can use these exceptions to
skip unwanted windows in our application build during test execution.
Tools exception handling select exception type as Pop-Up click new enter exception
name show unwanted window raising during testing select handler action( press enter /
click cancel, click OK and user defined function name) click ok click close.
To administrate exceptions during test execution, test engineers use below statements.
i. exception_off( ):
We can use this function to disable a specified exception.
Syntax:
exception_off(exception name);
ii. exception_off_all( ):
We can use this function to disable all types of exceptions in your system.
Syntax:
exception_off_all( )
iii. exception_on( ):
We can use this function to enable a disabled exception.
Syntax:
exception_on(exception name);
In this test automation, test engineers apply below coverages on web interfaces.
1. Behavioral Coverage
2. Input Domain Coverage
3. Error handling Coverage (Client & server validation)
4. Calculations Coverage
5. Back End Coverage
6. URL (Uniform Resource Locator) Coverage
7. Static text testing
Among the above coverages, URL testing and static text testing are new coverages specific to web
application functionality testing.
(Architecture: Front End connects through a DSN to the Back End.)
I. URLs Testing:
It is an extra coverage in web application testing. During this test, test engineers validate link
execution and link existence. Link execution means whether the link provides the right page
or not when you click it. Link existence means whether the corresponding link is in the right
place or not.
To automate this testing using WinRunner, we can select web test option in add in manager during
WinRunner launching. We can use GUI Check Point concept to automate URLs testing. In this
automation, test engineers are creating check points on text links, image links, cell, tables and
frame.
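Link-existence checking can be sketched with Python's standard html.parser: extract every link target from a page and verify the expected links are present (the page content below is a made-up example, not WinRunner's checkpoint mechanism):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href targets of all <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def page_links(html):
    collector = LinkCollector()
    collector.feed(html)
    return collector.links
```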
a. Text Link:
It is a non standard object and it consists of a set of non standard properties such as,
Syntax:
obj_check_gui(check list, Checklist file name, expected value file.txt, time to create);
b. Image link:
It is also a non standard object and it consists a set of non standard parameters such as.
Syntax:
obj_check_gui(image file name, checklist file, expected value file.txt, time to create);
To create check points like the above, test engineers collect such information from the
development team.
c. Cell:
A cell indicates an area of a web page. It contains a set of text links & image links. To cover all
these links through a single checkpoint, we can use cell properties.
To get cell properties, test engineers select an object first and then change their selection from
the object to its parent cell.
Syntax:
win_check_gui(Cell logical name, checklist file name.ckl, expected value file.txt, time to
create);
d. Table:
It is also a non standard object and it consists of a set of non standard properties. These properties
are not suitable to conduct URL testing. Test engineers are using these properties for cells coverage
during testing.(columns, format, rows & table content).
e. Frame:
It is also a non standard object and it consists of a set of standard and non standard properties, but
test engineers use only the non standard properties for URL testing.
(Diagram: the frame's content from the .htm page is compared with expected values stored in a
.txt file.)
Syntax:
win_check_gui(frame logical name, checklist file name.ckl, expected value file.txt,
time to create);
Note: In general, test engineers conduct URL testing at frame level. If a frame consists of a
huge number of links, test engineers conduct it at cell level.
To conduct calculations & other text based tests, we can use the get text option in the create menu. This
option consists of 4 sub options when you select the web test option in the add-in manager.
Syntax:
web_obj_get_text(web object name, # row no, #column no, variable, text before,
text after, time to create)
Example (partial — sums the sizes, in KB, of the mails listed in the Rediff mail box table):
sum = 0;
set_window("Rediff", 5);
tbl_get_row_count("mail box", n);
for(i = 1; i <= n; i++)
{
...
}
c. From Selection:
To capture static text from web pages, we can use this option.
Navigation:
Create menu get text from selection select required text right click to release
select text before & text after click ok.
Syntax:
web_frame_get_text(frame logical name, variable, text before, text after, time to
create);
Example:
(A shopping page shows amounts in American $, Australian $ and Indian Rs; expected:
Indian Rs = American $ value X 45 + Australian $ value X 35. The captured text (.txt) is compared
against the page (.htm).)
Example:
obj_get_text(edit, x);
Web Functions:
1. web_link_click ( ):
Syntax:
web_link_click (link text );
2. web_image_click( ):
Syntax:
web_image_click(image file name, x, y);
3. web_browser_invoke( ):
We can use this function to open a web application through the test script.
Syntax:
web_browser_invoke(IE / NETSCAPE, URL);
Auto learning
Per test mode
Selective recording
Run time record check
Web testing concepts
GUI spy (to identify whether the object is recognizable or not)
QC Test Documents:
Company level:
Test Policy (C.E.O.)
Test Strategy (Quality Analyst / Project Manager)
Project level:
Test Methodology (Quality Analyst / Project Manager)
Test Plan (Test Lead)
Test Cases, Test Procedures, Test Scripts, Defect Reports (Test Engineer)
Final Test Summary Report (Test Lead)
I. TEST POLICY:
It is a company level document, developed by the C.E.O. It defines testing objectives and standards, e.g.:
Testing Standard : 1 defect per 250 lines of code / 1 defect per 10 function points
Resource allocation : 100% split as 64% (development & maintenance) and 36% (testing)
Testing Issues:
QC: Quality
QA/PM: Test Factor
TL: Testing Technique
TE: Test Cases
From the above model, a quality software testing process is formed with the below 15 testing issues.
1. Authorization: Whether the user is valid or not to connect to the application.
2. Access Control: Whether a valid user have permission to use specific service or not.
3. Audit Trail: Maintains metadata about user operations in our application.
4. Continuity of processing: Inter process communication (Module to Module).
5. Corrections: Meet customer requirements in terms of functionality.
6. Coupling: Co-existence with other existing software to share resources.
7. Ease of Use: User friendliness of the screens.
8. Ease of Operate: Installation, un-installation, Dumping, Downloading, uploading etc
9. File Integrity: Creation of backup.
10. Reliability: Recovery from an abnormal state.
11. Performance: Speed of processing.
12. Portable: Run on different platforms.
13. Service Levels: order of functionalities.
14. Maintainable: Whether our application build is long-time serviceable to customer site people
or not.
15. Methodology: Whether our testers are following standards or not during testing.
III.TEST METHODOLOGY:
It is a project level document. The methodology provides the testing approach to be followed for the
current project. At this level the QA / PM selects possible approaches for the corresponding project's
testing through the below procedure.
(Table: project types — Traditional, Off-the-shelf, Maintenance — mapped against applicable
testing stages.)
PET Process (Process Expert Tools and Techniques) :
It is a refined form of the V model. It defines mapping between development stages and testing
stages. Per this model, organizations maintain a separate team for functionality and system
testing; the remaining stages of testing are done by development people. This model was developed in HCL
and recognized by the QA forum of India.
After finalization of possible tests to be applied for the corresponding project, test lead category people
concentrate on test plan document preparation to define work allocation in terms of what to
test, who will test, when to test, and how to test.
To prepare test plan documents, the test plan author follows the below approach.
1. Team Formation:
In general, test planning starts with testing team formation. To define a testing team, test plan
author depends on below factors.
i. Availability of testers
ii. Test duration
iii. Availability of test environment Resources
Case Study:
Test Duration:
- Client / Server or Web or ERP - 3 to 5 months functional & system testing
- System S/W - 7 to 9 months functional & system testing
- Mission critical - 12 to 15 months functional & system testing
(robots, satellites etc.)
- Team Size - 3 : 1 (developers : Testers)
After completion of testing team formation, the test plan author analyses possible risks and mitigations.
Example:
Format:
After completion of plan document preparation, the test plan author conducts a review for
completeness and correctness. In this review the plan author follows coverage analysis.
Case Study:
Deliverable Responsibility Completion time
V. Test Design:
After completion of test planning and required training of the testing team, the corresponding testing
team members prepare a list of test cases for their responsible modules. There are three types
of test case design methods to cover core level testing (usability & functionality testing).
1. Use Case Based Test Case Design:
In general test engineers write a set of test cases depending upon the use cases in S/W RS. Every
use case describes functionality in terms of input, process and output. Depending on these use cases,
test engineers write test cases to validate that functionality.
From the above model, test engineers prepare test cases depending on the corresponding use cases,
and every test case defines a test condition to be applied.
To prepare test cases, test engineers study use cases in the below approach.
(Diagram: a login form with inputs UID and PWD and an OK button; the table distinguishes
determinant vs. dependent inputs, and output vs. outcome for the result.)
Use Case 1:
A login process allows UID & PWD to validate users. During this validation, login process allows
UID as alphanumeric from 4 to 16 characters long and PWD allows alphabets in lower case from 4
to 8 characters long.
BVA(Size) ECP(TYPE)
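The UID/PWD rules above translate directly into a validator that BVA/ECP test data can be run against (a conceptual sketch, not the application's actual code):

```python
def valid_uid(uid):
    """Alphanumeric, 4 to 16 characters (BVA on size, ECP on type)."""
    return uid.isalnum() and 4 <= len(uid) <= 16

def valid_pwd(pwd):
    """Lowercase alphabets only, 4 to 8 characters."""
    return pwd.isalpha() and pwd.islower() and 4 <= len(pwd) <= 8
```

Boundary-value data would then be min-1, min, max and max+1 for each field (e.g. UID lengths 3, 4, 16, 17).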
Use Case 2 :
In a shopping application a user can apply for different purchase orders. Every purchase order
allows selection of an item number and entry of quantity up to 10. The system returns one item's price
and the total amount depending on the given quantity.
BVA(range) ECP(TYPE)
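The calculation side of this use case — total amount = one item's price times quantity, with quantity bounded at 1..10 — can be expressed as a checkable function (illustrative only):

```python
def order_total(unit_price, qty):
    """Return the total for a purchase order; qty must be 1..10 (BVA on range)."""
    if not 1 <= qty <= 10:
        raise ValueError("quantity out of range")
    return unit_price * qty
```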
Use Case 3:
In an insurance application, user can apply for different types of insurance policies.
When they select insurance type as B, system asks age of that customer. The age should be > 18
years and < 60 years.
A door opens when a person comes in front of it. The door closes after the person goes in.
Test Case 1: Successful door opens, when person comes in front of door.
Test Case 2: Unsuccessful door open due to absence of the person in front of the door.
Test Case 3: Successful door closing after person get in.
Test Case 4: Unsuccessful door closing due to person standing at the door.
Use Case 6:
Prepare test case for money withdrawal from ATM.
Test Case 5: Unsuccessful operation due to entry of wrong pin no three times
Test Case 6: Successful selection of language
Test Case 7: Successful selection of account type
Test Case 8: Unsuccessful operation due to invalid account type selection
Use Case 7:
In an E-Banking application users can connect to the bank server using their personal computers. In
this login process the user can use the below fields.
Password 6 digit no
Area code 3 digit no, allows blank
Prefix 3 digit no, does not begins with 0 or 1.
Suffix 6 digit alphanumeric
Commands Check deposit, Money transfer, Bill pay and Mini statement.
BVA(Size) ECP(TYPE)
BVA(Size) ECP(TYPE)
BVA(Range) ECP(TYPE)
BVA(Size) ECP(TYPE)
Test Case 5: Successful selection of commands such as check deposit, money transfer, bills pay
and mini statement.
Test Case 6: Successful connect to bank server with all valid values
Test Case 7: Successful connect to bank server with out filling area code.
Test Case 8: Unsuccessful operation due to with out filling all fields except area code.
During test design test engineers are writing list of test cases in IEEE format.
P0 : Basic Functionality
P1 : General Function (I/P domain, Error handling,
Compatibility, Inter systems etc)
P2 : Cosmetic (User Interface)
6. Test Environment : Required hardware and software to execute this test case.
7. Test Effort (person/hr) : Time to execute this test case (e.g.: 20 minutes max)
8. Test Duration : Date & time
9. Test Setup : Required testing tasks to do before starting this case execution.
10. Test Procedure : Step by step procedure to execute this test case.
Format:
11. Test Case Pass/Fail Criteria: When this case is pass and when this case is fail.
Note: In general test engineers write the list of test cases along with the step by step procedure only.
Example: Prepare a test procedure for the below test case: successful file save in Notepad.
Step No | Action | I/P required | Expected
1 | Open Notepad | - | Empty editor
2 | Fill with text | - | Save icon enabled
3 | Click save icon | - | Save window appears
4 | Enter file name & click save | Unique file name | File name appears in title bar of editor
Example 2: Prepare a test scenario with expected results for the below test case:
successful mail reply in Yahoo.
Step No | Action | I/P required | Expected
1 | Login to site | Valid UID, valid PWD | Inbox appears
2 | Click Inbox | - | Mail box appears
3 | Click mail subject | - | Mail message appears
4 | Click Reply | - | Compose window appears with To: received mail ID; Sub: received mail subject; CC: off; BCC: off; MSG: received message with comments
5 | Type new message and click send | - | Acknowledgement from web server
2. Input Domain Based Test Case Design:
In general test engineers write maximum test cases depending on use cases / functional specs in
S/W RS. These functional specifications provide functional descriptions with inputs, outputs and
process, but they are not responsible for providing information about the size and type of input objects.
To collect this type of information test engineers study the data model of the responsible modules (E-R
diagrams in LLDs).
Example:
(E-R diagram of an account entity: critical attributes — A/C No, A/C Name, Balance;
non-critical — A/C Orders.)
Note: In general, test engineers prepare step-by-step procedure based test cases for
functionality testing, and valid / invalid data-matrix based test cases for input domain
testing of objects.
Case Study:
In a bank automation software, fixed deposit is a functionality. A bank employee operates the
functionality with the below inputs.
Customer Name - alphabets in lower case
Amount - Rs 1500 to 100000.00
Tenure - up to 12 months
Interest - numeric with decimal
From the functional specification (use cases), if tenure is > 10 months, interest must be > 10%.
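The fixed-deposit rules, including the use-case rule that tenure above 10 months requires interest above 10%, can be collected into one validator to drive the data matrices below (a sketch; the real screen logic is not shown in the notes):

```python
def valid_fixed_deposit(name, amount, tenure, interest):
    """Validate the four fixed-deposit inputs against the stated rules."""
    if not (name.isalpha() and name.islower()):      # alphabets in lower case
        return False
    if not 1500 <= amount <= 100000:                 # Rs 1500 to 100000.00
        return False
    if not 1 <= tenure <= 12:                        # up to 12 months
        return False
    if tenure > 10 and not interest > 10:            # rule from the use case
        return False
    return interest > 0
```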
Test Case 1:
Data Matrix:
I/P Attribute | ECP Valid | ECP Invalid | BVA (Size) Min | BVA (Size) Max
Customer Name | a to z | A to Z, 0 to 9, special characters & blank | 1 character | 256 characters
Test Case 2:
Data Matrix:
I/P Attribute | ECP Valid | ECP Invalid | BVA (Range) Min | BVA (Range) Max
Amount | 0 to 9 | A to Z, a to z, special characters & blank | 1500 | 100000
Test Case 3:
Data Matrix:
I/P Attribute | ECP Valid | ECP Invalid | BVA (Range) Min | BVA (Range) Max
Tenure | 0 to 9 | A to Z, a to z, special characters & blank | 1 | 12
Test Case 4:
Data Matrix:
I/P Attribute | ECP Valid | ECP Invalid | BVA (Range) Min | BVA (Range) Max
Interest | 0 to 9 with decimal | A to Z, a to z, special characters & blank | 1 | 100
Test Case 5:
Test Case ID: TC_FD_5
Test Case Name: Successful fixed deposit operation
Test Procedure:
Step No | Action | I/P required | Expected
1 | Login to bank software | Valid ID | Menu appears
2 | Select Fixed Deposit | - | FD form appears
3 | Fill all fields and click OK | All valid | Acknowledgement from bank server
3 | Fill all fields and click OK | Any invalid | Error message from bank server
Test Case 6:
Test Case 7:
Test Procedure:
Step No | Action | I/P required | Expected
1 | Login to bank software | Valid ID | Menu appears
2 | Select Fixed Deposit | - | FD form appears
3 | Fill all fields and click OK | Valid customer name, amount, tenure and interest, but some left blank | Error message from bank server
3. User Interface Based Test Case Design:
To conduct usability testing, test engineers write a list of test cases depending on our organisation's
user interface conventions, global interface rules and the interest of customer site people.
Examples:
(An Amount field labelled with its currency ($); a DOB field showing its expected format,
e.g. --/--/-- (DD/MM/YY).)
Test Case 5: Accuracy of data in the database as a result of user inputs.
After completing the writing of all possible test cases for the responsible modules, the testing team concentrates on a review of the test cases for completeness and correctness. In this review the testing team applies coverage analysis:
BR based coverage
Use case based coverage
Data model based coverage
User Interface based coverage
Test Responsibility based coverage
At the end of this review, the test lead prepares the Requirements Traceability Matrix (also called the Requirements Validation Matrix).
Business Requirements | Sources (use cases, data model etc) | Test Cases
xxxxxxx (Login)       | xxxxxxxx (Mail Open)                | xxxxxxxx, xxxxxxxx, xxxxxxxxx
  :                   | xxxxxxxxx (Mail Compose)            | xxxxxxxx, xxxxxx
  :                   | xxxxxxxxx (Mail Reply)              | xxxxxxx, xxxxxxx, xxxxxxx
From the above model, the traceability matrix defines the mapping between customer requirements and the test cases prepared to validate those requirements.
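The traceability idea above can be sketched as a simple mapping check. The requirement and test-case names below are made up for illustration.

```python
# Hypothetical requirements-traceability matrix: each business
# requirement maps to the test cases written to validate it.
rtm = {
    "Login":        ["TC_LOGIN_1", "TC_LOGIN_2"],
    "Mail Compose": ["TC_COMPOSE_1"],
    "Mail Reply":   [],  # no test cases yet -> coverage gap
}

def uncovered(rtm):
    """Return requirements that have no test case mapped to them."""
    return [req for req, cases in rtm.items() if not cases]

print(uncovered(rtm))  # -> ['Mail Reply']
```

A requirement with an empty test-case list is exactly the kind of gap the review's coverage analysis is meant to catch.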
After completion of test case selection and review, the testing team concentrates on the build release from development and test execution on that build.
[Diagram: development releases the build to testing; testing performs test automation and reports defects; defect resolving by development produces a modified build for Level 2 (regression) and, at the end, Level 3 (final regression).]
In general, test engineers receive the build from development in the modes below.
[Diagram: Development server (soft base) → Test Environment → Testers]
In this approach, test engineers dump the application build from the server onto their local hosts through FTP. "Soft base" means a collection of softwares.
During test execution, test engineers receive modified builds from the soft base. To distinguish old builds from the new build, the development team gives each build a unique version number that is understandable to testers. For this version controlling, developers also use version control tools (Ex: VSS (Visual SourceSafe)).
Understandable
Operatable
Observable
Consistency
Controllable
Simplicity
Maintainable
Automatable
From the above 8 testable issues, sanity testing is also known as Testability Testing or Octangle Testing.
5. Test Automation:
If test automation is possible, the testing team concentrates on test script creation using the corresponding testing tool. Every test script consists of navigational statements along with checkpoints.
Stable Build → Test Automation (selective automation: all P0 and carefully selected P1 test cases)
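A test script of this shape, navigational statements followed by checkpoints, can be sketched in plain Python. The `BankApp` driver and its methods are invented stand-ins so the script can run by itself; real automation tools (QTP, Selenium, etc.) supply their own navigation APIs.

```python
# Minimal stand-in for the application under test, so the script is runnable.
class BankApp:
    def login(self, user_id):
        self.menu_visible = (user_id == "valid_id")
    def select(self, option):
        self.current_form = option
    def submit_fd(self, amount, tenure, interest):
        # Business rule from the spec: tenure > 10 months needs interest > 10%.
        if tenure > 10 and interest <= 10:
            return "Error message"
        return "Acknowledgement"

# Test script: navigational statements, each followed by a checkpoint (assert).
app = BankApp()
app.login("valid_id")
assert app.menu_visible                        # checkpoint 1: menu appears
app.select("Fixed Deposit")
assert app.current_form == "Fixed Deposit"     # checkpoint 2: FD form appears
result = app.submit_fd(amount=5000, tenure=12, interest=12.5)
assert result == "Acknowledgement"             # checkpoint 3: valid data accepted
```

The asserts play the role of the tool's checkpoints: the script fails at the first checkpoint whose expected value does not match.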
[Diagram: test case execution statuses — In queue → In progress → Failed / Closed / Blocked / Partial pass-fail.]
7. Level 2 (Regression Testing):
During comprehensive test execution, test engineers report mismatches as defects to developers. After receiving a modified build from them, test engineers concentrate on regression testing on that modified build, to ensure the bug fixing worked and to detect any side effects.
Case 1:
If the impact (severity) of the bug resolved by the development team is high, test engineers re-execute all P0, all P1 and carefully selected P2 test cases on that modified build.
Case 2:
If the impact (severity) of the resolved bug is medium, test engineers re-execute all P0, carefully selected P1 and some P2 test cases on that modified build.
Case 3:
If the impact (severity) of the resolved bug is low, test engineers re-execute some of the P0, P1 and P2 test cases on that modified build.
Case 4:
If the development team released the modified build due to sudden changes in project requirements, test engineers re-execute all P0, all P1 and carefully selected P2 test cases with respect to those requirement modifications.
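The severity-based selection rules in Cases 1 to 3 can be sketched as a small function. This is illustrative only: "carefully selected" and "some" are tester judgment calls, approximated here by taking the first half of each list.

```python
def regression_suite(cases, severity):
    """Pick test cases to re-execute based on resolved-bug severity.

    cases: dict mapping priority ("P0"/"P1"/"P2") to a list of test case
    names. selected() stands in for the tester's careful hand-picking."""
    selected = lambda lst: lst[: max(1, len(lst) // 2)]
    if severity == "high":      # Case 1: all P0, all P1, selected P2
        return cases["P0"] + cases["P1"] + selected(cases["P2"])
    if severity == "medium":    # Case 2: all P0, selected P1, some P2
        return cases["P0"] + selected(cases["P1"]) + selected(cases["P2"])
    # Case 3 (low): some of P0, P1 and P2
    return selected(cases["P0"]) + selected(cases["P1"]) + selected(cases["P2"])

cases = {"P0": ["tc1", "tc2"], "P1": ["tc3"], "P2": ["tc4", "tc5"]}
print(regression_suite(cases, "high"))  # -> ['tc1', 'tc2', 'tc3', 'tc4']
```

The key point the code mirrors: as severity drops, fewer lower-priority cases are re-run, and only the highest severities keep the full P0/P1 set.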
During comprehensive testing, test engineers report mismatches as defects to developers through the IEEE format.
By Developers:
16. Fixed By        : PM / Team Lead
17. Resolved By     : Programmer's name
18. Resolved On     : Date of resolving
19. Resolution Type :
20. Approved By     : Sign of PM
Defect Age:
Large-Scale Organisations:
[Diagram: defect reporting hierarchy — Test Engineer → Test Lead → Test Manager (QA) on the testing side; Developer → Team Lead → Project Manager on the development side. Defects travel between the sides through transmittal reports; a rejected high-severity defect is escalated up to the Project Manager.]
New
Open / Rejected / Deferred (defect accepted, but not planned to be resolved in this version)
Closed
Reopen
Detect Defect
Reproduce Defect
Report Defect
Fix Defect
Resolve Defect
Close Defect
After receiving defect reports from testers, developers review each defect and send a resolution type back to the testers as a reply.
Ex 1: Logo missing, wrong logo, version number mistake, copyright window missing, developer names missing, tester names missing.
1. Coverage Analysis:
BR based coverage
Use case based coverage
Data model based coverage
UI based coverage
TRM based coverage
2. Bug Density:
Ex: Module A: 20%
    Module B: 20%
    Module C: 40%  <- selected for final regression
    Module D: 20%
At the end of this review, the testing team concentrates on final regression testing of the high-bug-density modules, if time is available.
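Bug density as used above is just each module's share of the total reported defects. A quick sketch with made-up defect counts:

```python
# Defects found per module (illustrative numbers matching the example above).
defects = {"A": 10, "B": 10, "C": 20, "D": 10}
total = sum(defects.values())

# Bug density: each module's percentage share of all defects.
density = {m: round(100 * n / total) for m, n in defects.items()}
print(density)  # -> {'A': 20, 'B': 20, 'C': 40, 'D': 20}

# The highest-density modules are the final regression candidates:
hotspots = [m for m, d in density.items() if d == max(density.values())]
print(hotspots)  # -> ['C']
```

With these counts, module C carries 40% of the defects and would be the focus of final regression.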
[Diagram: regression cycle — gather requirements → test effort estimation → plan regression → regression → final regression → reporting.]
After completion of the final regression cycles, our organisation's management concentrates on user acceptance testing to collect feedback. There are two approaches to conduct this testing: α-test and β-test.
X. Sign Off:
After completion of user acceptance testing and the resulting modifications, the test lead concentrates on creating the final test summary report. It is a part of the software release note. This final test summary report consists of the documents below.
Test Strategy / Methodology (TRM)
System Test Plan
Requirements Traceability Matrix
Automated Test Scripts
Bugs Summary Report
Stability:
[Graph: defect arrival rate — number of defects on the y-axis plotted against time on the x-axis.]
Sufficiency:
Requirements Coverage
Type-Trigger analysis
These measurements are used by the test lead during the testing process (twice a week).
Test Status:
Completed
In progress
Yet to execute
Delays in Delivery:
Test Efficiency:
Test Effectiveness:
Requirements Coverage
Type-Trigger analysis
Test Efficiency: