
eTest Center, Bangalore

Agenda
• Software Testing - Why needed?

• Testing Methodology Followed

• Test Automation

• Different testing types & Tools used

• WinRunner - Functionality Testing Tool


Software Testing

• Determine whether the system meets the requirements specified by the client

• Find bugs and track them through defect-tracking tools

• Improve the quality of the application and add value to the organization

• Track usability issues that are not specified explicitly in the client requirements

Software Testing Methodologies
• Waterfall methodology
  – Unit Testing
  – Integration Testing
  – System Testing
  – User Acceptance Testing

• Incremental methodology
E-Testing Methodology

[Process diagram: an iterative cycle split between onsite and offshore teams. Requirement Analysis leads to Strategy Formulation, Test Planning, Test Case Generation, Scripting, and Test Execution, followed by Defects/Release and Post Deployment Evaluation. A Knowledge Repository provides inputs on tools, checklists, and environment. Testing types include functional, stress, performance, and regression; supporting tools include functional test tools, load test tools, and a test management tool.]
Automation – Why Required?

• Reduced testing time

• Consistent test procedures – ensures process repeatability and resource independence, and eliminates the errors of manual testing

• Reduced QA cost – the upfront cost of automated testing is easily recovered over the lifetime of the product

• Improved testing productivity – test suites can be run earlier and more often

• Proof of adequate testing

• Relief from tedious work – test team members can focus on quality areas
Test Automation

● Functionality Testing
Tools for functionality testing are used mainly when the application has to be tested across a number of hardware and browser combinations.
Tools: SilkTest, SQA Robot, WinRunner.

● Performance Testing
To test the scalability of the application and to determine performance bottlenecks and stability under high load.
Tools: Silk Performer, LoadRunner, WebLOAD, WAS.
Which Test Cases to Automate?
• Tests that need to be run for every build of the application (sanity check, regression test)
• Tests that use multiple data values for the same actions (data-driven tests; see the sketch below)
• Tests that require detailed information from application internals (e.g., SQL, GUI attributes)
• Stress/load testing
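
For the data-driven case, WinRunner's TSL has built-in data table functions. A minimal sketch, assuming a data table with "username" and "password" columns (the table and column names here are hypothetical):

table = "default.xls";
rc = ddt_open (table, DDT_MODE_READ);             # open the data table
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open data table.");
ddt_get_row_count (table, row_count);             # how many data rows
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);                       # make row i the active row
    set_window ("Login", 10);
    edit_set ("Username", ddt_val (table, "username"));
    password_edit_set ("Password", ddt_val (table, "password"));
    button_press ("OK");
}
ddt_close (table);                                # release the table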
Which Test Cases Not to Automate?
• Usability testing
  ❏ "How easy is the application to use?"
• One-time testing
• "ASAP" testing
  ❏ "We need to test NOW!"
• Ad hoc/random testing
  ❏ based on intuition and knowledge of the application
• Tests without predictable results
From Manual to Automated Testing

Manual testing:
1. Perform user actions
2. Wait for processes to complete
3. Verify the AUT functions as expected
4. Repeat steps until all applications are verified

Automated testing (sketched in TSL below):
1. Generate automated script
2. Synchronize script playback to application performance
3. Add verification
4. Run test or suite of tests
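
As a rough illustration, the four automated steps might appear in a single WinRunner (TSL) test like this; the window, field, and checklist names are hypothetical:

# 1. Generated script: recorded user actions
set_window ("Automobile Purchase Form", 10);
edit_set ("Customer Name", "Thomas Paine");
button_press ("Insert Sale");
# 2. Synchronization: wait up to 30 seconds for the status field to update
obj_wait_info ("Status", "value", "Sale inserted", 30);
# 3. Verification: a GUI checkpoint against stored expected results
obj_check_gui ("Order Number", "list1.ckl", "gui1", 1);
# 4. The saved test can then run on its own, or be called from a driver
#    script that runs a whole suite:  call "insert_sale" ();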
Testing Is a Team Effort

TEAM MEMBER – RESPONSIBILITY
• Project Manager – manages the testing process
• Business Analyst – analyzes the enterprise and creates tests
• Developer – develops the applications and performs defect fixes
• WinRunner Expert – creates automated tests based on planning documentation and requirement specifications
• Subject Matter Expert (business user) – understands how applications work in terms of navigation and data
• System Administrators – manage the test environment
Automated Testing Process - A Team Effort

Typical responsibilities at each step:
1. Generate automated script – WinRunner Expert, Subject Matter Experts, Business Analysts
2. Synchronize playback to application – WinRunner Expert
3. Add verification – WinRunner Expert, Business Analysts
4. Run test or suite of tests – WinRunner Expert, Business Analysts, System Administrators
Testing Process

• Gather test documentation
  – What type of testing is required for the AUT?
  – Which test data to use?
  – What results are expected?

• Learn the AUT
  – What screens to navigate?
  – What visual cues to establish?

• Confer with the project team
  – Functional experts
Mercury Interactive's WinRunner

STEPS INVOLVED:
• Script generation
• Customization of scripts
• Parameterization of data
• Maintenance of test scripts in test suites
• Save test results
1. Record user actions in script
2. Synchronize script to application under test
3. Add verification statements to check AUT
4. Run test or suite of tests

Recording and Playback

Analog vs. Context Sensitive Scripts

Initial/End Conditions

The GUI Map

RECORDING and PLAYBACK


What Happens During Recording?

[Screenshot: the user fills in the "Automobile Purchase Form" – customer name, address, date, make, year, model – and presses Insert Sale. WinRunner records each action as a TSL statement:]

set_window ("Automobile Purchase Form", 10);
edit_set ("Customer Name", "Thomas Paine");
edit_set ("Address", "234 Willow Drive");
edit_set ("Date", "12/12/03");
list_select_item ("Make", "BMW");
edit_set ("Year", "1973");
edit_set ("Model", "2002tii");
button_press ("Insert Sale");
What Happens During Playback?

[Screenshot: WinRunner replays the recorded TSL statements against the same form; the AUT is filled in and responds "Purchase Completed...":]

set_window ("Automobile Purchase Form", 10);
edit_set ("Customer Name", "Thomas Paine");
edit_set ("Address", "234 Willow Drive");
edit_set ("Date", "12/12/03");
list_select_item ("Make", "BMW");
edit_set ("Year", "1973");
edit_set ("Model", "2002tii");
button_press ("Insert Sale");
Two Recording Modes

Context Sensitive:
• When the application is based on GUI objects
• Default mode
• Recommended mode

Analog:
• When the application has non-GUI areas (e.g., a drawing application)
• When mouse tracks are necessary for correct execution
• When you are unable to use Context Sensitive

TIP: A test can combine both Context Sensitive and Analog statements (see the sketch below).
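
A minimal sketch of such a mixed test, assuming a hypothetical "Paint" application; the analog statements replay mouse tracks captured during recording:

set_window ("Paint", 10);                 # Context Sensitive: activate the window
menu_select_item ("Tools;Brush");         # Context Sensitive: pick the brush tool
move_locator_track (1);                   # Analog: replay recorded mouse track 1
mtype ("<T30><kLeft>-<kLeft>+");          # Analog: press and release the left button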
Context Sensitive Recording

● Object based
● Readable script
● Maintainable script
(editable)
● Script not affected by user
interface changes
❏ if object moves location on
GUI, script will still replay
correctly
● Portable script
❏ a context sensitive script can
be ported to different
platforms with different
configurations
A Closer Look at GUI Objects

[Screenshot of a sample window labeling the common GUI objects: menu, window, static text, list item, edit field, scroll bar, frame, radio button, push button]
User Actions in the Test Script

• Specify window for input:
  set_window ("Login", 10);
• Type input into an edit field:
  edit_set ("Username", "thomas");
• Type encrypted input into a password field:
  password_edit_set ("Password:", "kzptnzet");
• Press a button to submit:
  button_press ("OK");
• Specify a list box selection:
  set_window ("Automobile Purchase Form", 10);
  list_select_item ("Make", "BMW");
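
Put together, these statements form one small Context Sensitive test (all names are taken from the examples in this deck):

set_window ("Login", 10);
edit_set ("Username", "thomas");
password_edit_set ("Password:", "kzptnzet");
button_press ("OK");
set_window ("Automobile Purchase Form", 10);
list_select_item ("Make", "BMW");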
1. Record user actions in script
2. Synchronize script to application under test
3. Add verification statements to check AUT
4. Run test or suite of tests

Recording and Playback

Analog vs. Context Sensitive Scripts

Initial/End Conditions

The GUI Map

ANALOG vs.
CONTEXT SENSITIVE
SCRIPTS
Context Sensitive Script Review

set_window ("Save As");
edit_set ("File Name", "output14");
button_press ("OK");

[Screenshot: the Save As dialog with "output14" typed in the File Name field]
Analog Recording

● Screen-coordinate dependent
● Test script describes mouse and keyboard activities
● 3 commands:
  ❏ mouse press/release
  ❏ mouse move
  ❏ keyboard type
● Covers all types of applications
Analog Script

move_locator_track (1);                   # mouse movement
mtype ("<T55><kLeft>-<kLeft>+");          # mouse click (press and release left button)
type ("<t3>output14");                    # keyboard input, with timing
move_locator_track (2);
mtype ("<T35><kLeft>-<kLeft>+");

[Screenshot: the Save As dialog with "output14" typed via analog input]
Analog or Context-Sensitive?

Application under test        Functionality                            Mode
Graphics program              Paintbrush stroke                        Analog
Graphics program              Preferences checkboxes                   Context Sensitive
Virtual reality environment   Mouse-based movement controls            Analog
Client/server database        Data entry using standard GUI objects    Context Sensitive

• Context Sensitive statements describe actions made to GUI objects and are recommended for most situations
• Analog statements are useful for literally describing the keyboard, mouse, and mouse button input of the user
WinRunner Tracks the AUT's Windows and Objects With the GUI Map File

The GUI Map file contains:
• The windows of the AUT
• The objects within each window
• The physical attributes that create each object's unique identification

Example GUI Map file for a Login window:

WINDOW: Login
  Name (logical)   Physical Description
  Name             class: edit, attached_text: "Name"
  Password         class: edit, attached_text: "Password"
  OK               class: push_button, label: "OK"
GUI Map Editor

• A visual tree displays the windows and objects contained in the GUI Map file
• The first level consists of all windows in the AUT (parent windows, shown by logical name)
• The second level consists of the objects uniquely identified within each parent window (child objects, shown by logical names)
• The physical description of the highlighted window or object is shown below the tree
The GUI Map

Characteristics:
• Allows separation of physical attributes from test scripts
• Enables WinRunner to uniquely identify objects in the AUT using physical attributes
• Allows WinRunner to refer to objects in the script using an intuitive logical name
• Provides the connection between logical names and physical attributes

Strengths:
• Maintainability – if a button label changes in the application, update the button description once in the GUI map rather than in 500 tests
• Readability – button_press("Insert") instead of button_press("{class: ThunderSSCommand}")
• Portability – use the same script for all platforms, with a different GUI map for each platform (see the sketch below)
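
A sketch of the portability point: the same script with a different GUI map loaded per platform. The map file paths are hypothetical:

if (getenv ("OS") == "Windows_NT")
    GUI_load ("C:\\maps\\flights_win.gui");      # Windows object descriptions
else
    GUI_load ("/maps/flights_motif.gui");        # Motif object descriptions
set_window ("Login", 10);                        # logical names resolve via the loaded map
button_press ("OK");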
Check Points

● GUI checkpoints

● Database checkpoints

● Bitmap checkpoints

● Text checkpoints
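
One illustrative TSL statement per checkpoint type; the checklist and expected-results file names (list1.ckl, gui1, and so on) are normally generated by WinRunner when the checkpoint is inserted and are placeholders here:

obj_check_gui ("Order Number", "list1.ckl", "gui1", 1);   # GUI checkpoint on one object
db_check ("list1.cdl", "dbvf1");                          # database checkpoint
obj_check_bitmap ("Logo", "Img1", 5);                     # bitmap checkpoint
obj_get_text ("Status", text);                            # text check: read the text...
if (text != "Purchase Completed...")                      # ...and compare it yourself
    tl_step ("status text", 1, "Unexpected status: " & text);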
Why Synchronize?

Without a synchronization point:
1. The script runs and inputs data to the AUT; the AUT accepts the input
2. The AUT sends the data to the database server and waits for the server
3. The script attempts its next step while the AUT is still waiting
4. The script fails and cannot continue

With a synchronization point:
1. The script runs and inputs data to the AUT; the AUT accepts the input
2. The AUT sends the data to the database server; the server processes the data
3. The script waits at the synchronization point until the server returns results and the client affirms the transaction is complete
4. The script continues
Synchronization Points

• The AUT's performance may slow down as the number of users increases
• Synchronization points allow WinRunner to wait for the AUT, just like a real user
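
A few examples of TSL synchronization statements, as a sketch; the object names and the timeouts are illustrative:

obj_wait_info ("Status", "value", "Transaction complete", 60);  # wait for an object property
win_wait_info ("Flight Reservation", "enabled", 1, 60);         # wait for a window property
wait (5);                                                       # fixed pause, as a last resort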
Playback
Test Results Report

• Checkpoint outcome is either OK or mismatch
• Checkpoint details can be opened in a separate window

[Screenshot: the Test Results window for the Insert_Sale test, with each checkpoint listed in a results tree]
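
Beyond checkpoint outcomes, a test can write its own pass/fail entries into the Test Results report with tl_step; a small sketch, where the field name and expected value are hypothetical:

expected_total = "1,250.00";
obj_get_text ("Total", total);                     # read the field under test
if (total == expected_total)
    tl_step ("verify total", 0, "Total matches the expected value");   # status 0 = pass
else
    tl_step ("verify total", 1, "Total is " & total);                  # non-zero = fail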
Thank You
