
UNIT – V

Software Testing Tools


Debugging is a methodical process of finding and reducing the number of bugs, or defects, in a
computer program or a piece of electronic hardware, thus making it behave as expected. Debugging tends to
be harder when various subsystems are tightly coupled, as changes in one may cause bugs to emerge in
another. Many books have been written about debugging, as it involves numerous
aspects, including interactive debugging, control flow, integration testing, log files, monitoring (application,
system), memory dumps, profiling, Statistical Process Control, and special design tactics to improve detection
while simplifying changes.

Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial
task, for example with parallel processes or certain unusual software bugs. A specific user
environment and usage history can also make the problem difficult to reproduce.

After the bug is reproduced, the input of the program may need to be simplified to make it easier to
debug. For example, a bug in a compiler can make it crash when parsing some large source file.
However, after simplification of the test case, only a few lines from the original source file may be
sufficient to reproduce the same crash. Such simplification can be done manually, using a divide-and-
conquer approach: the programmer tries to remove some parts of the original test case and checks whether the
problem still exists. When debugging a problem in a GUI, the programmer can try to skip some user
interaction from the original problem description and check whether the remaining actions are sufficient for the bug
to appear.
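A minimal, self-contained C sketch of this divide-and-conquer simplification is shown below. The still_fails() predicate, the MAX_LINES limit and the sample input lines are hypothetical placeholders; in practice the predicate would re-run the program on the reduced input and report whether the original crash still occurs.

/* Minimal sketch of divide-and-conquer test-case simplification.
 * still_fails() is a hypothetical stand-in for "run the program on
 * this reduced input and see whether the original crash persists". */
#include <stdio.h>
#include <string.h>

#define MAX_LINES 1024

static int still_fails(const char *lines[], const int keep[], int n) {
    (void)lines;                  /* a real check would rebuild and rerun */
    int kept = 0;
    for (int i = 0; i < n; i++)
        if (keep[i]) kept++;
    return kept > 0 && keep[0];   /* toy rule: line 0 triggers the bug */
}

int main(void) {
    const char *lines[] = { "int x = 1/0;", "puts(\"a\");", "puts(\"b\");", "puts(\"c\");" };
    int n = 4, keep[MAX_LINES] = {0};
    for (int i = 0; i < n; i++) keep[i] = 1;

    /* Try dropping progressively smaller chunks; keep a drop only if the failure persists. */
    for (int chunk = n / 2; chunk >= 1; chunk /= 2) {
        for (int start = 0; start < n; start += chunk) {
            int saved[MAX_LINES];
            memcpy(saved, keep, sizeof(keep));
            for (int i = start; i < start + chunk && i < n; i++) keep[i] = 0;
            if (!still_fails(lines, keep, n))
                memcpy(keep, saved, sizeof(keep));  /* chunk was needed: restore it */
        }
    }
    puts("Simplified test case:");
    for (int i = 0; i < n; i++)
        if (keep[i]) puts(lines[i]);
    return 0;
}

In this toy example only the first input line is needed to reproduce the failure, so the loop discards everything else; a real reduction follows the same pattern with a genuine re-run of the failing program.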

After the test case is sufficiently simplified, a programmer can use a debugger tool to examine program
states (values of variables, plus the call stack) and track down the origin of the problem(s).
Alternatively, tracing can be used. In simple cases, tracing is just a few print statements, which output
the values of variables at certain points of program execution.

Techniques
 Print debugging (or tracing) is the act of watching (live or recorded) trace statements, or print
statements, that indicate the flow of execution of a process. This is sometimes called printf
debugging, due to the use of the printf statement in C (a short C sketch of this style appears after
this list). This kind of debugging was turned on by the command TRON in the original versions of
the novice-oriented BASIC programming language. TRON stood for "Trace On" and caused the
line number of each BASIC command line to be printed as the program ran.

 Remote debugging is the process of debugging a program running on a system different from the
one running the debugger. To start remote debugging, the debugger connects to the remote system over a
network. The debugger can then control the execution of the program on the remote system and
retrieve information about its state.

 Post-mortem debugging is debugging of the program after it has already crashed. Related
techniques often include various tracing techniques (for example, [8]) and/or analysis of the memory
dump (or core dump) of the crashed process. The dump of the process may be obtained
automatically by the system (for example, when the process has terminated due to an unhandled
exception), by a programmer-inserted instruction, or manually by the interactive user.

 "Wolf fence" algorithm: Edward Gauss described this simple but very useful and now famous
algorithm in a 1982 article for communications of the ACM as follows: "There's one wolf in
Alaska, how do you find it? First build a fence down the middle of the state, wait for the wolf to
howl, determine which side of the fence it is on. Repeat process on that side only, until you get
to the point where you can see the wolf." [9] This is implemented e.g. in the Git version control
system as the command git bisect, which uses the above algorithm to determine which commit
introduced a particular bug.

 Delta Debugging - a technique for automating test case simplification.[10]:p.123

 Saff Squeeze - a technique for isolating a failure within a test by progressively inlining parts
of the failing test.
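As an illustration of the first technique above, the following is a minimal, self-contained C sketch of printf-style tracing; the discount() function and its trace messages are purely hypothetical:

#include <stdio.h>

/* Hypothetical function under investigation; the fprintf calls are the
 * "print debugging" trace statements described above. */
static int discount(int price, int percent) {
    fprintf(stderr, "TRACE: discount(price=%d, percent=%d)\n", price, percent);
    int off = price * percent / 100;
    fprintf(stderr, "TRACE: off=%d\n", off);
    return price - off;
}

int main(void) {
    printf("final price: %d\n", discount(200, 15));
    return 0;
}

Running the program prints the two TRACE lines on stderr as the function executes, which is exactly the kind of execution-flow evidence that print debugging relies on.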

DEBUGGING APPROACHES:

 Several approaches are practiced in the industry for debugging the software under test (SUT).
 In general, debugging approaches can be grouped into three categories:
• Brute force
• Back tracking
• Cause elimination

 1) Brute Force Method:

 This method is the most common and least efficient way of isolating the cause of a software error. It is
applied when all else fails. In this method, a printout of all registers and relevant
memory locations is obtained and studied. All dumps should be well documented and retained
for possible use on subsequent problems.

2) Back Tracking Method:

 It is a fairly popular debugging approach that is used effectively for small applications. The
process starts at the site where a particular symptom is detected; from there, the source code is
traced backwards until the site of the cause is found. Unfortunately, as the number
of source lines increases, the number of potential backward paths may become unmanageably large.

3) Cause Elimination:

 The third approach to debugging, cause elimination, is based on induction or deduction
and introduces the concept of binary partitioning (it is therefore also called the induction and
deduction approach). Data related to the error occurrence are organized to isolate potential causes. A
"cause hypothesis" is devised and the data are used to prove or disprove the hypothesis.
Alternatively, a list of all possible causes is developed and tests are conducted to eliminate
each one. If initial tests indicate that a particular cause hypothesis shows promise, the data are
refined in an attempt to isolate the bug. A short sketch of the binary partitioning idea is given below.
 Each of the above debugging approaches can be supplemented with debugging tools. A wide
variety of debugging tools can be applied, such as debugging compilers, dynamic debugging aids,
automatic test case generators, memory dumps and cross-reference maps. The main debugging and
testing tools available in the market are described in the sections that follow.
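To make the binary partitioning idea concrete (the same idea that underlies the wolf-fence algorithm and git bisect described earlier), here is a minimal, self-contained C sketch; the is_buggy() predicate and the version numbers are hypothetical stand-ins for "build this version and run the failing test":

#include <stdio.h>

/* Hypothetical predicate: 1 if the bug is present at version v.
 * In practice this would build version v and run the failing test. */
static int is_buggy(int v) {
    const int first_bad = 37;   /* assumed, for illustration only */
    return v >= first_bad;
}

int main(void) {
    int good = 1, bad = 100;    /* known-good and known-bad versions */

    /* Wolf-fence / binary partitioning: halve the suspect range until
     * only the version that introduced the bug remains. */
    while (bad - good > 1) {
        int mid = good + (bad - good) / 2;
        if (is_buggy(mid))
            bad = mid;          /* the "howl" came from the lower half */
        else
            good = mid;
    }
    printf("Bug introduced in version %d\n", bad);
    return 0;
}

Each iteration halves the suspect range, so roughly log2(N) test runs are enough to isolate the offending version among N candidates.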


OVERVIEW:

Why use Software Testing Tools?

Most companies today experience some form of attack from criminal hackers and other malicious
threats. As the enterprise network has become more secure, attackers have turned their attention to the
application layer, which, according to Gartner, now contains 90 percent of all vulnerabilities. To protect
the enterprise, security administrators must perform detailed software testing and code analysis when
developing or buying software. Yet such code analysis can be extremely costly - on-premises software
testing tools are expensive to purchase, deploy, and maintain, and they can easily impair development
timelines to the point where speed-to-market is compromised. That's why so many leading enterprises
are turning to highly effective and cost-efficient software testing tools from Veracode.

What is a Software Testing Tool?

Software Testing tools are used as part of the testing phase within the software development lifecycle
(SDLC) to automate certain tasks, improve testing efficiency, and discover issues that might be difficult
to find using manual review alone. Veracode provides software testing tools that search for security
vulnerabilities within your applications. Veracode’s software testing tool performs both dynamic and
static code analysis and finds security vulnerabilities that include malicious code as well as the absence
of functionality that may lead to security breaches. Veracode's binary scanning approach produces more
accurate testing results, using methodologies developed and continually refined by a team of world-
class experts.

Top 15 Performance Testing Tools

 Apache JMeter
 NeoLoad
 LoadRunner
 LoadUI
 WebLOAD
 WAPT
 Loadster
 LoadImpact

 Rational Performance Tester
 Testing Anywhere
 OpenSTA
 QEngine (ManageEngine)
 Loadstorm
 CloudTest
 Httperf

Software Testing Tools: WinRunner

WinRunner is one of the most widely used automated software testing tools.


Main features of WinRunner:

 Developed by Mercury Interactive
 Functional testing tool
 Supports client/server and web technologies such as VB, VC++, D2K, Java, HTML, PowerBuilder,
Delphi and Siebel (ERP)
 To support .NET, XML, SAP, PeopleSoft, Oracle applications and multimedia, QTP is used instead.
 WinRunner runs on Windows only.
 XRunner runs only on UNIX and Linux.
 The tool was developed in C in a VC++ environment.
 To automate manual tests, WinRunner uses TSL (Test Script Language, a C-like scripting language).

The main testing process in WinRunner is:


1) Learning
Recognition of the objects and windows in our application by WinRunner is called learning. WinRunner
7.0 follows auto learning.
2) Recording
WinRunner records our manual business operations in TSL.
3) Edit Script
Depending on the corresponding manual test, the test engineer inserts checkpoints into the recorded script.

4) Run Script
During test script execution, WinRunner compares the tester-given expected values with the application's
actual values and returns the results.
5) Analyze Results
The tester analyzes the tool-given results and concentrates on defect tracking if required.

WinRunner as a GUI-based load testing tool
We use WinRunner as a load testing tool operating at the GUI layer, as it allows us to record
and play back user actions from a vast variety of user applications as if a real user had
manually executed those actions. We use WinRunner in addition to LoadRunner when we
want to record the user-experience response time. Visit mercuryinteractive.com/products/WinRunner/
for detailed information on WinRunner.

The WinRunner screen print shown below simulates a user starting up an Internet Explorer
session and connecting to www.google.com.au before performing a Google search on the text:
"Mercury Interactive". As can be seen, WinRunner records each of the actions that the user
performed on the desktop to get to and search the Google web site. This is in contrast to the
way that VUGen records the protocol that the client application generates. However, both
tools have their part to play in a load test. The screen image below is a script example of how
WinRunner recorded the events on the Windows desktop to "Press Start" and then invoke
Internet Explorer by selecting the option marked "Internet". The text "google.com" was
recorded as being entered as a URL, and the "return" key (<kReturn>) was then recorded so
that IE loaded the Google site into the browser window. The characters "Mercury Interactive"
were then recorded as they were typed into the Google search field, followed by another
<kReturn> to initiate the search.

As can be seen from this script example, WinRunner does nothing at the protocol layer (like
VUGen would) but records and plays back user events, so that the underlying application
operates as if a person was sitting at the desktop.

For WinRunner to operate, it needs to be in control of the PC, so that it can execute the user
actions that had been previously recorded. This is why one cannot execute a load test with
WinRunner as the means of load generation. In order to simulate 100 users, one would need
100 PCs with WinRunner on each PC.

However, WinRunner is a valuable piece of load testing technology when used properly in a
load test, as it is the only means of determining the actual user response time, taking into
account the processing that is executed on the client's hardware. (As VUGen operates at a
protocol level, it is only able to measure at a protocol level.)

Please visit performance tests and network sensitivity tests for other testing situations where
it is very appropriate to use WinRunner. By using WinRunner in these situations,
WinRunner usage will be extended beyond automated functional testing, increasing its value
to your testing team and organization.

WinRunner helps you automate the testing process, from test development to execution. You
create adaptable and reusable test scripts that challenge the functionality of your application.
Prior to a software release, you can run these tests in a single overnight run, enabling you to
detect defects early and ensure superior software quality.

SILK TEST:

Silk Test is mostly used for functional and regression testing. This tool can be used for Web,
Java or .NET and client-server applications. SilkTest offers many features such as a basic workflow for
recording tests, a data-driven workflow for linking a single test case to test data values stored in external
tables, and code completion in the SilkTest IDE to improve the productivity of testers. SilkTest also
offers test case management, test planning, database functions, date/time functions, etc.,
to make your automation more effective. Apart from the features provided by SilkTest, plenty of
4Test scripts are available on the internet for you to download and use in your project.

For novice users, SilkTest offers a record/playback option, and seasoned SilkTest users can
write code in the 4Test language, which is the scripting language of SilkTest. This scripting language
is also referred to as the Visual 4Test scripting language. 4Test is an object-oriented 4GL and offers
sufficient built-in functionality to ease our work.

Architecture

Architecture-wise, SilkTest can be divided into two parts: the SilkTest host and the SilkTest agent. Test
automation is normally developed on the SilkTest host, using either record/playback or manual scripting
in 4Test. Normally, test cases for SilkTest are developed in the IDE (Integrated Development
Environment) provided by the SilkTest host software. Executing the automated test suite is the responsibility
of the SilkTest agents. SilkTest even supports parallel execution of test cases with the help of these agents:
the SilkTest host can communicate with agents residing on multiple machines and execute automated
test cases on multiple machines simultaneously.

One example where this could be used is when you need to test your application on Windows
2000, Windows XP and Windows 2000 SP2: you can have the SilkTest agent installed on these machines
and run the automated test suite in parallel on all of them at once. A feature like this saves precious
execution time for the testers.

File Types

Test cases in SilkTest can be managed in the form of a Test Suite (.S) or a Test Plan (.P). Individual
test cases are normally recorded or manually automated in Script (.T) files. These (.T) files use
resources from include files (.INC). Include files contain helper routines,
GUI object declarations, etc., which are used by the test cases present in the (.T) file. In the plan file (.P), these
individual .T files can be called and the test cases present in them can be executed. Similarly, using
a Test Suite (.S), you can call the test cases present in all the Test Plans.

For example, you might want to have all the test cases for Functionality_X in a plan file
TestFunctionality_X.p. Similarly, you might have TestFunctionality_Y.p to represent all the test
cases for Functionality_Y. Hierarchically, the suite file sits on top of these plan files, and different
plan files can be called from a suite file (.s). In this example, the suite file TestXY.s would call
TestFunctionality_X.p and TestFunctionality_Y.p. So in simple terms, a suite contains plans, a plan
contains test cases, and test cases use resources/helpers from the include files.

Apart from these files, SilkTest also produces results in a different file format, .RES. In
older versions of SilkTest, results were produced only in the .RES format, and the only option for
accessing them was to check them from the IDE. Obviously, this was not the right approach, since people
outside the testing team are also interested in the results and will not have IDEs on their systems. To address
this issue, SilkTest provides the capability to log results directly into MS SQL Server. This MS SQL Server can be
installed on any machine in the network and act as a result repository. Other machines connect
to this computer, and during execution results can be updated directly into this database. These results
can be viewed using the SilkTest Results Viewer, which allows you to check them from any
machine using a browser.

SilkTest is a tool for automated functional and regression testing of enterprise applications. [1] It
was originally developed by Segue Software, which was acquired by Borland in 2006. Borland was
in turn acquired by Micro Focus International in 2009.

SilkTest offers various clients:

 SilkTest Classic uses the domain-specific 4Test language for automation scripting. It is an
object-oriented language similar to C++ and uses the concepts of classes, objects, and
inheritance.
 Silk4J allows automation in Eclipse using Java as the scripting language.
 Silk4Net allows the same in Visual Studio using VB or C#.
 SilkTest Workbench allows automation testing on a visual level (similar to the former TestPartner)
as well as using VB.NET as a scripting language.

Main features of SilkTest


 SilkTest Host: contains all the source script files.
 SilkTest Agent: translates the script commands into GUI commands (User actions). These
commands can be executed on the same machine as the host or on a remote machine.

SilkTest identifies all windows and controls of the application under test as objects and defines all of
the properties and attributes of each window. Thus it partially supports an object-oriented
implementation.

SilkTest can be run to identify mouse movements along with keystrokes (useful for custom objects). It
can use either record-and-playback or descriptive programming methods to capture the dialogs.

Extensions supported by SilkTest: .NET, Java (Swing, SWT), DOM, IE, Firefox, SAP Windows
GUI. SilkTest uses the Silk Bitmap Tool (bitview.exe) to capture and compare windows and areas.

File Types used in SilkTest


Test Plan (.pln): used to create a suite of tests when combined with test scripts. Example : test.pln

-Myfirsttest
script : Mytest.t
testcase:firsttest
-Mysecondtest
script : Mytest.t
testcase:secondtest

Here Mytest.t is the main script file, and firsttest and secondtest are test case names in the Mytest.t file.
When this plan file is run, it will automatically pick up the first and second test cases in order and run
them.

Test script (.t): used to write the actual test scripts. Example: Mytest.t (automating the Notepad
application)

use "Mytest.inc"
-testcase firsttest ()
notepad.invoke()//invoke works for some applications
notepad.file.new.pick()
notepad.file.exit.pick()
-testcase secondtest ()
notepad.invoke()
notepad.help.helptopics.pick()
notepad.exit()

When this script runs it will execute firsttest and secondtest in order and then exit the notepad
application.

Frame file (.inc): the abstraction layer used to define the windows and controls in the application under
test that will be further referenced in the .t files. Example: Mytest.inc

-Window mainwin notepad


-Menu File
Menuitem New
-Menu Edit
Menuitem Replace

Here 'Window' is the main class, with 'Menu' and 'Menuitem' as subclasses; File and Replace are objects.

Result file (.res): contains the test run results with the names of passed or failed tests along with a description of
the failures. It can also contain log messages. Other than the results file, all files are text-based and can be
edited in a text editor or the SilkTest IDE. As of SilkTest 2006, the files can be saved in either ANSI or
UTF-8 format. All of the source files are compiled into pseudocode object files either when loaded or
at run time if the source files are changed.

LOAD RUNNER

LoadRunner is a commercial performance testing tool owned by Hewlett-Packard.


LoadRunner's history began in 1994 with a small console to control XRunner sessions running on X
Windows workstations. LoadRunner's interface and platform evolution has followed the changes in the
industry. By version 4, the LoadRunner Controller was available for execution on Windows, including
control of WinRunner clients and custom-programmed API virtual users. The UNIX Controller
continued to be available on multiple platforms through version 5 and was retired when the Windows-
based Controller gained the ability to control UNIX/Linux-based load generators with version 6 of
LoadRunner. Version 6 saw the inclusion of the analysis engine, and version 8 the inclusion of 500 points of SiteScope
to handle unified monitoring. Version numbers 10.x of LoadRunner were skipped altogether in favor
of moving from 9.5x directly to version 11 of LoadRunner, announced in the summer of 2010.
LoadRunner supports a varied number of interfaces, many of which have a historical basis in how
client-server computing has changed over the past two decades. The current version of LoadRunner
supports QuickTest Professional exclusively as a GUI virtual user, leaving behind the support for
WinRunner and XRunner. Interfaces as varied as Windows Sockets at the bottom end of the stack and
RDP/Citrix at the top end are available. In between these layers are sandwiched protocol support for
databases, distributed computing models, web technologies, specific applications and language
templates for times when no canned support exists. With LoadRunner version 9.5 a protocol SDK
became available to allow customers to build a custom integration for applications not supported in the
as-shipped release of LoadRunner. 2010/2011 saw the beta deployment of a cloud-based version of
LoadRunner on Amazon Web Services.
LoadRunner's primary development language is C, initially chosen for its light weight and its availability
across the variety of load generator platforms supported by the tool (UNIX and Windows). With the
movement of UNIX vendors away from shipping a compiler with each copy of the UNIX operating
system, Mercury moved towards the inclusion of LCC, a lightweight cross-platform C compiler.
While C is the primary language of the tool, LoadRunner supports a number of additional languages for
script creation:
 VB
 VB Script
 Java
 JavaScript
 C#
The degree to which one scripting language may be used over another is governed by the protocol or
interface in use/under test.
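For illustration, the sketch below shows the general shape of a LoadRunner web (HTTP/HTML) Vuser Action section written in C. The transaction name and URL are hypothetical, the exact arguments accepted by web_url vary between versions, and the script is meant to run inside VuGen (which supplies the declarations for these functions), so treat it as a sketch rather than a canonical script.

/* Sketch of a LoadRunner web (HTTP/HTML) Vuser Action section.
 * The transaction name and URL are illustrative only. */
Action()
{
    lr_start_transaction("load_home_page");

    web_url("home",                        /* step name */
            "URL=http://www.example.com/", /* hypothetical target */
            "Resource=0",
            "Mode=HTML",
            LAST);

    lr_end_transaction("load_home_page", LR_AUTO);

    lr_think_time(5);    /* simulate a user pause between actions */

    return 0;
}

VuGen-recorded scripts follow this same structure: transactions bracket the steps whose response times are to be reported, and think time models the pause of a real user.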
With its wide range of protocol and language support the sweet spot for LoadRunner has been the
enterprise sale, where Gartner and other analysts have recognized a dominant market position for
LoadRunner in the past. LoadRunner faces market challenges from smaller commercial providers and
open source tools that cover single interfaces or subsets of interfaces of LoadRunner, but not the
complete suite that is currently supported. LoadRunner also benefits from a robust ecosystem of web
sites and support locations, owing to its longevity and position in the market.
Cost is the most common criticism of LoadRunner, not technical capability.
The market for LoadRunner talent is a challenging one. While many resumes exist on the market, the
vast majority of these are tied to individuals with few foundation or tool skills. The
performance market over the past ten years, from 2001 to 2010, has experienced an odd economic
condition: while the market has been expanding and the number of suppliers has not been able to keep pace,
compensation rates have been dropping. Economists note that in a resource-scarce environment the
price of a resource will rise to reflect its scarcity. This has not happened in the market for performance
testing skills. Dropping rates in a resource-scarce environment reflect an average value of the resource
which is declining faster than the market is expanding.
The economic contraction from 2009 onward has impacted the mobility of mature LoadRunner
practitioners in the market, resulting in a high number who are location-locked and in some LoadRunner
positions going empty for up to a year because of a lack of local talent to fill the need. Remote work
models have been increasingly used to allow mature performance test personnel to fill the
need for skills at distant organizations. Lead times to find qualified individuals for staff positions
extend to months, as solid engineers have 'gone to ground' in fixed positions to ride out the down
economic cycle.
The ability to find skilled individuals to staff a performance test practice is the single largest
determinant of a positive or negative return on investment for tool purchase and deployment whether
that tool is commercial or open source. Unskilled individuals take five to ten times longer to deliver a
given test artifact at a lower overall level of quality. This results in an introduction of risk into the last
risk gate prior to the deployment of a new application.

JMETER

Apache JMeter is an Apache project that can be used as a load testing tool for analyzing and measuring
the performance of a variety of services, with a focus on web applications.
JMeter can be used as a unit test tool for JDBC database connections, FTP, LDAP, web services, JMS,
HTTP, generic TCP connections and OS-native processes. JMeter can also be configured as a
monitor, although this is typically considered an ad hoc solution in lieu of advanced monitoring
solutions. It can be used for some functional testing as well.
JMeter supports variable parameterization, assertions (response validation), per thread cookies,
configuration variables and a variety of reports.
The JMeter architecture is based on plugins. Most of its "out of the box" features are implemented with
plugins, and off-site developers can easily extend JMeter with custom plugins.
Apache JMeter is a 100% pure Java desktop application designed to load test client/server
software (such as a web application). It may be used to test performance on both static and
dynamic resources such as static files, Java Servlets, CGI scripts, Java objects, databases, FTP
servers, and more. JMeter can be used to simulate a heavy load on a server, network or object in order to
test its strength or to analyze overall performance under different load types.
Additionally, JMeter can help you regression test your application by letting you create test scripts with
assertions to validate that your application is returning the results you expect. For maximum flexibility,
JMeter lets you create these assertions using regular expressions.
Note, however, that JMeter is not a browser; it works at the protocol level.
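To make "works at the protocol level" concrete, the following minimal, self-contained C sketch (not JMeter code) issues a few HTTP GET requests over a raw TCP socket and measures each response time, which is essentially what a protocol-level load generator does for each virtual user. The host name www.example.com is hypothetical.

/* Minimal protocol-level load generation sketch (POSIX sockets).
 * Not JMeter itself; it only illustrates driving a server at the
 * HTTP protocol level rather than through a browser. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <time.h>

int main(void) {
    const char *host = "www.example.com";   /* hypothetical target */
    const char *request =
        "GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n";

    for (int i = 0; i < 5; i++) {            /* five sequential samples */
        struct addrinfo hints = {0}, *res;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, "80", &hints, &res) != 0) return 1;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        if (connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
            send(fd, request, strlen(request), 0);
            char buf[4096];
            long total = 0, n;
            while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
                total += n;                   /* drain the response */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                        (t1.tv_nsec - t0.tv_nsec) / 1e6;
            printf("sample %d: %ld bytes in %.1f ms\n", i + 1, total, ms);
        }
        close(fd);
        freeaddrinfo(res);
    }
    return 0;
}

No HTML is rendered and no JavaScript is executed; only the request/response exchange is timed, which is exactly the distinction drawn above between a protocol-level tool and a browser.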

TEST DIRECTOR

The automated testing tool TestDirector simplifies test management by helping you organize and
manage all phases of the software testing process, including planning, creating tests, executing tests,
and tracking defects.
With TestDirector, you maintain a project's database of tests. From a project, you can build test sets:
groups of tests executed to achieve a specific goal.
For example, you can create a test set that checks a new version of the software, or one that checks a
specific feature.
As you execute tests, TestDirector lets you report defects detected in the software. Defect records are
stored in a database where you can track them until they are resolved in the software.
TestDirector works together with WinRunner, Mercury Interactive's automated GUI testing tool.
WinRunner enables you to create and execute automated test scripts. You can include WinRunner
automated tests in your project and execute them directly from TestDirector.
TestDirector activates WinRunner, runs the tests, and displays the results. TestDirector also offers
integration with other Mercury Interactive testing tools (LoadRunner, Visual API, Astra QuickTest,
QuickTest 2000, and XRunner), as well as with third-party and custom testing tools.

The TestDirector workflow consists of three main phases:

 Planning Tests
 Running Tests
 Tracking Defects

In each phase you perform several tasks:
Planning Tests
Divide your application into test subjects and build a project.
1. Define your testing goals.
Examine your application, system environment, and testing resources to determine what and how you
want to test.
2. Define test subjects.
Define test subjects by dividing your application into modules or functions to be tested. Build a test
plan tree that represents the hierarchical relationship of the subjects.
3. Define tests.
Determine the tests you want to create and add a description of each test to the test plan tree.
4. Design test steps.
Break down each test into steps describing the operations to be performed and the points you want to
check. Define the expected outcome of each step.
5. Automate tests.
Decide whether to perform each test manually or to automate it. If you choose to perform a test
manually, the test is ready for execution as soon as you define the test steps. If you choose to automate
a test, use WinRunner to create automated test scripts in Mercury Interactive's Test Script Language
(TSL).
6. Analyze the test plan.
Generate reports and graphs to help you analyze your test plan. Determine whether the tests in the
project will enable you to successfully meet your goals.

Running Tests
Create test sets and perform test runs.
1. Create test sets.
Create test sets by selecting tests from the project. A test set is a group of tests you execute to meet a
specific testing goal.
2. Run test sets.
Schedule test execution and assign tasks to testers. Run the manual and/or automated tests in the test
sets.
3. Analyze the testing progress.
Generate reports and graphs to help you determine the progress of test execution.

Tracking Defects
Report defects detected in your application and track how repairs are progressing.
1. Report defects.
Report defects detected in the software. Each new defect is added to the defect database.
2. Track defects.
Review all new defects reported to the database and decide which ones should be repaired. Test a new
version of the application after the defects are corrected.
3. Analyze defect tracking.
Generate reports and graphs to help you analyze the progress of defect repairs, and to help you
determine when to release the application.

What is a Test Set?


After planning and creating a project with tests, you can start running the tests on your application.
However, since a project database often contains hundreds or thousands of tests, deciding how to
manage the test run process may seem overwhelming.
TestDirector helps you organize test runs by building test sets. A test set is a subset of the tests in your
project, run together in order to achieve a specific goal. You build a test set by selecting tests from the
test plan tree, and assigning this group of tests a descriptive name. You can then run the test set at any
time, on any build of your application.
Do You Keep Track of Defects?
Locating and repairing software defects is an essential phase in software development. Defects can be
detected and reported by software developers, testers, and end users in all stages of the testing process.
Using TestDirector, you can report flaws in your application, and track data derived from defect
reports.

When a defect is detected in the software:


a) Send a defect report to the TestDirector database.
b) Review the defect and assign it to a member of the development team.
c) Repair the open defect.
d) Test a new build of the application after the defect is corrected. If the defect does not reoccur, change
the status of the defect.
e) Generate reports and graphs to help you analyze the progress of the defects in your TestDirector
project.

Reporting a New Defect
You can report a new defect at any stage of the testing process by adding a defect record to the project
database. Each defect is tracked through four stages: New, Open, Fixed, and Closed. When you initially
report a defect to the project database, you assign it the status New.

