
Testing Vocabulary

Every profession has its own vocabulary. To learn a profession, the first and crucial step is
to master its vocabulary; the entire knowledge of a profession is compressed into its
vocabulary.
Take our own software testing profession: while communicating with our colleagues, we
frequently use terms like 'regression testing' and 'system testing'. Now imagine communicating
the same thing to a person who is not in our profession or who doesn't understand our testing
vocabulary; we would need to explain each and every term in detail. Communication becomes
difficult and painful. To speak the language of testing, you need to learn its vocabulary.
Below is a large collection of testing vocabulary.

Affinity Diagram: A group process that takes large amounts of language data, such as that
produced by brainstorming, and divides it into categories.

Audit: This is an inspection/assessment activity that verifies compliance with plans, policies
and procedures and ensures that resources are conserved.

Baseline: A quantitative measure of the current level of performance.

Benchmarking: Comparing your company's products, services or processes against best
practices or competitive practices, to help define superior performance of a product,
service or support process.

Black-box Testing: A test technique that focuses on testing the functionality of the
program, component or application against its specifications, without knowledge of how the
system is constructed.

Boundary value analysis: A data selection technique in which test data is chosen from the
"boundaries" of the input or output domain classes, data structures and procedure
parameters. Choices often include the actual minimum and maximum boundary values, the
maximum value plus or minus one and the minimum value plus or minus one.

Branch Testing: A test method that requires that each possible branch of each decision be
executed at least once.
Brainstorming: A group process for generating creative and diverse ideas.

Bug: A catchall term for all software defects or errors.

Certification testing: Acceptance of software by an authorized agent after the software
has been validated by the agent or after its validity has been demonstrated to the agent.

Checkpoint (or verification point): Expected behaviour of the application, which must be
validated against the actual behaviour after a certain action has been performed on the
application.

Client: The customer that pays for the product received and receives the benefit from the
use of the product.

Condition Coverage: A white-box testing technique that measures the number or percentage
of condition outcomes (the individual true/false outcomes of the conditions within each
decision) covered by the test cases designed. 100% condition coverage would indicate that
every possible outcome of each condition had been executed at least once during testing.

Configuration Management Tools
Tools that are used to keep track of changes made to systems and all related artifacts.
These are also known as version control tools.

Configuration testing: Testing of an application on all supported hardware and software
platforms. This may include various combinations of hardware types, configuration settings
and software versions.

Completeness: A product is said to be complete if it has met all requirements.

Consistency: Adherence to a given set of rules.

Correctness: The extent to which software is free from design and coding defects. It is
also the extent to which software meets the specified requirements and user objectives.

Cost of Quality: Money spent above and beyond expected production costs to ensure that
the product the customer receives is a quality product. The cost of quality includes
prevention, appraisal, and correction or repair costs.

Conversion Testing: Validates the effectiveness of data conversion processes, including
field-to-field mapping and data translation.

Customer: The individual or organization, internal or external to the producing organization,
that receives the product.

Cyclomatic complexity: The number of decision statements plus one.
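To illustrate the definition, here is a small, hypothetical Java method (the class and method names are invented for this example): it contains two decision statements, so by the rule above its cyclomatic complexity is 2 + 1 = 3.

```java
// Hypothetical example: counting decision statements to estimate cyclomatic complexity.
public class DiscountCalculator {

    // This method contains two decision statements (the if and the while),
    // so its cyclomatic complexity is 2 + 1 = 3.
    public double applyDiscount(double price, int couponCount) {
        double total = price;
        if (price > 100.0) {              // decision 1
            total = total * 0.90;
        }
        while (couponCount > 0) {         // decision 2
            total = total - 5.0;
            couponCount--;
        }
        return total;
    }
}
```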

Debugging: The process of analysing and correcting syntactic, logic and other errors
identified during testing.

Decision Coverage: A white-box testing technique that measures the number or percentage
of decision directions executed by the test cases designed. 100% decision coverage would
indicate that all decision directions had been executed at least once during testing.
Alternatively, each logical path through the program can be tested.

Decision Table
A tool for documenting the unique combinations of conditions and associated results in
order to derive unique test cases for validation testing.

Defect Tracking Tools
Tools for documenting defects as they are found during testing and for tracking their
status through to resolution.

Desk Check: A verification technique conducted by the author of the artifact to verify the
completeness of their own work. This technique does not involve anyone else.

Dynamic Analysis: Analysis performed by executing the program code. Dynamic analysis
executes or simulates a development phase product and detects errors by analyzing the
response of the product to sets of input data.

Entrance Criteria: Required conditions and standards for work product quality that must be
present or met for entry into the next stage of the software development process.

Equivalence Partitioning: A test technique that utilizes a subset of data that is
representative of a larger class. This is done in place of undertaking exhaustive testing of
each value of the larger class of data.

Error or defect: 1. A discrepancy between a computed, observed or measured value or
condition and the true, specified or theoretically correct value or condition. 2. Human action
that results in software containing a fault (e.g., omission or misinterpretation of user
requirements in a software specification, or incorrect translation or omission of a requirement
in the design specification).

Error Guessing: A test data selection technique for picking values that seem likely to cause
defects. This technique is based upon the theory that test cases and test data can be
developed based on the intuition and experience of the tester.

Exhaustive Testing: Executing the program through all possible combination of values for
program variables.

Exit criteria: Standards for work product quality which block the promotion of incomplete
or defective work products to subsequent stages of the software development process.

Flowchart
A pictorial representation of data flow and computer logic. It is frequently easier to
understand and assess the structure and logic of an application system by developing a
flowchart than by attempting to understand narrative descriptions or verbal explanations.
Flowcharts for systems are normally developed manually, while flowcharts of programs can
often be produced automatically by tools.
Force Field Analysis
A group technique used to identify both driving and restraining forces that
influence a current situation.

Formal Analysis
Technique that uses rigorous mathematical techniques to analyze the
algorithms of a solution for numerical properties, efficiency, and correctness.

Functional Testing
Testing that ensures all functional requirements are met without regard to the final
program structure.

Histogram
A graphical description of individually measured values in a data set that is organized
according to the frequency or relative frequency of occurrence. A histogram illustrates the
shape of the distribution of individual values in a data set along with information regarding
the average and variation.

Inspection
A formal assessment of a work product conducted by one or more qualified independent
reviewers to detect defects, violations of development standards, and other problems.
Inspections involve authors only when specific questions concerning deliverables exist. An
inspection identifies defects, but does not attempt to correct them. Authors take
corrective actions and arrange follow-up reviews as needed.

Integration Testing
This test begins after two or more programs or application components have been
successfully unit tested. It is conducted by the development team to validate the
interaction or communication/flow of information between the individual components which
will be integrated.

Life Cycle Testing


The process of verifying the consistency, completeness, and correctness of software at
each stage of the development life cycle.

Pass/Fail Criteria
Decision rules used to determine whether a software item or feature passes or fails a test.

Path Testing
A test method satisfying the coverage criteria that each logical path through the program
be tested. Often, paths through the program are grouped into a finite set of classes and
one path from each class is tested.

Performance Test
Validates that both the online response time and batch run times meet the defined
performance requirements.

Policy
Managerial desires and intents concerning either process (intended objectives) or products
(desired attributes).

Population Analysis
Analyzes production data to identify, independent from the specifications, the types and
frequency of data that the system will have to process/produce. This verifies that the
specs can handle types and frequency of actual data and can be used to create validation
tests.

Procedure
The step-by-step method followed to ensure that standards are met.

Process
1. The work effort that produces a product. This includes efforts of people and equipment
guided by policies, standards, and procedures.
2. A statement of purpose and an essential set of practices (activities) that address that
purpose.

Proof of Correctness
The use of mathematical logic techniques to show that a relationship between program
variables assumed true at program entry implies that another relationship between program
variables holds at program exit.

Quality
A product is a quality product if it is defect free. To the producer, a product is a quality
product if it meets or conforms to the statement of requirements that defines the product.
This statement is usually shortened to: quality means meets requirements. From a
customer’s perspective, quality means “fit for use.”

Quality Assurance (QA)
Deals with 'prevention' of defects in the product being developed; it is associated with a
process. The set of support activities (including facilitation, training, measurement, and
analysis) needed to provide adequate confidence that processes are established and
continuously improved to produce products that meet specifications and are fit for use.

Quality Control (QC)
Its focus is defect detection and removal. Testing is a quality control activity.

Quality Improvement
To change a production process so that the rate at which defective products (defects) are
produced is reduced. Some process changes may require the product to be changed.

Recovery Test
Evaluates the contingency features built into the application for handling
interruptions and for returning to specific points in the application processing cycle,
including checkpoints, backups, restores, and restarts. This test also assures that disaster
recovery is possible.

Regression Testing
Testing of a previously verified program or application following program
modification for extension or correction to ensure no new defects have been introduced.

Risk Matrix
Shows the controls within application systems used to reduce the identified risk, and in
what segment of the application those risks exist. One dimension of the matrix is the risk,
the second dimension is the segment of the application system, and within the matrix at the
intersections are the controls. For example, if a risk is “incorrect input” and the systems
segment is “data entry,” then the intersection within the matrix would show the controls
designed to reduce the risk of incorrect input during the data entry segment of the
application system.

Scatter Plot Diagram
A graph designed to show whether there is a relationship between two changing variables.

Standards
The measure used to evaluate products and identify nonconformance. The basis upon which
adherence to policies is measured.

Statement of Requirements
The exhaustive list of requirements that define a product.

Statement Testing
A test method that executes each statement in a program at least once during program
testing.

Static Analysis
Analysis of a program that is performed without executing the program. It
may be applied to the requirements, design, or code.

Stress Testing
This test subjects a system, or components of a system, to varying
environmental conditions that defy normal expectations. For example, high transaction
volume, large database size or restart/recovery circumstances. The intention of stress
testing is to identify constraints and to ensure that there are no performance problems.

Structural Testing
A testing method in which the test data is derived solely from the program structure.

Stub
Special code segments that, when invoked by a code segment under testing, simulate the
behavior of designed and specified modules not yet constructed.
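A minimal sketch of the idea in Java, with hypothetical names: the stub stands in for a credit-rating module that has not been built yet, so the calling component can still be unit tested now.

```java
// Hypothetical interface for a module that has not been built yet.
interface CreditRatingService {
    int ratingFor(String customerId);
}

// Stub: a special code segment that simulates the behaviour of the
// designed-but-unbuilt module so the calling code can be tested today.
class CreditRatingServiceStub implements CreditRatingService {
    @Override
    public int ratingFor(String customerId) {
        return 750;   // canned answer standing in for the real implementation
    }
}

// The component under test is exercised against the stub instead of the real service.
class LoanApprover {
    private final CreditRatingService ratings;

    LoanApprover(CreditRatingService ratings) {
        this.ratings = ratings;
    }

    boolean approve(String customerId) {
        return ratings.ratingFor(customerId) >= 700;
    }
}
```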

System Test
During this event, the entire system is tested to verify that all functional,
information, structural and quality requirements have been met.

Test Case
Test cases document the input, expected results, and
execution conditions of a given test item.

Test Plan
A document describing the intended scope, approach, resources, and schedule of testing
activities. It identifies test items, the features to be tested, the testing tasks, the
personnel performing each task, and any risks requiring contingency planning.

Test Scripts
A tool that specifies an order of actions that should be performed during a test session.
The script also contains expected results. Test scripts may be manually prepared using
paper forms, or may be automated using
capture/playback tools or other kinds of automated scripting tools.

Test Suite Manager
A tool that allows testers to organize test scripts by function or other grouping.

Unit Test
Testing individual programs, modules, or components to demonstrate that the work package
executes per specification, and validate the design and technical quality of the application.
The focus is on ensuring that the detailed logic within the component is accurate and
reliable according to pre-determined specifications. Testing stubs or drivers may be used to
simulate behavior of interfacing modules.

Usability Test
The purpose of this event is to review the application user interface and other human
factors of the application with the people who will be using the application. This is to ensure
that the design (layout and sequence, etc.) enables the business functions to be executed as
easily and intuitively as possible. This review includes assuring that the user interface
adheres to documented User Interface standards, and should be conducted early in the
design stage of development. Ideally, an application prototype is used to walk the client
group through various business scenarios, although paper copies of screens, windows, menus,
and reports can be used.

User Acceptance Test
User Acceptance Testing (UAT) is conducted to ensure that the system meets the needs of
the organization and the end user/customer. It validates that the system will work as
intended by the user in the real world, and is based on real-world business scenarios, not
system requirements. Essentially, this test validates that the right system was built.

Validation
Determination of the correctness of the final program or software produced from a
development project with respect to the user needs and requirements.

Verification
1. The process of determining whether the products of a given phase of the software
development cycle fulfill the requirements established during the previous phase.
2. The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and
documenting whether items, processes, services, or documents conform to specified
requirements.

Walkthroughs
During a walkthrough, the producer of a product “walks through” or
paraphrases the products content, while a team of other individuals follow along. The team’s
job is to ask questions and raise issues about the product that may lead to defect
identification.

White-box Testing
A testing technique that assumes that the path of the logic in a program unit or component
is known. White-box testing usually consists of testing paths, branch by branch, to produce
predictable results. This technique is usually used during tests executed by the
development team, such as Unit or Component testing.

Test Management

Planning a test project

A test project includes the creation of a test plan, collecting test scenarios, writing test
cases, executing test cases, evaluating and reporting the test results and managing the
software testers.

It is also imperative to establish which items are in the scope of testing and which are out of
scope, and to manage scheduling and training. We need to establish a strategy to be followed
in each of the testing phases. All of these activities come under test management.

Testing Methodology

The best approach is to conduct end-to-end testing. Testing methodology is applied in two
major phases: functional testing and non-functional testing.
The context in which a system is deployed will also influence the testing methodology:
tests for a commercial software package sold to consumers will differ from tests for
embedded software, or for a server solution behind the website of a one-off logistics
solution.

Functional Testing

1. Unit Testing: Testing each component of the application separately.
2. System Testing: Testing the system as a whole.
3. Integration Testing: Testing the system with other interfaces.
4. Regression Testing: Testing changes made to the system and ensuring that new problems
are not introduced as a result of the changes.
5. User Acceptance Testing: Testing the system to ensure it meets the user requirements.
6. Sociability Testing: Testing the system together with other applications on the same
platform.
7. Security Testing: Testing that the software is protected against external damage and
unauthorized access.
8. Performance Testing: Testing that is performed to determine how fast some aspect of a
system performs under a particular workload.

Non Functional Testing

1. Usability Testing: Usability testing focuses on determining whether the product is easy
to learn, satisfying to use and contains the functionality that the users desire.
2. Stress and Performance Testing: The purpose of this testing is to predict the system's
behavior and performance under load.
3. Accessibility Testing: Determines the extent to which the end user can access and
interact with the application.

Automated Testing

One of the purposes of automated testing is to ease regression testing. Once the system
is stable, automated testing can start.

Once automated tests exist, they can and should be used to help build better and more
stable software, ready for any manual testing and user acceptance testing.

It is much better if the automated tests can be run overnight on builds of the software.
This gives a clearer indication of the stability of the build the following morning,
hopefully leading to more time being available for the rest of the testing and development.

Test Framework
A set of ideas and tools that revolve around optimizing the test effort. When implemented
using the supporting tools, these ideas make the maintenance of automated test scripts and
other test entities easier.

JUnit test tool

The JUnit tool is designed to assist in unit testing of Java objects. Can it be useful in
testing a non-Java environment (e.g., by wrapping the test target inside a Java object)?
One possible approach is sketched below.
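A hedged sketch of that wrapping idea, under assumptions: the non-Java target is taken to be an external command-line program (the name checksum-tool and its expected output are hypothetical), wrapped inside a Java object that JUnit 4 can drive.

```java
import static org.junit.Assert.assertEquals;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.junit.Test;

// Hypothetical wrapper: the non-Java test target (an external command-line
// program called "checksum-tool") is invoked from a Java object so that JUnit
// can drive it and check its output.
public class ChecksumToolTest {

    private String runTool(String argument) throws Exception {
        Process p = new ProcessBuilder("checksum-tool", argument).start();
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String firstLine = out.readLine();
            p.waitFor();
            return firstLine;
        }
    }

    @Test
    public void emptyInputProducesKnownChecksum() throws Exception {
        // The expected value is illustrative only; a real test would use the
        // documented checksum for the given input.
        assertEquals("00000000", runTool(""));
    }
}
```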

From Main Frame testing to ebusiness testing

• Testing the Mainframe System
• Testing the Client Server Application
• Testing the Internet Application
• Testing Web Services

Component Performance Criteria

Testing is a major part of agile development processes, but until recently, the vast
majority of effort has been focused on functional testing. In the world of integrated
systems, many projects are undertaken to speed up processes. Traditional development
processes teach that optimisation should not be carried out until a performance problem is
identified, thus aiding maintainability. Unfortunately, this has led to the postponement of
performance testing until after integration testing. This can often make it too late to
address fundamental problems as time is short or key resources have moved on to other
projects.

Component Performance Criteria is about breaking down the performance requirement for
the whole system and attributing it to individual components. Admittedly, it does not
guarantee the absence of performance problems later in the project, but it will highlight
if they are already there.

Example: if a system must process five transactions per second and consists of 10 major
components, and an individual component has been measured to take 0.5 seconds to process
a single transaction and can only process one transaction at a time, a bottleneck is already
present and resolution of the problem can begin immediately.
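The arithmetic of that example can be written as a small check; the numbers are taken from the example above and the class name is hypothetical.

```java
// Sketch of the arithmetic in the example above: compare a component's measured
// throughput with the overall target the whole system must meet.
public class ThroughputCheck {

    public static void main(String[] args) {
        double requiredSystemTps = 5.0;      // system must process 5 transactions/second
        double componentSecondsPerTxn = 0.5; // measured time for one transaction
        int parallelTxns = 1;                // component handles one transaction at a time

        double componentTps = parallelTxns / componentSecondsPerTxn; // = 2.0

        if (componentTps < requiredSystemTps) {
            System.out.printf(
                "Bottleneck: component sustains %.1f tps but the system needs %.1f tps%n",
                componentTps, requiredSystemTps);
        }
    }
}
```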

In order to do this though, performance testing must be carried out frequently and
painlessly. This can only be done with automation.

What is Quality?
How do we define quality?
Many quality pioneers have defined quality in different ways.
A quality product is often defined as one that meets product requirements. But quality can
only be seen through the customer's eyes, so the most important definition of quality is
meeting customer needs: understanding the customer's requirements and expectations and
exceeding those expectations. If the customer is satisfied by using the product, then it is a
quality product.

What's the difference between meeting product requirements and meeting customer
needs? Aren't customer needs translated into product requirements?
Not always. Though our aim is to accurately capture customer needs in the requirements and
build a product that satisfies those needs, we sometimes fail to do so for the following
reasons:
- Customers fail to accurately communicate their exact needs
- Captured requirements can be misinterpreted

Can't we define a quality product as one that contains no bugs/defects?

Quality is much more than the absence of defects/bugs. Consider this: even if a product has
zero defects, if its usability is poor, i.e. it is difficult to learn and operate, then it is not a
quality product.

If the product has some defects, can it still be called a quality product?
It depends on the nature of those bugs, but in some cases, even though a product has bugs,
it can still be called a quality product.
Unless the product is very critical, aiming for zero defects is not always cost effective. We
should aim for 100% defect 'detection', but given budget, time and resource constraints,
we may still release the product with some unfixed or open bugs. If the open bugs cause no
loss to the customer, then it can still be called a quality product.

Is quality only the testers' responsibility?

No. Quality is everybody's responsibility, including the customer's. We testers identify the
deviations and report them, that's it. There are many factors that impact quality, such as
maintainability, reusability, flexibility and portability, which the testers can't validate.
Testers can only validate the correctness, reliability, usability and interoperability of a
product and report the deviations.

When is the right time to catch a bug?

As soon as possible. The cost of fixing a bug keeps increasing exponentially as product
development progresses. For example, the cost of fixing a design bug identified during
system testing is much higher than fixing it had it been identified during the design phase
itself, because now you not only have to rectify the design but also the code, the
corresponding documents and any code that depends on it.

Are there any other quality control practices apart from testing?
Yes. Inspections, design and code walkthroughs, reviews, etc.

What are software quality factors?
Software quality factors are attributes of the software that, if they are wanted and not
present, pose a risk to the success of the software. There are 11 main factors and their
definitions are given below. The priority and importance of these attributes changes from
product to product; for example, if the product being developed needs to change quite
frequently, then flexibility and reusability need to be given priority.
The following are the quality factors:

Correctness: Extent to which a program satisfies its requirements.

Reliability: Extent to which a program can be expected to perform its intended function
with required precision.

Efficiency: The amount of computing resources and code required by a program to perform
a function.

Integrity: Extent to which access to software or data by unauthorized persons can be
controlled.

Usability: Effort required to learn, operate, prepare input for, and interpret the output of
a program.

Maintainability: Effort required to locate and fix an error in an operational program.

Testability: Effort required to test a program to ensure that it performs its intended
function.

Flexibility: Effort required to modify an operational program.

Portability: Effort required to transfer software from one configuration to another.

Reusability: Extent to which a program can be used in other applications – related to the
packaging and scope of the functions that programs perform.

Interoperability: Effort required to couple one system with another.

How to reduce the amount spent to ensure and build quality?
or
How to reduce the cost of quality?

The cost of quality includes the total amount spent on preventing errors and on identifying
and correcting them.
To reduce this cost, try to build a product that has few or no defects even before it goes
to the testing phase; to achieve this you should spend more money and effort on trying to
prevent errors from going into the product. You must concentrate on building efficient and
effective processes and keep improving them continuously by identifying their weaknesses.
You may not reap great benefits immediately, but over the long run you can make significant
savings by reducing the cost of quality.

How to reduce the cost of fixing a bug?

Catch it as early as possible. As the development process progresses, the cost of fixing a
bug keeps increasing exponentially. Practice life cycle testing.

Software Development Life Cycle

In the traditional waterfall model, testing comes at the very end of the development process.
No testing is done during the requirements gathering, design and development phases.
Defects identified during this disconnected testing phase are very costly to fix, which is
this model's biggest disadvantage.
Life cycle testing, or V testing, aims at catching defects as early as possible and thus
reduces the cost of fixing them. It achieves this by continuously testing the system during
all phases of the development process rather than limiting testing to the last phase.

The life cycle testing can be best accomplished by the formation of a separate test team.

When the project starts, both the system development process and the system test process
begin. The team that is developing the system begins the systems development process and
the team that is conducting the system test begins planning the system test process. Both
teams start at the same point using the same information. The systems development team
analyzes and documents the requirements for developmental purposes. The test team will
likewise use those same requirements, but for the purpose of testing the system. At
appropriate points during the development process, the test team will test the work
products of development in an attempt to uncover defects.

The following is the software testing process which follows life cycle testing

Requirements Gathering phase:
Verify whether the requirements captured are true user needs
Verify that the requirements captured are complete, unambiguous, accurate and
non-conflicting with each other

Design phase:
Verify whether the design achieves the objectives of the requirements as well as the design
being effective and efficient
Verification Techniques: Design walkthroughs, Design Inspections

Coding phase:
Verify that the design is correctly translated to code
Verify that coding is as per the company's standards and policies
Verification Techniques: Code walkthroughs, code inspections
Validation Techniques: Unit testing and integration testing

System Testing phase:
Execute test cases
Log bugs and track them to closure

User Acceptance phase:
Users validate the applicability and usability of the software in performing their day-to-day
operations.

Maintenance phase:
After the software is implemented, any changes to the software must be thoroughly tested
and care should be taken not to introduce regression issues.

Life cycle testing is also called V testing. The project's Do and Check procedures slowly
converge from start to finish, which indicates that as the Do team attempts to implement a
solution, the Check team concurrently develops a process to minimize or eliminate the risk.
If the two groups work closely together, the high level of risk at a project's inception will
decrease to an acceptable level by the project's conclusion.

Types of Testing
Black box testing - not based on any knowledge of internal design or code. Tests are based
on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code. Tests
are based on coverage of code statements, branches, paths, conditions.

Unit testing - Unit is the smallest compilable component. A unit typically is the work of one
programmer. This unit is tested in isolation with the help of stubs or drivers. It is typically
done by the programmer and not by testers.

Incremental integration testing - continuous testing of an application as new functionality
is added; requires that various aspects of an application's functionality be independent
enough to work separately before all parts of the program are completed, or that test
drivers be developed as needed; done by programmers or by testers.

Integration testing - testing of combined parts of an application to determine if they
function together correctly. The 'parts' can be code modules, individual applications, client
and server applications on a network, etc. This type of testing is especially relevant to
client/server and distributed systems.

Functional testing - black-box testing aimed at validating the functional requirements of an
application; this type of testing should be done by testers.

System testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.

End-to-end testing - similar to system testing but involves testing of the application in an
environment that mimics real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if appropriate.
Even the transactions performed mimic the end users' usage of the application.

Sanity testing - typically an initial testing effort to determine if a new software version is
performing well enough to accept it for a major testing effort. For example, if the new
software is crashing systems every 5 minutes, bogging down systems to a crawl, or
destroying databases, the software may not be in a 'sane' enough condition to warrant
further testing in its current state.

Smoke testing - The general definition (related to hardware) of smoke testing is: a safe,
harmless procedure of blowing smoke into parts of the sewer and drain lines to detect
sources of unwanted leaks and sources of sewer odors.
In relation to software, smoke testing is non-exhaustive software testing, ascertaining that
the most crucial functions of a program work, but not bothering with finer details.

Static testing - Test activities that are performed without running the software are called
static testing. Static testing includes code inspections, walkthroughs, and desk checks.

Dynamic testing - test activities that involve running the software are called dynamic
testing.
Regression testing - Testing of a previously verified program or application following
program modification for extension or correction, to ensure no new defects have been
introduced. Automated testing tools can be especially useful for this type of testing.

Acceptance testing - final testing based on specifications of the end-user or customer, or
based on use by end-users/customers over some limited period of time.

Load testing - Load testing is a test whose objective is to determine the maximum
sustainable load the system can handle. The load is varied from a minimum (zero) to the
maximum level the system can sustain without running out of resources or having
transactions suffer excessive (application-specific) delay.

Stress testing - Stress testing is subjecting a system to an unreasonable load while denying
it the resources (e.g., RAM, disc, mips, interrupts) needed to process that load. The idea is
to stress a system to the breaking point in order to find bugs that will make that break
potentially harmful. The system is not expected to process the overload without adequate
resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data).
The load (incoming transaction stream) in stress testing is often deliberately distorted so
as to force the system into resource depletion.

Software Testing Techniques


1. Black-Box Testing Techniques

When creating black-box test cases, the input data used is critical. Three successful
techniques for managing the amount of input data required include:

Equivalence Partitioning
An equivalence class is a subset of data that is representative of a larger class. Equivalence
partitioning is a technique for testing equivalence classes rather than undertaking
exhaustive testing of each value of the larger class. For example, a program which edits
credit limits within a given range (1,000 - 1,500) would have three equivalence classes:
< 1,000 (invalid)
Between 1,000 and 1,500 (valid)
> 1,500 (invalid)

Boundary Analysis
A technique that consists of developing test cases and data that focus on the input and
output boundaries of a given function. In the same credit limit example, boundary analysis
would test:
Low boundary +/- one (999 and 1,001)
On the boundary (1,000 and 1,500)
Upper boundary +/- one (1,499 and 1,501)
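A small JUnit 4 sketch of the test data implied by the two techniques above, for the same credit-limit example; the validator class is hypothetical and is included inline only to make the example self-contained.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Illustrative JUnit 4 tests for the credit-limit example (valid range 1,000 - 1,500).
public class CreditLimitValidatorTest {

    // Minimal validator assumed only so the example compiles on its own.
    static class CreditLimitValidator {
        static boolean isValid(int limit) {
            return limit >= 1_000 && limit <= 1_500;
        }
    }

    // Equivalence partitioning: one representative value from each class.
    @Test
    public void equivalenceClasses() {
        assertFalse(CreditLimitValidator.isValid(500));    // < 1,000 (invalid)
        assertTrue(CreditLimitValidator.isValid(1_250));   // 1,000 - 1,500 (valid)
        assertFalse(CreditLimitValidator.isValid(2_000));  // > 1,500 (invalid)
    }

    // Boundary analysis: on each boundary and one either side of it.
    @Test
    public void boundaryValues() {
        assertFalse(CreditLimitValidator.isValid(999));
        assertTrue(CreditLimitValidator.isValid(1_000));
        assertTrue(CreditLimitValidator.isValid(1_001));
        assertTrue(CreditLimitValidator.isValid(1_499));
        assertTrue(CreditLimitValidator.isValid(1_500));
        assertFalse(CreditLimitValidator.isValid(1_501));
    }
}
```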

Error Guessing
Test cases can be developed based upon the intuition and experience of the tester. For
example, where one of the inputs is a date, a tester may try February 29, 2000.

2. White-Box Testing Techniques

White-box testing assumes that the path of logic in a unit or program is known. White-box
testing consists of testing paths, branch by branch, to produce predictable results. The
following are white-box testing techniques:

Statement Coverage
Execute all statements at least once.

Decision Coverage
Execute each decision direction at least once.

Condition Coverage
Execute each decision with all possible outcomes at least once.

Decision/Condition Coverage
Execute all possible combinations of condition outcomes in each decision. Treat all iterations
as two-way conditions exercising the loop zero times and one time.
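To contrast the first three criteria, here is a small hypothetical Java method followed by comments noting which test inputs each coverage level would require; the names and values are invented for illustration.

```java
// Hypothetical method used to contrast the coverage criteria listed above.
public class ShippingRules {

    // Free shipping when the order is large AND the customer is a member.
    public static boolean freeShipping(double orderTotal, boolean isMember) {
        boolean free = false;
        if (orderTotal > 50.0 && isMember) {   // one decision containing two conditions
            free = true;
        }
        return free;
    }

    // Statement coverage:  a single test (60.0, true) executes every statement.
    // Decision coverage:   add (10.0, false) so the if takes both its true and false branch.
    // Condition coverage:  each condition must be true and false at least once,
    //                      e.g. (60.0, false) and (10.0, true) in addition to the above.
}
```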

3. Incremental Testing Technique

4. Thread Testing Technique

Testing Metrics
While testing a product, the test manager has to take a lot of decisions: when to stop
testing, when the application is ready for production, how to track testing progress, and
how to measure the quality of the product at a certain point in the testing cycle. Testing
metrics can help to take better and more accurate decisions.

Let's start by defining the term 'metric'.

A metric is a mathematical number that shows a relationship between two variables.
Software metrics are measures used to quantify status or results.

How to track testing progress?

The best way is to have a fixed number of test cases ready before the test execution cycle
begins. Testing progress is then measured by the total number of test cases executed.

% Completion = (Number of test cases executed) / (Total number of test cases)

Not only the testing progress but also the following metrics are helpful to measure the
quality of the product:

% Test cases Passed = (Number of test cases Passed) / (Number of test cases executed)

% Test cases Failed = (Number of test cases Failed) / (Number of test cases executed)

Note: A test case is Failed when at least one bug is found while executing it; otherwise it is
Passed.
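A small sketch computing the three metrics above; the counts are invented for illustration.

```java
// Small sketch computing the progress and quality metrics defined above.
public class TestingMetrics {

    public static void main(String[] args) {
        int totalTestCases = 200;         // fixed before the execution cycle begins
        int executed = 150;
        int passed = 120;
        int failed = executed - passed;   // failed = at least one bug found while executing

        double percentCompletion = 100.0 * executed / totalTestCases;   // 75%
        double percentPassed = 100.0 * passed / executed;               // 80%
        double percentFailed = 100.0 * failed / executed;               // 20%

        System.out.printf("Completion: %.1f%%, Passed: %.1f%%, Failed: %.1f%%%n",
                percentCompletion, percentPassed, percentFailed);
    }
}
```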

How many rounds or cycles of testing should be done?
or
When to stop testing?

Let's discuss a few approaches.
Approach 1: This approach requires that you have a fixed number of test cases ready
before the test execution cycle. In each testing cycle you execute all test cases. You stop
testing when all the test cases are Passed or the % failure is very low in the latest testing
cycle.

Approach 2: Make use of the following metrics.

Mean Time Between Failures: The average operational time it takes before a software
system fails.
Coverage metrics: The percentage of instructions or paths executed during tests.
Defect density: Defects related to the size of the software, such as "defects per 1000
lines of code".
Open bugs and their severity levels.

If the coverage of the code is good, the mean time between failures is quite large, the
defect density is very low and not many high severity bugs are still open, then 'maybe' you
should stop testing. 'Good', 'large', 'low' and 'high' are subjective terms and depend on the
product being tested. Finally, the risk associated with moving the application into
production, as well as the risk of not moving forward, must be taken into consideration.

Software Testing Challenges

As per William Perry and Randall Rice, the authors of the book 'Surviving the Top Ten
Challenges of Software Testing: A People-Oriented Approach', the following are the top ten
challenges of software testing:

1. Training in testing
2. Relationship building with developers
3. Using tools
4. Getting managers to understand testing
5. Communicating with users about testing
6. Making the necessary time for testing
7. Testing “over the wall” software
8. Trying to hit a moving target
9. Fighting a lose-lose situation
10. Having to say “no”

Testing Tools
Test Management Tools
These tools are used to manage the entire testing process. Most of the tools support the
following activities:

Requirements gathering
Test planning
Test case development
Test execution and scheduling
Analyzing test execution results
Defect reporting and tracking
Generation of test reports

The following are some of the prominent tools

Mercury TestDirector

SilkCentral Test Manager

Defect Tracking Tools
These tools are used to record bugs or defects uncovered during testing and track them
until they are completely fixed.

One of the free tools available on the web:

Bugzilla

Automation Tools
These tools record the actions performed on the application being tested, in a language they
understand, and wherever we want to compare the actual behaviour of the application with
the expected behaviour, we insert a verification point. The tool generates a script with the
recorded actions and inserted verification points. To repeat the test case, all we need to do
is play back (run) the script and, at the end of its run, check the result file.
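As an illustration of the record-actions-then-verify idea, here is a hedged sketch using Selenium WebDriver with JUnit (a tool not in the list below); the URL, element locators and expected text are hypothetical.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Illustrative script: replay recorded actions against the application and
// compare actual behaviour with expected behaviour at a verification point.
public class LoginScriptTest {

    @Test
    public void successfulLoginShowsWelcomeMessage() {
        WebDriver driver = new ChromeDriver();
        try {
            // Recorded actions
            driver.get("http://example.test/login");
            driver.findElement(By.id("username")).sendKeys("demo_user");
            driver.findElement(By.id("password")).sendKeys("demo_pass");
            driver.findElement(By.id("loginButton")).click();

            // Verification point: expected behaviour vs. actual behaviour
            String actual = driver.findElement(By.id("welcome")).getText();
            assertEquals("Welcome, demo_user", actual);
        } finally {
            driver.quit();
        }
    }
}
```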

Some of the prominent tools available are

WinRunner

Silk test
Rational Robot

Load testing/Performance testing tools

These tools can be used to identify the bottlenecks or areas of code which are severely
hampering the performance of the application. They can also be used to measure the
maximum load which the application can withstand before its performance starts to
degrade.

Some of the prominent load testing tools

Load Runner

Silk Performer

OpenSta

Code coverage tools

These tools can be very useful to measure the coverage of the test cases and to identify
gaps. The tool identifies the code that has not been run even once (hence not tested) while
running the test cases. You may have to sit with the developers to understand the code.
After analysis, the test cases should be updated with new ones to cover the missing code.
It is not cost effective to aim for 100% code coverage unless it is a critical application;
otherwise 70-80% is considered good coverage.

The following are some of the prominent tools

Rational PureCoverage

Clover

Unit testing tools

Unit testing is a white box testing technique, done by developers. The following are some of
the automated unit testing tools:

JUnit

Cactus

Why does software have to be tested?

Well, the simple reason is that the development 'process' is unable to produce defect-free
software. Even if the process did produce defect-free software, how would you know until
you test it? Would you have enough confidence that it will work without testing it? I don't
think so.
Testing not only identifies and reports defects but also measures the quality of the product,
which helps to decide whether to release the product or not.

Coming to why the development process is unable to produce defect-free software, blame it
on the ever increasing complexity of software products and on process variation.

What circumstances lead to bugs?

The following are some of the circumstances that cause bugs in a product:

1. Incorrect capturing of user needs
2. Misinterpretation of requirements by developers
3. Incorrect or inadequate translation of requirements to design
4. Incorrect or inadequate translation of design to code
5. Tests fail to detect a bug because of inadequate coverage
6. Regression issues: the fix of one bug leads to another bug

How to reduce the number of defects in a software product even before it goes to
system testing?
I can think of two ways:

1. Reduce the development process variation. Make the process as consistent as possible
by continuously improving it.
2. Catch the defects as early as possible by practising life cycle testing.

Why developers should not test?

Of course, they can test, but they can't be good testers.
If developers test their own work or the work of their peers, the following problems crop
up:

• Misunderstandings of the requirements/specifications will go unnoticed.
• Given the time, developers tend to allocate more time to improving the code or
documentation rather than to testing the code.
• They tend to be optimistic about producing defect-free work and thus 'under' test the
product.
• Testing needs skill; an occasional tester with no prior training in testing techniques is
no match for a trained bug hunter whose sole activity is testing.
• To catch a higher percentage of bugs, a tester needs to be aggressive. Nobody will be
aggressive if they are testing their own product. Testers are rewarded if they hunt lots of
bugs, developers are rewarded if the product they developed has fewer bugs, and this
balance can only be maintained if separate teams exist for testing and development.

Then who does unit testing and integration testing?
Of course, developers. It is very difficult for a tester to do unit testing and integration
testing, as it involves understanding the code. So developers have to do the unit testing and
integration testing. Misinterpretations of requirements might escape, but it is better to
test with these issues than not to test at all.
For better success, code developed by one developer should be unit tested by a peer.

System Testing Process

Let's start by defining the term 'process'. A process is nothing but a set of activities that
represent the way work is to be performed; simply put, it is a specific way of doing a certain
thing, which in our context is developing a software product.

The following is the team structure needed to implement the process:

Feature Owners: This team interacts directly with the customers. They gather
requirements and group them into features, and a single person in the team may 'own' one or
more features. They take the initiative, interact with different teams and provide direction
in developing their features.

UI Team: The user interface is very important for any product. No matter how many good
features a product has, if the UI is poor, the product is doomed. So we have a separate UI
team that does a lot of research in UI, knows the difference between a good UI and a bad
one, and specializes in designing the UI for any product.

This team will design the UI for our product or for features of the product. Feature owners
will sit with the UI team to get the UI for their features. They will basically come up with
some sort of 'page designs' or mockups. These mockups contain all the UI elements that
should be present in each page along with the look and feel; the navigation between pages
should also be working.

Development team: Develops the Product

Testing team: Testing team tests the product

Actual Process:
The process begins after the feature owners develop a high level design (HLD) document for
each feature. Apart from the document, the page designs or UI mockups should be released
for reference.

The development team starts coding the features, whereas the testing team starts
preparing the test cases.

First, before the actual test case preparation, it is always better to prepare a test
outline. A test outline basically contains multiple test scenarios/flows at a high level,
including information regarding what should be checked/verified at which point in the flow.
Apart from the flows, the document can contain a matrix which maps the requirements (the
HLD can give a unique ID for each requirement) from the HLD to the test flows, to ensure
that all requirements have been thoroughly checked and there is no redundancy.

Second, each test flow/scenario is converted into a single test case. The test case should be
very detailed: it should specify the exact navigation steps, the exact data and what should
be checked. This is especially helpful when the test case writers are different from the
test case executers.

Third, an optional step is automating these test cases with automation tools.

By the time the development team finishes coding the features and optionally does some
testing from their side, the testing team should be ready with test cases (in case of manual
execution) or automation scripts (in case of test execution with automated tools).

Once the testing cycle starts, the testing team tests the product and logs the bugs, whereas
the development team fixes the bugs.

It is always better to maintain two different application instances: one instance is for the
testing team to test, the other is for the development team/bug fix team. Both should be at
the same code level. When a bug is logged, it should first be verified whether it is
reproducible on the developer's instance. If it is, it should be assigned to the appropriate
developer for fixing. If it is not, the developer should sit with the testing team to find out
whether it is really a bug which requires a code fix or some kind of application setting issue
(these issues occur most commonly when testing a suite of products which are tightly
integrated). This approach has the following advantages:

1. If the issue is not reproducible on the dev instance, then most likely the issue is some
sort of setup issue.
2. After the bug is fixed, the code fix should first be applied on the dev instance and
verified, and then it should be applied to the testing instance.

How to reduce the number of regression issues?

The rule of thumb is that the more often the application code is changed, the higher the
number of regression issues. So don't patch the system frequently; the following is the
recommended patching policy.
In a multiple-testing-round scenario, patch all the bugs between test rounds, with the
exception of extremely critical bugs. Extremely critical bugs which are severely hampering
testing should be patched (the code fix applied) as and when each bug is fixed, but the rest
of the bugs should be fixed and kept ready, and patched between testing rounds. Sanity
testing by the developers should be done on the application instance after the patching is
done, and it should then be released for the next round of testing. In this round of testing
all the test cases (even those that passed in previous rounds) should be run again.

When should the testing be stopped?
In a multi-round scenario, whether to go for the next round or stop depends on the number
of bugs logged in the last round of testing. The following criteria can be used:
No new critical bugs/regression issues were found
The minor issues found are few ('few' is a relative term which depends on the application
being tested)

How to catch and resolve misunderstandings of the product requirements?

Since both the testing team and the development team start from the HLD, it might happen
that one (or, very rarely, both) of the teams misunderstands the requirements. This can be
caught early by having the testing documents and the code reviewed by the feature owners
before the testing cycle even starts.
But if these kinds of misunderstandings make it into the testing cycle, then testers might
log a bug which the developer thinks is the expected behaviour. In such cases, those bugs
must be brought to the feature owner's notice, and the feature owner decides what the
expected behaviour should be.

Life Cycle of a Software Bug


Once a bug (defect or error) is found, it should be communicated to the developers who can
fix it. Once the bug is fixed/resolved, the fix should be verified by the testers and the bug
should be closed.

The following topics are discussed in this page:

Bug information: Information that should be captured in the bug so that developers can
clearly understand the bug and fix it.
List of bug statuses
Lifecycle of some types of bugs
Analysis of bugs: Bugs logged during a testing phase are an invaluable source for improving
the existing testing processes.

Bug information

The following information should be captured in the bug so that developers can clearly
understand the bug, get an idea of its severity, and reproduce it if necessary. The developer
should also mention in the bug the cause of the problem, the steps taken to fix it (fix
description), the steps taken to verify the fix, and any information that helps to prevent
such issues in future.

Bug ID : A unique identifier(number) of the bug

Bug status: On the long road between logging a bug and fixing it, the status of a bug
communicates where it is, e.g. New, Assigned, Fixed, Closed.
A list of the different bug statuses is given below along with their descriptions.

Application details: Details of the application such as the application name, version, URL,
database details etc.

Component and/or subcomponent: The part of the application in which the bug was found
by the tester

Environment details: Such as operating system, hardware platform etc.

Severity/Criticality:

Priority: For bugs of the same severity, this field can be used to decide which ones to fix
first.

Test case name/number/identifier:

Subject: One-line description of the bug

Bug Description: A detailed description of the bug

Reproducible steps: A step by step description to reproduce the bug

Data used:

Additional information: File excerpts, error messages, log file excerpts, and screenshots
that would be helpful in finding the cause of the problem or fixing it.

Tester name:

Tester contact details:

Bug reporting date and time:

Assigned to: Developer to which the bug is assigned.

Description of problem cause:

Description of fix:

Code section/file/module/class/method that was fixed:

Date of fix:

Version of the file that contains the fix:


List of Bug statuses

New: When a bug is found, the tester logs the bug and the status of ‘New’ is assigned to
the bug.

Assigned: The development team verifies if the bug is valid. If the bug is valid, development
leader assigns it to a developer to fix it and a status of ‘Assigned’ is set to it.

Additional Information Requested: When the developer/dev lead needs more information
from the tester to understand the bug

Not Reproducible: When dev lead could not reproduce the bug

Not a Bug: Invalid bug(a bug that does not require any code fix)

Duplicate Bug: A bug has already been logged for the same issue

Deferred: The fix of the bug is postponed to some future release.

Fixed but not patched: The bug is resolved but the fix is yet to be pushed to the testing
instance.

Ready for retesting : The fix is pushed to testing instance and ready for retesting by
tester

Closed, fix verified: The tester verifies the fix and the bug is resolved completely

Closed, Not a bug: The tester verifies the bug and finds that it does not require a code fix

Closed, Duplicate bug:

Reopened: The tester verifies and finds the bug is not fixed (either completely or partially)

Lifecycle of some types of bugs

Valid bug: New -> Assigned -> Fixed but not patched -> Ready for retesting -> Closed, fix
verified

Invalid bug: New -> Not a Bug -> Closed, Not a bug

Duplicate bug: New -> Duplicate Bug -> Closed, Duplicate bug

Reopened bug: New -> Assigned -> Fixed but not patched -> Ready for retesting -> Reopened
-> Fixed but not patched -> Ready for retesting -> Closed, fix verified
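A minimal sketch modelling the statuses above and the valid-bug path as Java types; it is only an illustration of the lifecycle, not a real defect-tracking API.

```java
import java.util.List;

// Minimal sketch: the bug statuses listed above modeled as an enum, plus the
// transition path of a typical valid bug.
public class BugLifecycle {

    enum Status {
        NEW,
        ASSIGNED,
        ADDITIONAL_INFORMATION_REQUESTED,
        NOT_REPRODUCIBLE,
        NOT_A_BUG,
        DUPLICATE_BUG,
        DEFERRED,
        FIXED_BUT_NOT_PATCHED,
        READY_FOR_RETESTING,
        CLOSED_FIX_VERIFIED,
        CLOSED_NOT_A_BUG,
        CLOSED_DUPLICATE_BUG,
        REOPENED
    }

    // Lifecycle of a valid bug, as described above.
    static final List<Status> VALID_BUG_PATH = List.of(
            Status.NEW,
            Status.ASSIGNED,
            Status.FIXED_BUT_NOT_PATCHED,
            Status.READY_FOR_RETESTING,
            Status.CLOSED_FIX_VERIFIED);

    public static void main(String[] args) {
        VALID_BUG_PATH.forEach(status -> System.out.println(status));
    }
}
```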

Analysis of bugs

Bugs logged during a testing phase are an invaluable source for improving the existing
testing processes. The holy grail for any testing team is zero customer bugs. Once a product
is released, the majority of the customer bugs come within 6 months to 1 year of product
usage.
Immediately after the testing of a product is over, the following can be done:

- The testing team should analyze all the invalid/duplicate/could-not-be-reproduced bugs and
come up with measures to reduce their count in future testing efforts.

Once customer bugs start pouring in, the following can be done:

- The testing team should analyze each and every customer bug, find out why it was missed
in their testing effort, and take appropriate measures.

Automated Testing Articles

Automation Tools - When to use them and how best to use them

When to use automation tools like WinRunner, Rational Robot etc.?
Automating test cases and maintaining them requires significant effort, so before investing
both money and effort in it, do some cost-benefit analysis. Ask yourself the following
questions. To answer them, you may need to run a small pilot project automating some test
cases and executing them with the preferred tools.

After automating a test case, how many times will you use/run the automation script for
testing before dumping it forever or modifying/upgrading it for future releases?
It takes roughly 3 times the effort to automate a test case than to manually execute it
once. The more times you run/use the automation script without modifying it, the greater
the ROI (Return on Investment); you should be able to use it at least 5 times. So don't go
for automation tools if the product is a short-term product which doesn't involve many
rounds of testing.
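A small sketch of this rule-of-thumb cost comparison; all numbers are assumptions chosen to match the 3x guideline, with each automated run assumed to cost a fraction of a manual run.

```java
// Sketch of the rule-of-thumb cost comparison above. The figures are illustrative:
// automating a case costs roughly 3x one manual execution, and each automated run
// is assumed to cost a fraction of a manual run.
public class AutomationRoi {

    public static void main(String[] args) {
        double manualRunHours = 1.0;                         // one manual execution
        double automationBuildHours = 3.0 * manualRunHours;  // ~3x rule of thumb
        double automatedRunHours = 0.2;                      // assumed cost per automated run

        for (int runs = 1; runs <= 10; runs++) {
            double manualCost = runs * manualRunHours;
            double automatedCost = automationBuildHours + runs * automatedRunHours;
            System.out.printf("%2d runs: manual %.1f h, automated %.1f h%s%n",
                    runs, manualCost, automatedCost,
                    automatedCost < manualCost ? "  <- automation cheaper" : "");
        }
    }
}
```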

Is 'time spent running the automation script' + 'time spent analysing the test results' less
than 'time spent on manual execution'?
Usually an automated script runs faster than manual execution, though there are exceptions
to this assumption. Also, we first run the automation script completely and then analyze the
test results, which can take considerable time; when we execute manually, we don't need
separate time to analyze the results, as we already know which actions caused a checkpoint
to fail, whereas in the automated case we need to do some investigation into which actions
resulted in a checkpoint failing.
So the 'time spent running the automation script' + 'time spent analysing the test results'
must be less than the 'time spent on manual execution'.
Of course, if the automation scripts are well developed, don't require constant monitoring
and run without manual intervention, then you can kick them off at night, come back and
check the results in the morning. This way you can cut the time spent running the scripts.

Is it reusable for future releases of the product? If yes, how much effort will be required
to upgrade the automation script?
The ROI will be greater if, with little change, you can completely reuse the automation script.

Is the product stable, or does it keep changing so often that the automation script needs
to be modified?
Never develop automation scripts for products that are not yet stable. An unstable product
keeps changing quite often, resulting in changes to the automation script.

Are the test cases fit to be automated?

Certain test cases are not fit to be automated. Identify them and don't automate them. The
criteria for identifying such cases change from product to product. I will share an example
from my own experience. We automated a complex test case, which took a long time to run.
The product was a highly integrated application, so the test case often jumped from one
application to another. The product's performance was not consistent. Adding to this, if
somebody changed some settings of one application, then sometimes the test case came to a
sudden stop. If the automation script stopped running because of any of the above-mentioned
issues, then, because of the nature of the test case and the way the automation script was
developed, we had to start running it again right from the beginning. So the end result was
that the time taken to run the automation script was far more than the time taken to
execute it manually.
Test cases fit for automation run from end to end without any manual intervention and
verify all the checkpoints they intend to. This partly depends on how the automation script
is developed.

How best to use automation tools?


Additional tips to make the best use of automation tools

Use automation tools for performing tedious and repetitive tasks, so that the testers can
concentrate on more important tasks. One example is performing sanity testing.

Use automation tools to test the breadth of the entire product/application being tested. Leave
the job of in-depth testing to test engineers and manual execution.

Don't automate highly complex test cases. It's better to execute them manually.
Automation scripts whose execution time is less than 2 hours often run smoothly.

Don't completely forgo manual testing. Automation scripts do exactly what they are programmed
to do; they don't deviate. That is both their advantage and their disadvantage. After the
first few rounds of testing, if you run the same set of test cases, you won't catch many bugs;
to catch more, we need to deviate from the scripted flows, and that is something we humans can
do while testing manually.

Winrunner Notes - Basic


Like every other automation tool, WinRunner records our actions on the application being
tested and automatically generates a script in TSL (Test Script Language), which is similar to
the C language. Wherever we want to verify the application behaviour, we can insert
verification points in the script and, presto, our automation script is ready. Following are
quick notes on WinRunner basics.

Recording Modes

Winrunner supports 2 types of recording modes - Context sensitive recording and Analog
recording.

Context Sensitive: Context Sensitive mode records your actions on the application being
tested in terms of the GUI objects you select (such as windows, lists, and buttons), while
ignoring the physical location of the object on the screen. Every time you perform an
operation on the application being tested, a TSL statement describing the object selected
and the action performed is generated in the test script.

Analog: Analog mode records mouse clicks, keyboard input, and the exact x and y
coordinates traveled by the mouse. When the test is run, WinRunner retraces the mouse
tracks. Use Analog mode when exact mouse coordinates are important to your test, such as
when testing a drawing application.
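
As a rough illustration against a hypothetical 'Login' window (the window and object names are
made up for this example), Context Sensitive recording typically produces object-level TSL
statements like the ones below, whereas Analog recording of the same actions would instead
produce low-level mouse-track and keystroke statements tied to screen coordinates.

# Context Sensitive recording - actions are captured against named GUI objects
set_window ("Login", 5);          # make the "Login" window active (up to 5-second timeout)
edit_set ("Name:", "tester1");    # type text into the edit field with logical name "Name:"
button_press ("OK");              # press the push button with logical name "OK"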

GUI Map

Each object has a defined set of properties that determines its behavior and appearance.
WinRunner learns these properties and stores them in a separate file called GUI Map. The
tool uses these properties to identify and locate GUI objects during a test run.

Winrunner supports two types of GUI Maps - Global GUI Map file and GUI Map file per test

Global GUI Map file: Multiple tests can reference a common GUI map file. The advantage of
this type is that if an object description changes and this object is referred to in multiple
tests, then you need to make the change in only one file, provided all these tests use the
same global GUI map file. You also save memory by maintaining one GUI map file instead of one
for each test. There are disadvantages too. If we use this type, then we need to explicitly
create, save, load and unload the file; these tasks won't be taken care of by WinRunner
automatically. One more disadvantage is that if the file is used by multiple people, care
should be taken that one person's changes to the file don't override the changes made by
another.

GUI Map file per test: For each test created, WinRunner creates and maintains a separate
GUI map file. The main advantage is that you do not need to worry about creating, saving, and
loading GUI map files; WinRunner does it automatically. This is recommended for beginners.
The disadvantage is that if an object's description changes, then all GUI map files referring
to it need to be changed, so maintenance is a little tedious.

Logical name vs Physical Description: The logical name is actually a short nickname for the
object's lengthy physical description. The physical description contains a list of the
object's physical properties. The logical name and the physical description together ensure
that each GUI object has its own unique identification. In the actual test, you usually refer
to objects by their logical names. WinRunner consults the GUI map file for the test and
retrieves the physical description to identify the object.
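
A minimal sketch of what this pairing looks like (the object name is hypothetical and the
exact GUI map file syntax may vary slightly between WinRunner versions): the GUI map stores
the physical description against the logical name, and the test script uses only the logical
name.

# Entry in the GUI map file: logical name "OK" mapped to its physical description
OK:
{
 class: push_button,
 label: "OK"
}

# In the test script, the object is referred to only by its logical name
button_press ("OK");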

Learning GUI objects for the Global GUI Map file: When you work in the Global GUI Map File
mode, you need to teach WinRunner the information it needs about the properties of GUI
objects. WinRunner can learn this information in the following ways:

-By using the RapidTest Script wizard to learn the properties of all GUI objects in every
window of your application
-By recording your actions on the application, whereby WinRunner learns the properties of all
GUI objects on which you performed actions
-By using the GUI Map Editor to learn the properties of an individual GUI object, a window,
or all GUI objects in a window

Finding an Object or Window in the GUI Map: When the cursor is on a statement in your
test script that references a GUI object or window, you can right-click and select Find in
GUI Map.

Checkpoints

Checkpoints enable you to compare the current behavior of your application to its expected
behavior. You can add four types of checkpoints to your tests:

1. GUI checkpoints check information about GUI objects. For example, you can check
that a button is enabled or see which item is selected in a list.
2. Database checkpoints check the data content in a database.
3. Text checkpoints read text in GUI objects and in bitmaps, and enable you to check
their contents.
4. Bitmap checkpoints compare a "snapshot" of a window or an area in your application
to an image captured in an earlier version.
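
As a rough TSL sketch of how such checkpoints appear in a script (the window, object,
checklist and expected-results names below are placeholders; in practice WinRunner typically
generates these statements and the associated files for you when you insert a checkpoint from
the tool):

# GUI checkpoint - compare properties of the "OK" button against stored expected results
set_window ("Login", 5);
obj_check_gui ("OK", "list1.ckl", "gui1", 1);

# Bitmap checkpoint - compare a snapshot of the "Login" window with captured image "Img1"
win_check_bitmap ("Login", "Img1", 1);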

Running Tests

When you run a test, WinRunner interprets your test, line by line. As the test runs,
WinRunner operates your application as though a person were at the controls. WinRunner
provides three run modes.

1. Verify mode, to check your application
2. Debug mode, to debug your test
3. Update mode, to update the expected results

Debugging Tools

If a test stops running because it encountered an error in syntax or logic, several tools can
help you to identify and isolate the problem.

1. Step commands run a single line or a selected section of a test.
2. Breakpoints pause a test run at pre-determined points, enabling you to identify
   flaws in your test.
3. The Watch List monitors variables, expressions and array elements in your test. During a
   test run, you can view the values at each break in the test run, such as after a Step
   command, at a breakpoint, or at the end of a test.

Supported Environments

WinRunner includes support for testing applications developed with PowerBuilder, Visual
Basic, ActiveX, and MFC.
Mercury Interactive also provides testing solutions for other leading application
development and deployment environments such as the Web, Java, Enterprise Resource
Planning (ERP) applications, Wireless Application Protocol (WAP), Oracle, Delphi, and Siebel.

Winrunner Q&A
How to increase maximum string length in winrunner?
Search for and open the script 'wr_gen', which will usually be in the following directory -
'Program Files\Mercury Interactive\WinRunner\lib\'. wr_gen is a startup script for
WinRunner. Find the variable MAX_STR_LEN; its default value will be 1024. Increase the
variable's value to increase the maximum string length.

How to run a winrunner script through command prompt?


The syntax is as follows (it is not comprehensive):

"<path to wrun.exe>\wrun.exe" -t <path to winrunner script to be executed>\script_name -f <path to a text file containing command line options>\<text file name>

We can either specify the command line options in a text file and use it via the '-f' option,
or specify all of them directly in the command itself.

Example: To run the winrunner script 'Sample' through the command prompt, in batch mode, in
'Verify' run mode, with the result file 'comm_line2', the command is:

"C:\Program Files\Common Files\Mercury Interactive\SharedFiles\JavaAddin\wrun.exe" -t J:\Automation\myfiles\Sample -f J:\Automation\myfiles\comm_line_options.txt

------------------comm_line_options.txt---------------------
-batch on
-run
-dont_quit
-dont_show_welcome
-verify comm_line2
---------------------End---------------------------------

The 'dont_quit' option keeps WinRunner open even after it finishes executing the script.
The 'dont_show_welcome' option suppresses the welcome window when WinRunner is launched.

How to auto-start the winrunner tool the moment the system is booted/started?
This is a generic question. The following method is one way to start any application in
Windows OS during startup.

-Start->Run
-Enter 'regedit', click Ok
-Navigate to 'HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run'
-Right click on the 'Run' folder. Select New -> String Value
-Type some name and press Enter
-Double click on the name; an 'Edit String' dialog box pops up
-Enter the full path of wrun.exe
-Click Ok
-Restart the system and verify

How to connect to database and execute a sql query in winrunner?


The following are the winrunner statements to do this, in the order of their execution

#Step1 - Connect to a database, establish a session
#Note: Below 'session1' is the session name. db_server, db_uid, db_pwd are the variable
names that contain the database connectivity values.
db_connect("session1","DSN="&db_server&";UID="&db_uid&";PWD="&db_pwd&";DBQ="&db_server&";");

#Step2 - Execute a query
#Note: Below RES contains the number of rows returned for the query
db_execute_query("session1", "Select * from tab", RES);

#Step3 - Get the rows returned by the query, one by one
#Note: Below 0 indicates the first row returned; temp_id receives the contents of that row
db_get_row("session1", 0, temp_id);

#Step4 - Disconnect
db_disconnect("session1");
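
A small follow-on sketch (purely illustrative; the loop variable, row_content and the
report_msg call are my own additions, not part of the original steps): since RES from Step2
holds the number of rows returned, you can iterate over the whole result set between Step3
and Step4, before disconnecting.

# Loop through all rows returned by the query in Step2
for (i = 0; i < RES; i++)
{
    db_get_row("session1", i, row_content);   # row_content receives the contents of row i
    report_msg(row_content);                  # write the row contents to the test results
}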

Any other way to connect to database and execute a sql query in winrunner?
By using a function called 'invoke_application'.
The invoke_application command runs a Windows executable (*.exe file). Test execution is
paused while this operating system command is executed. Following are its parameters:

file - The full path of the application (.exe file) to invoke.
command_option - The command line options to apply.
working_dir - The working directory for the specified application. This is the location that
the application uses to load or store files when the user does not specify a directory.
sw_show - Specifies how the application appears when opened. SW_SHOW activates the window and
displays it in its current size and position. You can use several other values as well; check
the WinRunner help file for more options for this argument.

The following is how to use this function to connect to the db and execute a query:

invoke_application("sqlplus",
    env_user&"/"&env_pw&"@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST="&env_server&")(PORT="&env_port&"))(CONNECT_DATA=(SID="&env_dbq&"))) @"&testpath&"\sql_file\sample_query.sql "&tabl_name&" "&spool_num,
    getvar("testname"), SW_SHOW);

env_user, env_pw, env_server, env_port, env_dbq are the variables that contain the
database connectivity information.
sample_query.sql is the file that contains the sql query that needs to be executed and the
output file name which will contain the query's output.

#Contents of sample_query.sql
set serveroutput on size 100000
set head off
set linesize 999
-- output_file below is the file that will hold the spooled query output
spool output_file
Select * from tab;
spool off;
exit;

Testing Templates

Test Plan Template


Test Planning: The selection of techniques and methods to be used to validate the product
against its approved requirements and design. In this activity we assess the software
application risks, and then develop a plan to determine whether the software minimizes those
risks. We document this planning in a Test Plan document.

Download the template


Find below the links to download the template in either word format or web page single
archive file.

Explanation of different sections in the template

Document Signoff: Usually a test plan document is a contract between the testing team and all
the other teams involved in developing the product, including higher management. Before
signoff, all interested parties thoroughly review the test plan and give feedback, raising
issues or concerns, if any. Once everybody is satisfied with the test plan, they sign off the
document, which is the green signal for the testing team to start executing the test plan.
Change History: Under this section, you specify who changed what in the document and when,
along with the version of the document which contains the changes.

Review and Approval History: This captures who reviewed the document and whether they
approved the test plan or not. The reviewer may suggest changes or comments (if any) to be
incorporated in the test plan.

Document References: Any additional documents that will help the reader better understand the
test plan, such as design documents and/or requirements documents.

Document Scope: In this section specify what the test plan covers and who its intended
audience is.

Product Summary: In this section briefly describe the product that is to be tested.

Product Quality Goals: In this section describe the important quality goals of the product.
Following are some typical quality goals:
-Reliability: proper functioning as specified and expected.
-Robustness: acceptable response to unusual inputs, loads and conditions.
-Efficiency of use for frequent users.
-Ease of use even for less frequent users.

Testing Objectives: In this section specify the testing goals that need to be accomplished
by the testing team. The goals must be measurable and should be prioritized. The following
are some example test objectives:
-Verify functional correctness.
-Test product robustness and stability.
-Measure performance 'hot spots' (locations or features that are problem areas).

Assumptions: In this section specify the expectations which, if not met, could have a
negative impact on the execution of this test plan. Some of the assumptions can be about the
test budget that must be allocated, the resources needed, etc.

Testing Scope: In this section specify ‘what will be covered in testing’ and ‘what will not be
covered’.

Testing Strategy: In this section specify different testing types used to test the product.
Tools needed to execute the strategy are also specified.

Testing Schedule: In this section specify first the entire project schedule and then the
detailed testing schedule.

Resources: In this section specify all the resources needed to execute the plan successfully.

Communication Approach: In this section specify how the testing team will report bugs to
development, how it will report testing progress to management, and how it will report issues
and concerns to higher-ups.

Test Outline Template


Test Outline: This document is written before writing test cases. It is a planning document
in which the flows or scenarios are written at a high level. These flows or scenarios are
later expanded into test cases, in which they are written in detail. The biggest advantage of
writing this document before going to test cases is the 'traceability matrix', where you
ensure that the project/feature is sufficiently and thoroughly covered by the individual test
cases.

Download the template


Find below the links to download the template in either word format or web page single
archive file.

Explanation of different sections in the template

Change History: Under this section, you specify who changed what in the document and when,
along with the version of the document which contains the changes.

Review and Approval History: This captures who reviewed the document and whether they
approved the test outline or not. If approved, the reviewer will specify the review
comments (if any) to be incorporated in the test outline. There is a review template at the
end of the Test Case Template doc, which can be used to specify the comments for the test
outline as well. If the test outline document is 'Not Approved', then either the scenarios
mentioned are not sufficient or the scenarios are in very bad shape (not in a state to be
reviewed), etc.

Document References: Any additional documents that will help the reader better understand the
test outline document, such as design documents or requirements documents.

Projects Covered in Test Outline: Projects can be features of the product or modules
which are covered in the test outline document.

Traceability Matrix: This matrix is filled in after all the scenarios in the outline have
been written. It ensures that all requirements or features are sufficiently covered by the
test cases and that none are missing. So you map the requirement or feature and sub-feature
to the test case that will cover it. The following IDs uniquely identify the requirements or
features and sub-features; you can add your own IDs based on need.
REQ_ID = Requirement ID from the SRS document
DD_ID = Detailed Design ID from the Detailed Design document
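A minimal illustration of such a mapping (the IDs and test case numbers are hypothetical):
REQ_015 / DD_07 -> covered by test cases TC_021 and TC_022
REQ_016 / DD_08 -> covered by test case TC_023
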
Setup Requirements: Any setup that has to be done in the application being tested, prior to
executing this test case, should be mentioned here. For example, if the test case needs
certain login IDs with certain settings to begin with, which are not created as part of the
test case, then such things need to be mentioned in this section.

Test Objectives: Specify at a very high level, what the test case is intended to achieve or
verify.

Test Case Limitations: Does the test case achieve the above-mentioned test objective
completely, or are there any exceptions? These exceptions need to be specified in this
section. For example, if the test case has to verify 'something' on type A, type B and
type X, but for some reason it could NOT verify that 'something' on type X, then that is a
limitation.

Test Case Dependencies / Assumptions: Does any other test case need to be run prior to
executing this test case? All such dependencies need to be mentioned here.

Process Flow: In this section, we specify at a high level what the flow of the test case is.
Suppose there are multiple users in the test case; then a process flow can look like:
user1: does something
user2: does something else
user1: does again something
user2: says good bye

Test Outline Table column - 'User': Who has to perform the action. Suppose, in an
application, there are two roles, 'Buyer' and 'Supplier'; then the user can be those role
names.

Test Outline Table column - 'Action': Under Action you specify the following:
Flow Name - A high-level name given to the action performed by the user. Suppose the Buyer
has to create certain purchase orders in the application; then the flow name can be 'Create
Purchase Orders'.
Description - The following things should be mentioned here at a high level:
-A description of what actions should be performed
-The type or characteristics of the data to be used
-What should be verified or checked after performing the action

Effort Estimates: In this section you specify the effort needed to write each test case
and the effort needed to execute them.

Test Case Template


Download the template
Find below the links to download the template in either word format or web page single
archive file.

Explanation of different sections in the template

Change History: Under this section, you specify who changed what in the document and when,
along with the version of the document which contains the changes.

Review and Approval History: This captures who reviewed the document and whether they
approved the test case or not. If approved, the reviewer will specify the review comments
to be incorporated in the test case. There is a review template at the end of the template
document, which can be used to specify the comments. If the test case document is 'Not
Approved', then either the test case is not necessary (redundant) or it is in very bad
shape (not in a state to be reviewed).

Document References: Any additional documents that will help the reader better understand the
test case, such as test outlines, design documents or requirements documents.

Introduction/Overall Test Objectives: Specify at a very high level, what the test case is
intended to achieve or verify.

Test Case Limitations: Does the test case achieve the above-mentioned test objective
completely, or are there any exceptions? These exceptions need to be specified in this
section. For example, if the test case has to verify something on type A, type B and type X,
but for some reason it could NOT verify that something on type X, then that is a limitation.

Test Case Dependencies / Assumptions: Does any other test case need to be run prior to
executing this test case? All such dependencies need to be mentioned here.

Setup Requirements: Any setup that has to be done in the application being tested, prior to
executing this test script, should be mentioned here. For example, if the test case needs
certain login IDs with certain settings to begin with, which are not created as part of the
test case, then such things need to be mentioned in this section.

Process Flow: In this section, we mention who does what in the test case. Suppose there
are multiple users in the test case; then a process flow can look like:
user1: does something
user2: does something else
user1: does again something
user2: says good bye

Test Case: The actual test case begins in section 5, which can be further divided into
subsections as convenient and needed. For example, if the test case is for an integrated
application, then every time we log in to a new application, we can have a new subsection.
Following is an example of what a test case step looks like:
Step Num: 1
Step Description: check login
Path and Action: Enter user name, enter password, click Login
Test Data: abcd, abcd
Expected Results: Verify an error message is thrown stating that the username and password
entered are wrong

Appendix: This section contains any additional data that the test case refers to. For
example, if your test case has large amounts of 'Test Data' which are difficult to put under
the column 'Test Data' for each step, then you can use the appendix section to hold the data
and, in the test case, give a reference to the appendix.

Test Case Review Template: This template can be used by the reviewers to provide their
review comments. They can classify the comments based on their severity. The test engineer
who incorporates the comments in the test case should specify the action taken in the
template and then 'Close' the comment.

Costly Software Bugs


Software Bugs Cost U.S. Economy $59.5 Billion Annually, RTI Study Finds

Research Triangle Park, NC -- Software bugs are costly both to software producers and
users. Extrapolating from estimates of the costs in several software-intensive industries,
bugs may be costing the U.S. economy $59.5 billion a year, about 0.6 percent of gross
domestic product, says a study conducted by RTI for the U.S. Department of Commerce's
National Institute of Standards and Technology (NIST).

"More than half of the costs are borne by software users, and the remainder by software
developers and vendors," NIST said in summarizing the findings. "More than a third of
these costs … could be eliminated by an improved testing infrastructure that enables earlier
and more effective identification and removal of software defects."

History's Worst Software Bugs

What seems certain is that bugs are here to stay. Here, in chronological order, is the Wired
News list of the 10 worst software bugs of all time … so far.

July 28, 1962 -- Mariner I space probe. A bug in the flight software for the Mariner 1
causes the rocket to divert from its intended path on launch. Mission control destroys the
rocket over the Atlantic Ocean. The investigation into the accident discovers that a formula
written on paper in pencil was improperly transcribed into computer code, causing the
computer to miscalculate the rocket's trajectory.
1982 -- Soviet gas pipeline. Operatives working for the Central Intelligence Agency
allegedly plant a bug in a Canadian computer system purchased to control the trans-
Siberian gas pipeline. The Soviets had obtained the system as part of a wide-ranging effort
to covertly purchase or steal sensitive U.S. technology. The CIA reportedly found out about
the program and decided to make it backfire with equipment that would pass Soviet
inspection and then fail once in operation. The resulting event is reportedly the largest non-
nuclear explosion in the planet's history.

1985-1987 -- Therac-25 medical accelerator. A radiation therapy device malfunctions
and delivers lethal radiation doses at several medical facilities. Based upon a previous
design, the Therac-25 was an "improved" therapy system that could deliver two different
kinds of radiation: either a low-power electron beam (beta particles) or X-rays. The Therac-
25's X-rays were generated by smashing high-power electrons into a metal target
positioned between the electron gun and the patient. A second "improvement" was the
replacement of the older Therac-20's electromechanical safety interlocks with software
control, a decision made because software was perceived to be more reliable.

What engineers didn't know was that both the 20 and the 25 were built upon an operating
system that had been kludged together by a programmer with no formal training. Because
of a subtle bug called a "race condition," a quick-fingered typist could accidentally configure
the Therac-25 so the electron beam would fire in high-power mode but with the metal X-
ray target out of position. At least five patients die; others are seriously injured.

1988 -- Buffer overflow in Berkeley Unix finger daemon. The first internet worm (the
so-called Morris Worm) infects between 2,000 and 6,000 computers in less than a day by
taking advantage of a buffer overflow. The specific code is a function in the standard
input/output library routine called gets() designed to get a line of text over the network.
Unfortunately, gets() has no provision to limit its input, and an overly large input allows the
worm to take over any machine to which it can connect.

Programmers respond by attempting to stamp out the gets() function in working code, but
they refuse to remove it from the C programming language's standard input/output library,
where it remains to this day.

1988-1996 -- Kerberos Random Number Generator. The authors of the Kerberos
security system neglect to properly "seed" the program's random number generator with a
truly random seed. As a result, for eight years it is possible to trivially break into any
computer that relies on Kerberos for authentication. It is unknown if this bug was ever
actually exploited.

January 15, 1990 -- AT&T Network Outage. A bug in a new release of the software that
controls AT&T's #4ESS long distance switches causes these mammoth computers to crash
when they receive a specific message from one of their neighboring machines -- a message
that the neighbors send out when they recover from a crash.
One day a switch in New York crashes and reboots, causing its neighboring switches to
crash, then their neighbors' neighbors, and so on. Soon, 114 switches are crashing and
rebooting every six seconds, leaving an estimated 60 thousand people without long distance
service for nine hours. The fix: engineers load the previous software release.

1993 -- Intel Pentium floating point divide. A silicon error causes Intel's highly promoted
Pentium chip to make mistakes when dividing floating-point numbers that occur within a
specific range. For example, dividing 4195835.0/3145727.0 yields 1.33374 instead of
1.33382, an error of 0.006 percent. Although the bug affects few users, it becomes a public
relations nightmare. With an estimated 3 million to 5 million defective chips in circulation,
at first Intel only offers to replace Pentium chips for consumers who can prove that they
need high accuracy; eventually the company relents and agrees to replace the chips for
anyone who complains. The bug ultimately costs Intel $475 million.

1995/1996 -- The Ping of Death. A lack of sanity checks and error handling in the IP
fragmentation reassembly code makes it possible to crash a wide variety of operating
systems by sending a malformed "ping" packet from anywhere on the internet. Most
obviously affected are computers running Windows, which lock up and display the so-called
"blue screen of death" when they receive these packets. But the attack also affects many
Macintosh and Unix systems as well.

June 4, 1996 -- Ariane 5 Flight 501. Working code for the Ariane 4 rocket is reused in
the Ariane 5, but the Ariane 5's faster engines trigger a bug in an arithmetic routine inside
the rocket's flight computer. The error is in the code that converts a 64-bit floating-point
number to a 16-bit signed integer. The faster engines cause the 64-bit numbers to be larger
in the Ariane 5 than in the Ariane 4, triggering an overflow condition that results in the
flight computer crashing.

First Flight 501's backup computer crashes, followed 0.05 seconds later by a crash of the
primary computer. As a result of these crashed computers, the rocket's primary processor
overpowers the rocket's engines and causes the rocket to disintegrate 40 seconds after
launch.

November 2000 -- National Cancer Institute, Panama City. In a series of accidents,
therapy planning software created by Multidata Systems International, a U.S. firm,
miscalculates the proper dosage of radiation for patients undergoing radiation therapy.

Multidata's software allows a radiation therapist to draw on a computer screen the
placement of metal shields called "blocks" designed to protect healthy tissue from the
radiation. But the software will only allow technicians to use four shielding blocks, and the
Panamanian doctors wish to use five.

The doctors discover that they can trick the software by drawing all five blocks as a single
large block with a hole in the middle. What the doctors don't realize is that the Multidata
software gives different answers in this configuration depending on how the hole is drawn:
draw it in one direction and the correct dose is calculated, draw in another direction and
the software recommends twice the necessary exposure.

At least eight patients die, while another 20 receive overdoses likely to cause significant
health problems. The physicians, who were legally required to double-check the computer's
calculations by hand, are indicted for murder.

Interview Tips
I have interviewed a few candidates for my company. A few tips.

Usually interviewers form an impression of whether the candidate is good or not within 5
minutes of seeing and talking to him/her. The rest of the time, they try to validate whether
their impression is correct or wrong (at least this is what I do). So it's important that you
make a good impression in the first few minutes.

Dress code

Well, the dress you wear for the interview is not everything, but if you dress well, it will
help you make a good initial impression. The color of the dress gives a certain impression,
it seems. (I don't remember paying attention to the dress a candidate wore, but it seems it
would have affected my impression at a subconscious level. And what do you lose by dressing
up well?)

• Red: You wear it to attract other people's attention. If you are presenting something or
you want others to pay attention to you, dress in red. This is not the best color to wear
for an interview.
• Blue: If you want other people to like you, dress in blue.
• White: It gives the impression that you are hardworking. Wear this. I mean not entirely in
white, but the predominant color should be white. A black trouser and a white shirt should do.

Resume

• Whatever you mention in the resume, you must be thorough in it.
• Be honest about what you put in it.
• Try to keep it small; it should not be longer than 2 to 3 pages.
• One of the main reasons for rejection is that something is mentioned in the resume and,
when asked questions about it, the candidate is not able to answer them, in a few cases
saying "I did that 'something' long back and I don't remember anything". In many cases it is
true that you used a tool or something a year back and you don't remember much about it; in
that case, before attending the interview, brush up on at least some basics regarding it, do
some homework and prepare yourself to talk about it, or don't mention it in the resume if it
is not that important.

Topics to prepare for an entry-level testing job

Testing fundamentals: Testing terminology, definitions etc.

Testing processes: The testing processes used in your current/previous company. Knowledge of
life cycle testing etc.

Testing tools: Given a problem, how to solve it using testing tools.

Testing deliverables: Being able to write a test plan or test case for a given product or
set of requirements.

Analytical skills: Solving puzzles.

Programming skills (optional): Being able to write pseudo code for a given problem.

Few more tips

Good communication skills: obvious, isn't it?

Good listening or understanding skills: Be able to understand the questions posed by the
interviewer. Try your best to fully understand what the interviewer is asking the first time,
but if what he says is not clear, it is better to ask, or to confirm that what you heard is
correct, before answering.

Confidence level: Watch your body language, keep eye contact, don't be over-confident, just
be confident.

Two most important tips: You should have the right skills for the job. Equally important, you
should SHOW or DISPLAY in front of the interviewer that you have the right skills.

$$$All the best for your job hunting$$$
