
A Document to Become an EFFECTIVE Tester

By Arunkumar Nehru

Why is Software Testing required (why does software have defects)?

No software can be 100% defect-free. The humans who develop software are fallible and prone to mistakes, so Software Testing is required to find the defects those mistakes introduce before they harm the software.
Sometimes Software Testing is also required to meet contractual, regulatory or legal obligations.
NOTE: Defects in software, systems or documents may result in failures, but not all defects do so.

Impacts when no proper Software Testing is done

Loss of Money/Time
Injuries or Death
Bad reputation

Why does Software have defects?

It is because of:

Time or deadline pressure
High complexity
Lack of information about the software or requirements
Lack of experience or skills
Frequent changes in the software life cycle which are not properly traced

What is SOFTWARE TESTING?

Systematic evaluation/examination of software or software engineering work products to find defects.
NOTE:
A human being can make an Error, which produces a Defect in the program code or in a document. If a Defect is executed, the system may fail to do what it should do (or do something it shouldn't), causing a Failure.
Defects in software, systems or documents may result in failures, but not all defects do so.

What are the Objectives of Testing?

To assist in improving the software quality
To measure the level of software quality
To assist in preventing defects in the future (Root Cause Analysis)

Root Cause Analysis (Failure Investigation):

Analyse and identify the root cause of the problem/defect and try to avoid it in the future.

TEST EFFECTIVENESS:
Test effectiveness can be improved by following established Test Design Techniques and by following the guidance given by the Testing Principles.

Testing Principles
Test Design Techniques

Testing Principles:

Testing can prove the Presence of Defects, but not their Absence.

Early Testing is necessary.
Testing should be started as early as possible, which can prevent fault migration and fault multiplication. Also, defects found early are easier, cheaper and quicker to correct.

Testing is Context Dependent.
Testing depends on the context of usage. (For example, aircraft or healthcare software needs more testing than finance or banking software.)

Exhaustive Testing is Impractical.
Testing with all possible sets of inputs under all possible circumstances is impractical.

Instead, we should prefer any of the below:

1. Risk-based testing
2. Testing based on the context of usage
3. Testing based on test objectives

Defect Clustering (80:20 rule).
80% of defects can be found in 20% of the software.

This principle suggests that most of the defects in software are concentrated in a few zones of it. During testing, based on the defect distribution, testing should be focused more on the zones which exhibit more defects, since chances are that many more defects will be found in those same zones.

Pesticide Paradox.
Test scripts should be regularly reviewed and revised.

If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this Pesticide Paradox, test scripts need to be regularly reviewed and revised, and new and different tests need to be written to find potentially more defects.

Absence-of-Errors Fallacy.
No software can be 100% defect-free. Software achieves quality not simply by having zero defects, but by meeting the purpose and reasonable expectations of the client.

Test Design Techniques:

Structure-based or White-box Technique
White Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. It can be done in five different ways, as described below:
1. Control Flow Analysis
2. Data Flow Analysis
3. Statement Coverage
4. Decision Coverage
5. Path Coverage

1. Control Flow Analysis:

It is the assessment of the flow of control through the program, detecting error conditions such as:

Logical errors
Syntax errors
Unreachable code
Missing links
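
As a small assumed illustration (not from the original), control flow analysis would flag the unreachable statement in this sketch:

    def get_status(code):
        # Both branches below return, so control never reaches the last line.
        if code == 0:
            return "ok"
        else:
            return "error"
        return "unknown"  # unreachable code: flagged by control flow analysis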

2. Data Flow Analysis:

Analysing the flow of data through the program execution, in terms of how it is referenced, used, modified, stored, destroyed etc., to find anomalies such as:

A variable being used in the program before it is initialized
A variable initialized but never used in the program
A mismatch in operations with respect to the declared data types
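
A small assumed sketch of the anomalies data flow analysis looks for (the function is a deliberately defective illustration, so it is defined but never called):

    def summarize(values):
        total = 0
        unused = 42          # anomaly: variable initialized but never used
        for v in values:
            total += v
        print(count)         # anomaly: 'count' is used before being defined
        return total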

3. Statement Coverage:
Statement Coverage is the assessment of the percentage of executable
statements that have been exercised by a test case suite.

4. Decision Coverage:
It is the assessment of the percentage of decision outcomes that have
been exercised by a test case suite.

5. Path Coverage:
It is the assessment of the number of paths (independent ways in which a program can be executed) that have been exercised by a test case suite.
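
To make the difference between these coverage measures concrete, here is a small assumed example (the absolute function and its tests are illustrative, not from the original):

    def absolute(x):
        result = x
        if x < 0:          # one decision with two outcomes (True/False)
            result = -x
        return result

    # absolute(-5) alone executes every statement: 100% statement coverage,
    # but only the True outcome of the decision, so 50% decision coverage.
    # Adding absolute(3) exercises the False outcome too: 100% decision
    # coverage, and both independent paths through this simple function.
    assert absolute(-5) == 5
    assert absolute(3) == 3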

Specification-based or Black-box Technique

Black Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. It is done in five different ways, as described below:
1. Equivalence Class Partitioning
2. Boundary Value Analysis
3. Decision Table Testing
4. State Transition Testing
5. Use Case Testing

1. Equivalence Class Partitioning:

Example: a requirement which accepts only an age between 18 and 60.
Valid equivalence class: age 18 to 60
Invalid equivalence classes: age less than 18, and age greater than 60
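
A minimal sketch of this partitioning (the validate_age function name and its implementation are assumptions for illustration):

    def validate_age(age):
        # Hypothetical rule from the example: accept ages 18 to 60 inclusive.
        return 18 <= age <= 60

    # One representative value per equivalence class is sufficient:
    assert validate_age(35) is True     # valid class: 18 to 60
    assert validate_age(10) is False    # invalid class: below 18
    assert validate_age(75) is False    # invalid class: above 60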

2. Boundary Value Analysis:

Example: a requirement which accepts only an age between 18 and 60.
Boundary values: 17, 18, 60 and 61 (18 and 60 are valid; 17 and 61 are invalid)
Optional inner boundary values: 19 and 59
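
Extending the same hypothetical validate_age sketch from above, boundary value tests exercise the edges of each partition:

    # Boundary values around the valid range 18..60:
    assert validate_age(17) is False    # just below the lower boundary
    assert validate_age(18) is True     # lower boundary itself
    assert validate_age(60) is True     # upper boundary itself
    assert validate_age(61) is False    # just above the upper boundary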

3. Decision Table Testing:

To test combinations of inputs which give different results based on the input, Decision Table Testing is preferred.
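
Example (an assumed illustration; the login scenario is a hypothetical stand-in, not from the original). Each column of the decision table is one rule, i.e. one combination of conditions and its expected action:

    Conditions          Rule 1    Rule 2    Rule 3    Rule 4
    Valid username?     Y         Y         N         N
    Valid password?     Y         N         Y         N
    Expected action     Log in    Error     Error     Error

A minimal code sketch encodes the table so that each rule becomes one test (login_outcome is a hypothetical stand-in for the system under test):

    def login_outcome(user_ok, pw_ok):
        # Hypothetical behaviour of the system under test.
        return "log in" if user_ok and pw_ok else "error"

    # Each rule: (valid username?, valid password?) -> expected action
    DECISION_TABLE = {
        (True,  True):  "log in",
        (True,  False): "error",
        (False, True):  "error",
        (False, False): "error",
    }

    # One test per rule/column of the table:
    for inputs, expected in DECISION_TABLE.items():
        assert login_outcome(*inputs) == expected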

4. State Transition Testing:

A State Transition Table shows which state a finite state machine will move to, based on the current state and other inputs.
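
Example (an assumed illustration; the media player states are a hypothetical stand-in, not from the original). The transition table is encoded as a mapping, and a test walks a sequence of events, failing on any undefined transition:

    # (current state, event) -> next state, for a hypothetical media player
    TRANSITIONS = {
        ("Stopped", "play"):  "Playing",
        ("Playing", "pause"): "Paused",
        ("Playing", "stop"):  "Stopped",
        ("Paused",  "play"):  "Playing",
        ("Paused",  "stop"):  "Stopped",
    }

    def run(events, state="Stopped"):
        for event in events:
            # A KeyError here means the test hit an invalid transition.
            state = TRANSITIONS[(state, event)]
        return state

    assert run(["play", "pause", "play", "stop"]) == "Stopped"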

5. Use Case Testing:

NOTE: a Use Case is a pictorial representation of the user requirements (from the starting point of the software to the end point).
Use Case Testing partitions the flow of a Use Case into smaller portions and evaluates whether the system behaves as expected.

Experience-based Technique
This sort of testing is done in the below-mentioned circumstances:
1. In addition to the planned testing.
2. When detailed documentation is not available.
3. Under time and budget constraints.
It is done using two methods:

1. Error Guessing
Guessing likely errors in the application using previous similar experience, and designing test scripts based on that.

2. Exploratory Testing
It is done when no documentation is available: exploring the software and coming up with the test scripts. It is mostly done during maintenance testing.

Fundamental Test Process:

The Fundamental Test Process is divided into five activities:

Test Planning and Control
Test Analysis and Design
Test Implementation and Execution
Evaluating Exit Criteria and Reporting
Test Closure

Test Planning and Control:

Test planning is the activity of defining the test objectives and the specification of test activities (what to test, when to start (entry criteria), how to test, and when to stop testing (exit criteria)) in order to meet the objectives and mission.
Test control is the ongoing activity of comparing actual progress against the plan and reporting the status, including deviations from the plan, in order to take the necessary actions to meet the objectives and mission.

Test Analysis and Design:

Test analysis is the activity of analysing the requirements (to gain understanding and to check the testability of the software).
Test design is the activity of identifying and prioritizing test conditions, identifying test cases, identifying the test data, and identifying the test environment setup, using the Test Basis (i.e. the requirements).

Test Implementation and Execution:

Test implementation is the activity where the test cases (high-level) are converted into test scripts (low-level), and where the test data and test environment identified during test design are prepared.
Test execution is the activity where the tests are executed, results are recorded (in the Test Log), defects are reported (in Test Incident Reports) and filed (if any), fixed defects are re-tested (using different test data), and regression testing is performed.

Evaluating Exit Criteria and Reporting:

It is the activity where test execution is assessed against the objectives defined during test planning. It has the following major tasks:

Checking test logs against the exit criteria specified during test planning.
Assessing whether more tests are needed, or whether the exit criteria should be changed.
Writing a Test Summary Report for the stakeholders.

Test Closure Activities:

This activity has the following major tasks:

Checking which planned deliverables (Test Log, Test Incident Report and Test Summary Report) have been delivered or missed.
Closing Test Incident Reports, or raising Change Records for any issue that remains open.
Handing over the testware (any item related to testing) to the support team.
Documenting lessons learnt for future reference.

Types of Testing
There are four types of testing:

Functional Testing (black-box testing, covered in the previous topics)
Non-Functional Testing
Structural Testing (white-box testing, covered in the previous topics)
Change-Related Testing

Non-Functional Testing:
It is performed at all test levels. The term Non-Functional Testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as Performance, Load, Stress, Usability, Maintainability, Reliability and Portability Testing, though it is not limited to these.

Change-Related Testing:

It is also called Maintenance Testing, i.e. testing live software. After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed; this is called Confirmation Testing. This process also includes Regression Testing, to discover any defects introduced or uncovered as a result of the fix.

When to start Testing Software?

Testing should be started as early as possible in the life cycle and should be done at each and every stage of the Software Development Life Cycle.
Rigorous testing of design documentation can help to reduce the risk of problems occurring during operation, and contributes to the quality of the software system.

Test Management
The Test Management process is divided into two types:

Test Progress Monitoring
Incident Reporting

Test Progress Monitoring:

A few points need to be considered during test progress monitoring; they are described below:

What percentage of test analysis and design is complete or yet to complete?
What percentage of test implementation/test execution is complete or yet to complete?
What is the break-up of test execution results (passed/failed/blocked)?
Defect severity matrix?
Defect arrival rate? (How many new defects are reported per day/hour)
Defect resolution rate? (How many defects are resolved per day/hour)
Defect age? (Time interval between defect arrival and resolution)
Defect density? (Number of issues found divided by the size of the program, in lines of code)
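
A minimal sketch of how the last two metrics might be computed (the sample defect records and program size are assumptions for illustration):

    from datetime import date

    # Hypothetical defect records: (arrival date, resolution date or None)
    defects = [
        (date(2024, 1, 1), date(2024, 1, 3)),
        (date(2024, 1, 2), date(2024, 1, 2)),
        (date(2024, 1, 2), None),               # still open
    ]
    lines_of_code = 12000

    resolved = [(a, r) for a, r in defects if r is not None]
    ages = [(r - a).days for a, r in resolved]
    average_defect_age = sum(ages) / len(ages)               # in days
    defect_density = len(defects) / (lines_of_code / 1000)   # per KLOC

    print(f"average defect age: {average_defect_age:.1f} days")
    print(f"defect density: {defect_density:.2f} defects per KLOC")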

Incident Reporting:
An Incident, in normal terms, is a bug, issue or defect, i.e. when the expected behaviour does not match the actual behaviour.

Reporting an incident is called Incident Reporting, and defects can be reported at any stage of the life cycle. Things to mention in the incident report are given below:

Summary
Version (build number and environment)
Test item (which part of the software)
Details of the incident (steps to reproduce the issue)
Severity (impact of the defect on the customer)
Priority (urgency to fix)
Expected and actual behaviour
Recommendations

Verification and Validation

    Verification                               Validation
    1. Whether we are doing things right?      1. Whether we have the right product?
    2. Checking throughout the process.        2. Checking only the final product.
    3. The tester does this activity.          3. The user does this activity; the
                                                  review covers not only the end
                                                  product but also the design
                                                  specification, etc.

Testing Levels/Phases

There are four major levels of testing, described below.

Unit/Component Testing:

The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect.
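
A minimal sketch of a unit test, using Python's built-in unittest module (the apply_discount function is a hypothetical unit under test, not from the original):

    import unittest

    def apply_discount(price, percent):
        # Hypothetical unit under test: apply a percentage discount.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_zero_discount_returns_price(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()

The unit is exercised in complete isolation from the rest of the (hypothetical) application, which is exactly the point of this level of testing.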

Integration Testing:
The individual components are combined with other components to make sure that the necessary communications, links and data sharing occur properly.

System Testing:
The system test phase begins once modules are integrated enough to perform tests in a whole-system environment.

Tests which are done during system testing include:

Negative testing
End-to-end testing
Intense (or thorough) testing
Functional and non-functional testing

Acceptance Testing:
There are two types of acceptance testing:
User Acceptance Testing:
Whether the software meets the requirements is evaluated by the client.
Operational Acceptance Testing:
Whether the accepted software is ready to be deployed is evaluated by the client.

How much Testing is enough?

Deciding how much testing is enough should take account of the level of risk, including technical, safety and business risks, and project risks such as time and budget.
Testing should provide sufficient information to the stakeholders to make informed decisions about the software being tested, for the handover to the customer.

Risks:
Risk is the chance of a negative event, or the chance of something going wrong.

Level of Risk = likelihood of the negative event happening x impact when the negative event really happens

Finding the risky areas in the software and listing their severity levels (as high, medium and low) helps to find out which parts of the software need to be given more importance during testing. This approach is called Risk-Based Testing.
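
A minimal sketch of risk-based prioritization under this definition (the risk areas and the 1-5 scores are assumptions for illustration):

    # Hypothetical risk items: (area, likelihood 1-5, impact 1-5)
    risks = [
        ("payment processing", 4, 5),
        ("report export",      2, 2),
        ("user login",         3, 4),
    ]

    # Level of risk = likelihood x impact; test the riskiest areas first.
    for area, likelihood, impact in sorted(
            risks, key=lambda r: r[1] * r[2], reverse=True):
        print(f"{area}: risk score {likelihood * impact}")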

When to stop Testing:

The criteria below are listed in order of priority.
1. Getting enough information from the software product under test, sharing it with the stakeholders, and letting them decide about the release.
2. When the residual risks are minimal, with no major pending issues. (This can be achieved by assessing risks at regular intervals.)
3. When a certain percentage of coverage is achieved in terms of code/requirements/risks, with no major pending issues.
4. When all of the planned test activities are completed, with no major pending issues.
5. When the predicted number of defects has been found, with no major pending issues. (This is not an exclusive criterion; it should be combined with any of those listed above.)
6. When the release date is reached.
7. When the budget is exhausted.

Software Quality Characteristics

There are a few things to consider while discussing software quality characteristics:

Functionality (also called Functional Testing)
The core purpose of the software, in terms of what functions/features/transactions/business processes it should achieve.

Reliability (Non-Functional Testing)
Measuring the level of service over a period of time.

Usability (Non-Functional Testing)
Whether the software functionality is user-friendly (ease of use/operation/learning).

Maintainability (Non-Functional Testing)
The ability of the software to undergo changes.

Portability (Non-Functional Testing)
The ability of the software to work under different environments.

Efficiency (Non-Functional Testing)
The ability of the software to optimize resource utilization (Load/Stress/Spike/Capacity Testing).

Psychology of Testing:
Creators are blind to their own mistakes, and sometimes they try to suppress or hide them. To test effectively, i.e. to overcome Author Bias, the evaluation should be done by someone other than the creator. This approach is called Independent Testing.

Communication Guidelines between the Tester and a Developer:

1. Report an issue against an item, not against a person.
2. Reports should be constructive and informative.
3. Reports should be polite and diplomatic in manner.
4. Understand the reactions of the developer, yet stay engaged in getting the fix done.
5. Insist on the fix while convincing the developer about the deviation.

Characteristics of a Good Test Professional:


1. Professional Pessimism/ Critical Attitude.
2. Good Communication (to convince the developer without hurting) and
Interpersonal skills.
3. Creative in Destruction.
4. Curious to find Defects.
5. Attention to Details.
6. Good knowledge of Customer Business (Domain).

Few Testing terms used frequently:

Smoke Testing:
Smoke testing is done as soon as a build of the application is deployed. The smoke test is the entry point for the entire test execution: only when the application passes the smoke test can further system testing or regression testing be carried out.

Sanity Testing:
Sanity testing is similar to smoke testing, but with some minor differences: in smoke testing only positive scenarios are validated, whereas in sanity testing both positive and negative scenarios are validated.

Ad-hoc Testing:
Testing carried out without any recognized test case design technique (a method used to determine test cases). Here the testing is driven by the tester's knowledge of the application, and the system is tested randomly, without any test cases, specifications or requirements.
Monkey Testing:
A monkey test is a unit test that runs with no specific test in mind. The monkey, in this case, is the producer of arbitrary input. For example, a monkey test can enter random strings into text boxes to ensure handling of all possible user input, or provide garbage files to check loading routines that have blind faith in their data.
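
A minimal sketch of the idea (parse_quantity is a hypothetical routine under test, assumed for illustration): random garbage is thrown at it, and the only assertion is that it always survives.

    import random
    import string

    def parse_quantity(text):
        # Hypothetical routine under test: parse a quantity, defaulting to 0.
        try:
            value = int(text.strip())
            return value if value >= 0 else 0
        except ValueError:
            return 0

    # Feed 1000 random strings and assert the routine never crashes:
    for _ in range(1000):
        garbage = "".join(random.choices(string.printable,
                                         k=random.randint(0, 20)))
        assert isinstance(parse_quantity(garbage), int)
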
Fault Masking:
A defect can be hidden by another defect, i.e. unless you find the base defect, you cannot find the actual defect.
Mutation Testing:
Known bugs are deliberately added to the code, and the testing process is run to verify whether those bugs are found.
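
A minimal sketch of the idea (both functions are assumed examples): a deliberate off-by-one mutant is introduced, and a test suite that checks the boundary "kills" the mutant by failing against it.

    def is_adult(age):
        return age >= 18           # original code

    def is_adult_mutant(age):
        return age > 18            # mutant: '>=' deliberately changed to '>'

    def suite_passes(fn):
        # A tiny test suite that checks the boundary value 18.
        return fn(18) is True and fn(17) is False

    assert suite_passes(is_adult)             # suite passes on the original
    assert not suite_passes(is_adult_mutant)  # suite detects (kills) the mutant
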
Developmental Testing & Maintenance Testing:
Testing during the development stage is Developmental Testing; testing after the release, i.e. of a live product, is Maintenance Testing.
User Manual Support Testing:
Verifying that the right help topics are properly displayed for the corresponding page.
Alpha Testing vs. Beta Testing:

    Alpha Testing                              Beta Testing
    The software is tested in the              The software is tested in the
    development environment, where             client's environment, at the
    the software is being developed.           client's place.

System Integration Testing:

Integrating different types of systems is called System Integration Testing. (For example, an online shopping system integrated with a banking transaction system.)
