An EFFECTIVE Tester
By Arunkumar Nehru
Software failures can lead to:
Loss of money or time
Injuries or death
A bad reputation
TEST EFFECTIVENESS:
Test Effectiveness can be improved by following established Test Design
Techniques and the guidance given by Test Principles.
Test Principles
Test Design Techniques
Testing Principles:
Defect Clustering.
This principle suggests that most of the defects in software are
concentrated in only a few zones of the software. During testing, based on
the defect distribution, testing should be focused more on the zones that
exhibit more defects, since the chances are that many more defects will be
found in those zones.
Pesticide Paradox.
Test scripts should be regularly reviewed and revised.
If the same tests are repeated over and over again, eventually the same
set of test cases will no longer find any new defects. To overcome this
Pesticide Paradox, test scripts need to be regularly reviewed and revised,
and new, different tests need to be written to find potentially more
defects.
Logical Errors
Syntax Errors
Unreachable Code
Missing Links
3. Statement Coverage:
Statement Coverage is the assessment of the percentage of executable
statements that have been exercised by a test case suite.
4. Decision Coverage:
It is the assessment of the percentage of decision outcomes that have
been exercised by a test case suite.
5. Path Coverage:
It is the assessment of the number of paths (independent ways in which
a program can be executed) that have been exercised by a test case
suite.
Example:
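The example itself is not present in the source, so here is a hypothetical sketch (function and test suites are illustrative) of how decision coverage can be counted for a function with a single two-outcome decision:

```python
# Hypothetical coverage sketch (not from the original text): a function
# with one decision, and two test suites that achieve different levels
# of decision coverage.

def grade(score):
    if score >= 50:      # one decision with two outcomes (True / False)
        return "pass"
    return "fail"

def decision_coverage(suite):
    """Percentage of the two decision outcomes exercised by the suite."""
    outcomes = {score >= 50 for score in suite}
    return len(outcomes) / 2 * 100

suite_a = [75]           # exercises only the True outcome
suite_b = [75, 30]       # exercises both outcomes

coverage_a = decision_coverage(suite_a)   # 50.0
coverage_b = decision_coverage(suite_b)   # 100.0
```

Suite B also achieves full statement coverage here, since both return statements are executed; Suite A never reaches the `return "fail"` statement.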
Experience-based Technique
This sort of testing is done in the circumstances mentioned below:
1. In addition to the planned testing.
2. When detailed documentation is not available.
3. Under time and budget constraints.
It is done using two methods:
1. Error Guessing
Guessing likely errors in the application using previous similar
experiences and designing a test script based on that.
2. Exploratory Testing
Done when no documentation is available: exploring the software
and coming up with the test scripts. It is mostly done during
maintenance testing.
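The error-guessing method above can be sketched as follows; the routine under test and the guessed inputs are purely illustrative, drawn from experience with similar input forms:

```python
# Hypothetical error-guessing sketch: the guessed inputs reflect
# failures commonly seen in similar applications (empty fields,
# negatives, non-numeric text, surrounding whitespace).

def parse_age(text):
    """Toy routine under test: parse a non-negative integer age."""
    value = int(text.strip())          # raises ValueError on non-numbers
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

guessed_inputs = ["", "  ", "-1", "abc", "0", " 42 "]

results = {}
for raw in guessed_inputs:
    try:
        results[raw] = parse_age(raw)
    except ValueError as exc:
        results[raw] = f"rejected: {exc}"
```

Each guessed input either parses cleanly or is rejected with an error; any unexpected crash on these inputs would be a defect worth reporting.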
Checking test logs against the exit criteria specified during test
planning.
Assessing whether more tests are needed or whether the specified exit
criteria should be changed.
Types of Testing
There are four types of testing:
Non-Functional Testing:
It is performed at all test levels. The term non-functional testing describes
the tests required to measure characteristics of systems and software that
can be quantified on a varying scale, such as performance, load, stress,
usability, maintainability, reliability and portability testing, though it is
not limited to these.
Test Management
The Test Management process is divided into two parts:
Incident Reporting:
An incident, in common terms, is a bug, issue or defect, i.e. a case
where the expected behaviour does not match the actual behaviour.
An incident report typically records:
Summary
Version (build number and environment)
Test item (which part of the software)
Details of the incident (steps to reproduce the issue)
Severity (impact of the defect on the customer)
Priority (urgency to fix)
Expected and actual behaviour
Recommendations
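The incident-report fields listed above can be captured as a simple data structure; this is a hypothetical sketch, and the field names are illustrative rather than taken from any specific defect-tracking tool:

```python
# Hypothetical sketch of an incident report as a data structure.
# All field names and example values are illustrative.

from dataclasses import dataclass

@dataclass
class IncidentReport:
    summary: str
    version: str             # build number and environment
    test_item: str           # which part of the software
    steps_to_reproduce: str  # details of the incident
    severity: str            # impact of the defect on the customer
    priority: str            # urgency to fix
    expected_behaviour: str
    actual_behaviour: str
    recommendations: str = ""

report = IncidentReport(
    summary="Login fails with valid credentials",
    version="build 2.1.3 / staging",
    test_item="Authentication module",
    steps_to_reproduce="1. Open login page 2. Enter valid user 3. Submit",
    severity="High",
    priority="Urgent",
    expected_behaviour="User is logged in",
    actual_behaviour="HTTP 500 error page",
)
```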
Validation
Integration Testing:
The individual components are combined with other components to make
sure that necessary communications, links and data sharing occur
properly.
System Testing:
The system test phase begins once modules are integrated enough to
perform tests in a whole system environment.
Negative testing
End-to-End testing
Intense testing (or) Thorough testing
Functional and Non-Functional testing
Acceptance Testing:
There are two types of acceptance testing:
User Acceptance Testing:
The client evaluates whether the software meets the
requirements.
Operational Acceptance Testing:
The client evaluates whether the accepted software is ready
to be deployed.
Risks:
Risk is the chance of a negative event, of something going wrong. The
areas of the software that carry more risk need to be given more
importance during testing. This approach is called Risk-Based Testing.
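One common way to apply risk-based testing is to rank areas of the software by risk, often estimated as likelihood times impact. The following is a hypothetical sketch; the areas, scales and scores are illustrative:

```python
# Hypothetical risk-based prioritization sketch: rank test areas by
# risk = likelihood x impact. All data below is illustrative.

areas = [
    {"name": "Payments",  "likelihood": 4, "impact": 5},
    {"name": "Reporting", "likelihood": 2, "impact": 2},
    {"name": "Login",     "likelihood": 3, "impact": 4},
]

for area in areas:
    area["risk"] = area["likelihood"] * area["impact"]

# Test the highest-risk areas first:
areas.sort(key=lambda a: a["risk"], reverse=True)
order = [a["name"] for a in areas]   # Payments (20), Login (12), Reporting (4)
```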
Psychology of Testing:
Creators are blind to their own mistakes, and sometimes they try to
suppress or hide them. To test effectively, i.e. to overcome author bias,
evaluation should be done by someone other than the creator. This
approach is called Independent Testing.
Sanity Testing:
Sanity testing is similar to smoke testing, but with a minor difference:
in smoke testing only positive scenarios are validated, whereas in sanity
testing both positive and negative scenarios are validated.
Ad-hoc testing:
Testing carried out without using any recognized test case design
technique (a method used to determine test cases). Here the testing is
done from the tester's knowledge of the application: the tester exercises
the system randomly, without any test cases, specifications or
requirements.
Monkey testing:
A monkey test is a unit test that runs with no specific test in mind. The
monkey in this case is the producer of arbitrary input. For example, a
monkey test can enter random strings into text boxes to exercise the
handling of all possible user input, or provide garbage files to check
loading routines that have blind faith in their data.
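The random-strings idea above can be sketched as follows; this is a hypothetical example, and the input handler is a toy function written for illustration:

```python
# Hypothetical monkey-test sketch: feed random strings to a toy input
# handler and check that no exception ever escapes it.

import random
import string

def handle_username(text):
    """Toy input handler: trim whitespace, reject empty/overlong names."""
    cleaned = text.strip()
    if not cleaned or len(cleaned) > 32:
        return None
    return cleaned

random.seed(0)  # fixed seed so the run is reproducible
for _ in range(1000):
    length = random.randint(0, 64)
    garbage = "".join(random.choice(string.printable) for _ in range(length))
    # The monkey test only checks that the handler never crashes and
    # always returns a sane value:
    result = handle_username(garbage)
    assert result is None or isinstance(result, str)
```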
Fault Masking:
A defect can be hidden by another defect, i.e. unless you find the base
defect, you cannot find the masked defect.
Mutation Testing:
Known bugs are deliberately added to the code, and the testing process
is then run to verify whether those bugs are found.
Developmental Testing & Maintenance Testing:
Testing during the development stage is Developmental Testing; testing
after the release, i.e. on a live product, is Maintenance Testing.
User Manual Support Testing:
Verifying that the right help topic for each corresponding page is
properly displayed.
Alpha Testing Vs. Beta Testing:
Alpha Testing: The software is tested in the development environment,
i.e. where the software is being developed.
Beta Testing: The software is tested in the client's environment, at the
client's place.