
Introduction to Software Testing Level 1


"Testing is a process of gathering information by making observations and comparing them to expectations." - Dale Emery
Introduction
In our day-to-day life, when we go out shopping for any product such as vegetables, clothes, or pens, we check it before purchasing it, for our satisfaction and to get the maximum benefit. For example, when we intend to buy a pen, we test it before actually purchasing it: does it write, do its features work, and so on. So, be it software, hardware, or any other product, testing turns out to be mandatory.
Software testing is the process of verifying and validating that a program performs correctly and has no bugs. It is the process of analyzing or operating software for the purpose of finding bugs, and it helps identify the defects, flaws, or errors in the application code that need to be fixed. Testing is not only about finding and fixing bugs in the code, but also about checking whether the program behaves according to the given specifications and testing strategies. There are various strategies, such as white box testing, black box testing, and gray box testing.
Software development goes through a chain of processes, starting with the requirement analysis phase and ending with the maintenance phase. Two of its most important phases are coding and testing. The testing phase matters because it verifies and validates that the software has been developed according to the requirements of the user. Software testing can be carried out using two methods:
Manual Scripted Testing: This is considered one of the oldest approaches, in which test cases are designed and reviewed by the team before being executed.
Automated Testing: This applies automation to testing, and it can cover various parts of the software process such as test case management, test case execution, defect management, and reporting of bugs/defects.
This manual testing tutorial will help you to understand the basics of software testing in general and
manual testing in particular. It is written as a manual testing tutorial for beginners, but it will also help advanced learners by clearing up certain concepts they may not yet be comfortable with.
What is Manual Testing?
Manual testing is the method used to check software for defects manually. In this type of testing, the tester takes on the role of an end user. All features of the software are tested to see whether its behavior exactly matches the expectations of the customer. Normally, the tester works from a test plan, and in addition to the test plan, written test cases are used to implement it.
Manual Testing Tutorial
After the introduction to software testing, we now turn to the tutorial itself, which deals with all the basics of manual testing.
Stages of Manual Testing
The entire process of manual testing goes through four phases. The first phase is unit testing, in which the developer tests the units of code he or she has written; in some cases the code may also be tested by a peer. Integration testing is the second phase, carried out when chunks of code are integrated to form a bigger block; either black box or white box testing may be used here. The next phase is system testing, where the software is tested against all possibilities to rule out any kind of abnormality in the system; normally the black box technique is used in this phase. User acceptance testing is the last stage of manual testing, in which the software is tested keeping the end user in mind. Two types of acceptance testing are used, namely alpha testing and beta testing.
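To make the unit testing stage concrete, here is a minimal sketch using Python's built-in unittest framework. The apply_discount function and its behavior are hypothetical, invented only to show how a developer might test a single unit of code.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: applies a percentage discount to a price."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # Expected result compared against the actual result returned by the unit.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```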


Need for Software Testing Strategies


The type of software testing to use depends on the type of defect being targeted. For example:
Functional testing is done to detect functional defects in a system.
Performance testing is performed to detect defects that appear when the system does not perform according to the specifications.
Usability testing is done to detect usability defects in the system.
Security testing is done to detect bugs/defects in the security of the system.
Software Testing Life Cycle
Just as software goes through a software development life cycle, it also goes through a software testing life cycle. Software testing interview questions and answers often revolve around it. The different phases of the software testing life cycle are:


Requirement Phase
Test Planning Phase
Test Analysis Phase
Test Design Phase
Test Verification and Construction Phase
Test Execution Phase
Result Analysis Phase
Bug Tracking and Reporting Phase
Rework Phase
Final Test and Implementation Phase

Image 1: Software Testing Life Cycle


Introduction to Software Testing Life Cycle


In every organization, testing is an important phase in the development of a software product. However, the way it is carried out differs from one organization to another. It is advisable to carry out the testing process from the initial stages of the Software Development Life Cycle (SDLC) to avoid any complications.
Software Testing Life Cycle Phases
Software testing has its own life cycle that runs alongside every stage of the SDLC. The software testing life cycle diagram can help one understand its various phases. They are:
Requirement Stage
Test Planning
Test Analysis
Test Design
Test Verification and Construction
Test Execution
Result Analysis
Bug Tracking
Reporting and Rework
Final Testing and Implementation
Post Implementation


Requirement Stage
This is the initial stage of the software testing life cycle. In this phase the developers take part in analyzing the requirements for designing the product. The involvement of software testers is also necessary here, as they can think from the users' point of view in ways the developers may not. Thus a team of developers, testers and users can be formed to analyze the requirements of the product. Formal meetings of the team can be held to document the requirements, which can then serve as the software requirements specification, or SRS.
Test Planning
Test planning means preparing a plan well in advance to reduce later risks. A well-designed test plan document plays an important role in achieving a process-oriented approach. Once the requirements of the project are confirmed, a test plan is documented. The test plan structure is as follows:
Introduction: This describes the objective of the test plan.
Test Items: The items that are required to prepare this document will be listed here such as SRS, project
plan.
Features to be tested: This describes the coverage area of the test plan, that is, the list of features to be
tested; that are based on the implicit and explicit requirements from the customer.
Features not to be tested: The features that can be skipped during the testing phase are listed here. Features that are out of the scope of testing, such as incomplete modules or low-severity items (for example, GUI features that don't hamper the process), can be included in this list.
Approach: This is the test strategy, which should be appropriate to the level of the plan and consistent with the higher and lower levels of the plan.
Item pass/fail criteria: Related to show-stopper issues. The criteria used have to explain when a test item is considered to have passed or failed.
Suspension criteria and resumption requirements: The suspension criteria specifies the criteria that is to be
used to suspend all or a portion of the testing activities, whereas resumption criteria specifies when testing
can resume with the suspended portion.

Test deliverable: This includes a list of documents, reports, charts that are required to be presented to the
stakeholders on a regular basis during the testing process and after its completion.
Testing tasks: This phase lists the testing tasks that need to be performed. This includes conducting the
tests, evaluating the results and documenting them based on the test plan designed. This also helps users
and testers to avoid incomplete functions and prevent waste of resources.
Environmental needs: The special requirements of the test plan depending on the environment in which
the application has to be designed are listed here.
Responsibilities: This phase assigns responsibilities to people who can be held responsible in case of a risk.
Staffing and training needs: Training on the application/system and on the testing tools to be used needs to
be explained to the staff members who are responsible for the application.
Risks and contingencies: This identifies the probable risks and events that can occur, and what can be done in such situations.
Approval: This decides who can approve the process as complete and allow the project to proceed to the
next level that depends on the level of the plan.
Test Analysis
Once the test plan documentation is done, the next stage is to analyze what types of software testing should be
carried out at the various stages of SDLC.
Test Design
Test design is done based on the requirements of the project documented in the SRS. This phase decides whether manual or automated testing is to be done. In automation testing, the different paths to be tested are identified first, and scripts are written as required. An end-to-end checklist that covers all the features of the project is necessary in the test design process.
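As a rough illustration of identifying paths during test design, the sketch below uses Python's unittest with a hypothetical classify_age function and writes one automated test per path through the code.

```python
import unittest

def classify_age(age):
    """Hypothetical function whose code paths the test design must cover."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    return "adult"

class ClassifyAgePathTests(unittest.TestCase):
    # One automated test per identified path through the code.
    def test_negative_age_path(self):
        with self.assertRaises(ValueError):
            classify_age(-1)

    def test_minor_path(self):
        self.assertEqual(classify_age(10), "minor")

    def test_adult_path(self):
        self.assertEqual(classify_age(30), "adult")

if __name__ == "__main__":
    unittest.main()
```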
Test Verification and Construction
In this phase, the test plan, test design and automated test script are completed. Stress and performance testing
plans are also completed at this stage. When the development team is done with a unit of code, the testing team
is required to help them in testing that unit and report any bug in the product, if found. Integration testing and
bug reporting is done in this phase of software testing life cycle.
Test Execution
Planning and execution of the various test cases is done in this phase. Once unit testing is completed, functional testing of the application is carried out. At first, top-level testing is done to find top-level failures, and any bugs are reported to the development team immediately so that the required workaround can be obtained. Test reports have to be documented properly and the bugs have to be reported to the development team.
Result Analysis
After a test case has been executed, the testing team compares the expected values with the actual values and declares the result as pass or fail.
Bug Tracking
This is one of the important stages, as the Defect Profile Document (DPD) has to be updated to let the developers know about the defect. The Defect Profile Document contains the following fields (an illustrative sketch of such a record follows the list):
1. Defect Id: Unique identification of the Defect.
2. Test Case Id: Test case identification for that defect.
3. Description: Detailed description of the bug.
4. Summary: This field contains some keyword information about the bug, which can help in minimizing the
number of records to be searched.
5. Defect Submitted By: Name of the tester who detected/reported the bug.
6. Date of Submission: Date at which the bug was detected and reported.
7. Build No.: The build of the application in which the bug was detected.
8. Version No.: The version information of the software application in which the bug was detected and fixed.
9. Assigned To: Name of the developer who is supposed to fix the bug.
10. Severity: Degree of severity of the defect.
11. Priority: Priority of fixing the bug.
12. Status: This field displays the current status of the bug.
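As a small illustration only, the fields above could be captured in code roughly as follows; the record shape and all values are hypothetical, and real defect-tracking tools use their own schemas.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DefectRecord:
    """Illustrative shape of one Defect Profile Document entry; field names mirror the list above."""
    defect_id: str
    test_case_id: str
    description: str
    summary: str
    submitted_by: str
    date_of_submission: date
    build_no: str
    version_no: str
    assigned_to: str
    severity: str          # e.g. "Critical", "Major", "Minor"
    priority: str          # e.g. "P1", "P2", "P3"
    status: str = "New"    # bug life cycle status

bug = DefectRecord(
    defect_id="DEF-1042",
    test_case_id="TC-207",
    description="Login button stays disabled after valid credentials are entered.",
    summary="login button disabled",
    submitted_by="A. Tester",
    date_of_submission=date(2024, 1, 15),
    build_no="build-58",
    version_no="2.3.1",
    assigned_to="B. Developer",
    severity="Major",
    priority="P2",
)
print(bug.status)  # "New" until the developer picks it up
```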
Reporting and Rework
Testing is an iterative process. The bug that is reported and fixed by the development team, has to undergo the
testing process again to assure that the bug found has been resolved. Regression testing has to be done. Once the
Quality Analyst assures that the product is ready, the software is released for production. Before release, the
software has to undergo one more round of top-level testing. Thus testing is an ongoing process.
Final Testing and Implementation
This phase focuses on the remaining levels of testing, such as acceptance, load, stress, performance and recovery
testing. The application needs to be verified under specified conditions with respect to the SRS. Various
documents are updated and different matrices for testing are completed at this stage of the software testing life
cycle.
Post Implementation
Once the test results are evaluated, the errors that occurred during the various levels of the software testing life cycle are recorded. Creating plans for improvement and enhancement is an ongoing process; it helps prevent similar problems from occurring in future projects. In short, planning for the improvement of the testing process for future applications is done in this phase.

Software Testing Strategy


There are three broad software testing strategies, under which all software testing activities are carried out. They are:
White Box Testing Strategy: Testing of the internal structure of the software is known as white box testing.


Black Box Testing Strategy: This testing strategy is used to test different functionalities of the software,
which is being developed.
Gray Box Testing Strategy: The software is tested to find defects of any kinds, whether in code or in
structure.
There are other types of software testing, which are used to test a product to ensure that the software meets
requirements of the end-user. They include:
Functional Testing
Smoke Testing
Usability Testing
Validation Testing
Compatibility Testing
Sanity Testing
Exploratory Testing
Security Testing
Regression Testing
Recovery Testing
Performance Testing (This includes 2 sub-types - Load Testing and Stress Testing)

Software Testing Techniques


Software testing methodologies are divided into static testing techniques and dynamic testing techniques.
Software review and static analysis by using tools are methods, which come under static testing techniques.
Specification based testing techniques, structure based testing techniques and experience based testing
techniques are all included under dynamic testing technique.


Example: Equivalence partitioning is one of the important strategies used in specification-based testing techniques.
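A minimal sketch of equivalence partitioning, assuming a hypothetical shipping_fee specification: each equivalence class (invalid low, small parcel, large parcel, invalid high) is covered by one representative value.

```python
import unittest

def shipping_fee(weight_kg):
    """Hypothetical specification: 0 < weight <= 5 kg costs 5; 5 < weight <= 20 kg costs 9;
    any other weight is rejected."""
    if weight_kg <= 0 or weight_kg > 20:
        raise ValueError("unsupported weight")
    return 5 if weight_kg <= 5 else 9

class ShippingFeePartitionTests(unittest.TestCase):
    # One representative value per equivalence class is enough to cover that class.
    def test_invalid_low_partition(self):
        with self.assertRaises(ValueError):
            shipping_fee(-2)

    def test_small_parcel_partition(self):
        self.assertEqual(shipping_fee(3), 5)

    def test_large_parcel_partition(self):
        self.assertEqual(shipping_fee(12), 9)

    def test_invalid_high_partition(self):
        with self.assertRaises(ValueError):
            shipping_fee(25)

if __name__ == "__main__":
    unittest.main()
```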
Bug Life Cycle
The aim of the entire software testing activity is to find defects in the software before it is released to the end user. A bug life cycle starts after a tester logs a bug.
Phases in the bug life cycle are:


New
Open
Assign
Test
Deferred
Rejected
Duplicate
Verified
Reopened
Closed

Image 2: Bug Life Cycle


The duration or time span between the first time the bug is found (status: 'New') and the point at which it is closed successfully (status: 'Closed'), rejected, postponed or deferred is called the bug (or error) life cycle.
From the first time a bug is detected until the point when it is fixed and closed, it is assigned various statuses, such as New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and Closed.
There are seven different life cycles that a bug can pass through:
Cycle I
A tester finds a bug and reports it to the Test Lead.
The test lead verifies if the bug is valid or not.
Test lead finds that the bug is not valid and the bug is 'Rejected'.
Cycle II
A tester finds a bug and reports it to the Test Lead.
The test lead verifies if the bug is valid or not.
The bug is verified and reported to the development team with status as 'New'.
The development leader and team verify if it is a valid bug. The bug is invalid and is marked with a status of
'Pending Reject' before passing it back to the testing team.
After getting a satisfactory reply from the development side, the test leader marks the bug as 'Rejected'.
Cycle III
A tester finds a bug and reports it to the Test Lead.
The test lead verifies if the bug is valid or not.
The bug is verified and reported to the development team with status as 'New'.


The development leader and team verify if it is a valid bug. The bug is valid and the development leader
assigns a developer to it, marking the status as 'Assigned'.
The developer solves the problem and marks the bug as 'Fixed' and passes it back to the Development
leader.
The development leader changes the status of the bug to 'Pending Retest' and passes it on to the testing
team for retest.
The test leader changes the status of the bug to 'Retest' and passes it to a tester for retest.
The tester retests the bug and if it is working fine, the tester closes the bug and marks it as 'Closed'.
Cycle IV
A tester finds a bug and reports it to the Test Lead.
The test lead verifies if the bug is valid or not.
The bug is verified and reported to the development team with status as 'New'.
The development leader and team verify if it is a valid bug. If the bug is valid, the development leader
assigns a developer for it, marking the status as 'Assigned'.
The developer solves the problem and marks the bug as 'Fixed' and passes it back to the Development
leader.
The development leader changes the status of the bug to 'Pending Retest' and passes it on to the testing
team for retest.
The test leader changes the status of the bug to 'Retest' and passes it to a tester for retest.
The tester retests the bug and the same problem persists, so the tester after confirmation from test leader
reopens the bug and marks it with a 'Reopen' status. And then, the bug is passed back to the development
team for fixing.
Cycle V
A tester finds a bug and reports it to the Test Lead.
The test lead verifies if the bug is valid or not.
The bug is verified and reported to the development team with status as 'New'.

The developer tries to verify the bug but fails to replicate the scenario as it was at the time of testing, and asks for help from the testing team.
The tester also fails to reproduce the scenario in which the bug was found. Finally, the developer rejects the bug, marking it as 'Rejected'.
Cycle VI
After confirmation that certain data or functionality is unavailable, the fixing and retesting of the bug is postponed for an indefinite time and it is marked as 'Postponed'.
Cycle VII
If the bug is not of immediate importance and can be postponed, it is given the status 'Deferred'.
This was about the various life cycles that a bug goes through in software testing. And in the ways mentioned
above, any bug that is found ends up with a status of Closed, Rejected, Deferred or Postponed.

Software Testing Models



There are different software testing models from which the software testing team can choose. Each of these models has different methods, as they are based on different principles. A number of factors are taken into consideration before a particular model is chosen. The different models in use are:
Waterfall Model in Testing
Validation and Verification Model
Spiral Model
Rational Unified Process (RUP) Model
Agile Model
Rapid Application Development (RAD) Model
What is a Test Case?
Simply put, a test case is a scenario made up of a sequence of steps and conditions or variables, where test inputs
are provided and the program is run using those inputs, to see how it performs. An expected result is outlined
and the actual result is compared to it. Certain working conditions are also present in the test case, to see how
the program handles the conditions.
Every requirement or objective that the program is expected to achieve, needs at least one test case. Realistically,
it definitely takes more than one test case to determine the true functionality of the application being tested. The
mechanism used to judge the result of the test case, i.e. whether the program has failed or passed the test, is
called a test oracle.
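In its simplest form, a test oracle can be thought of as the rule that compares the expected result with the actual result; the tiny Python sketch below is illustrative only.

```python
# Illustrative only: the oracle is the rule that decides pass or fail
# by comparing the expected result with the actual result.
def oracle(expected, actual):
    return "Pass" if expected == actual else "Fail"

print(oracle(expected=180.0, actual=180.0))  # Pass
print(oracle(expected=180.0, actual=175.5))  # Fail
```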
Test cases, at root level, are used to measure how a program handles errors or tricky situations such as if one
input is incorrect or if both inputs are incorrect. They are also expected to expose hidden logical errors in the
program's code that have gone undetected.
Typical Structure of a Test case


A formal written test case can be divided into three main parts:
Information
Information consists of general information about the test case such as a case identifier, case creator info, test
case version, formal name of the test case, purpose or brief description of the test case and test case
dependencies. It should also include specific hardware and software requirements (if any) and setup or
configuration requirements.
Activities
This part consists of the actual test case activities such as the environment that should exist during testing,
activities to be done at the initialization of the test, activities to be done after test case is performed, step-by-step
actions to be done while testing and the input data that is to be supplied for testing.
Results
Results are the outcomes of a performed test case. Result data consists of the expected results, which are the criteria the program must meet to pass the test, and the actual recorded results.
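For illustration, a single test case with the three parts described above might be captured as a simple Python dictionary; the field names and values are hypothetical, and real teams typically use test-management tools or spreadsheets instead.

```python
# Illustrative only: one test case captured as a dictionary with the three parts
# described above. Field names and values are hypothetical.
test_case = {
    "information": {
        "id": "TC-101",
        "name": "Login with valid credentials",
        "created_by": "A. Tester",
        "version": "1.0",
        "purpose": "Verify that a registered user can log in",
        "requirements": "Web browser; a test account is assumed to exist",
    },
    "activities": {
        "preconditions": ["Application is deployed", "Test account exists"],
        "steps": [
            "Open the login page",
            "Enter the user name and password",
            "Click the Login button",
        ],
        "input_data": {"username": "demo_user", "password": "demo_pass"},
    },
    "results": {
        "expected": "User lands on the home page",
        "actual": "",   # recorded during execution
        "verdict": "",  # Pass / Fail, decided against the expected result
    },
}
print(test_case["information"]["id"])
```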

Test Case Format


Two sample formats for writing test cases are:

Detailed Format

Test Case Id: Serial number assigned to the test case
Test Purpose: Brief idea about the case
Prerequisites: Conditions that should be fulfilled before the test is performed
Created By: Name of the test creator
Environment: Software or hardware in which the test case is executed
Procedure: Steps to be performed in the test
Test Data: Inputs, variables and data
Expected Result: What the program should do
Actual Result: What is actually done
Verdict (Pass/Fail): Status of the test
Comments: Notes on the procedure

Simple Format

Step No.: Serial number of the step
Step or Activity: Detailed operation or procedure
Criteria for Success: Expected result
Status: Whether the code passed the test or not

Designing test cases can be time-consuming in a testing schedule, but they are worth the time spent because they can prevent unnecessary retesting or debugging, or at least lower the rate of such operations. Organizations can adopt the test case approach in their own context and according to their own perspectives: some follow a general approach while others opt for a more detailed and complex one. It is important to decide between the two extremes and settle on what works best for you.

Software Testing Types in detail


Black Box Testing: It explains the process of giving the input to the system and checking the output, without
considering how the system generates the output. It is also known as Behavioral Testing.
Functional Testing: The software is tested for the functional requirements. This checks whether the application is
behaving according to the specification.
Performance Testing: This testing checks whether the system performs properly according to the user's requirements. Performance testing relies on load and stress testing, which apply internal or external load to the system.
Load Testing: In this type of performance testing, the load on the system is increased towards its specified limits in order to check how the system performs as higher loads are applied.
Stress Testing: In this type of performance testing, the system is tested beyond its normal expectations or operational capacity.
Usability Testing: This is also known as 'Testing for User Friendliness'. It checks the ease of use of an application.

Regression Testing: Regression testing is one of the most important types of testing, which checks whether a small change in any component of the application affects the unchanged components. This is done by re-executing the test cases that were run on previous versions of the application.
Smoke Testing: It is used to check the testability of the application, and is also called 'Build Verification Testing or
Link Testing'. That means, it checks whether the application is ready for further testing and working, without
dealing with the finer details.

Sanity Testing: Sanity testing is a quick check that the basic behavior of the system is rational after a change or a new build. It is also called narrow regression testing.
Parallel Testing: Parallel testing is done by comparing results from two different systems like old vs new or
manual vs automated.
Recovery Testing: Recovery testing is very necessary to check how fast the system is able to recover against any
hardware failure, catastrophic problems or any type of system crash.
Installation Testing: This type of software testing identifies the ways in which installation procedure leads to
incorrect results.
Compatibility Testing: Compatibility testing determines if an application under supported configurations
performs as expected, with various combinations of hardware and software packages.
Configuration Testing: This testing is done to test for compatibility issues. It determines minimal and optimal
configuration of hardware and software, and determines the effect of adding or modifying resources such as
memory, disk drives, and CPU.
Compliance Testing: This checks whether the system was developed in accordance with standards, procedures,
and guidelines.
Error-Handling Testing: This determines the ability of the system to properly process erroneous transactions.
Manual-Support Testing: This type of software testing checks the interface between people and the application system.
Inter-Systems Testing: This method tests the interfaces between two or more application systems.

Exploratory Testing: Exploratory testing is similar to ad-hoc testing, and is performed to explore the software
features.
Volume Testing: This testing is done when a huge amount of data is processed through the application.
Scenario Testing: Scenario testing provides a more realistic and meaningful combination of functions, rather than
artificial combinations that are obtained through domain or combinatorial test design.
User Interface Testing: This type of testing is performed to check, how user-friendly the application is. The user
should be able to use the application, without any assistance by the system personnel.
System Testing: This testing is conducted on a complete, integrated system to evaluate the system's compliance with the specified requirements. It checks whether the system meets its functional and non-functional requirements, and it is also intended to test beyond the bounds defined in the software/hardware requirement specifications.
User Acceptance Testing: Acceptance testing is performed to verify that the product is acceptable to the
customer and if it's fulfilling the specified requirements of that customer. This testing includes Alpha and Beta
testing.
Alpha Testing: Alpha testing is performed at the developer's site by the customer in a closed environment.
This is done after the system testing.
Beta Testing: This is done at the customer's site by the customer in the open environment. The presence of
the developer, while performing these tests, is not mandatory. This is considered to be the last step in the
software development life cycle as the product is almost ready.
White Box Testing: It is the process of giving the input to the system and checking, how the system processes the
input to generate the output. It is mandatory for a tester to have the knowledge of the source code.

Unit Testing: Unit testing is done at the developer's site to check whether a particular piece or unit of code is working fine. It tests the individual units that make up the program.
Static and Dynamic Analysis: In static analysis, it is required to go through the code in order to find out any
possible defect in the code. Whereas, in dynamic analysis, the code is executed and analyzed for the output.
Statement Coverage: It assures that the code is executed in such a way that every statement of the application is
executed at least once.
Decision Coverage: This ensures that every decision point in the application is executed at least once with a true outcome and at least once with a false outcome.
Condition Coverage: In this type of software testing, each individual condition within a decision is evaluated as both true and false at least once.
Path Coverage: Each and every path within the code is executed at least once to get a full path coverage, which is
one of the important parts of the white box testing.
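The following sketch, built around a hypothetical can_withdraw function, shows how a handful of test inputs can be chosen to satisfy the statement, decision and condition coverage described above.

```python
def can_withdraw(balance, amount, is_frozen):
    """Hypothetical function used to illustrate the coverage levels above."""
    if is_frozen or amount > balance:   # one decision made up of two conditions
        return False
    return True

# Statement coverage: every statement runs at least once across the calls below.
# Decision coverage: the 'if' evaluates to both False (first call) and True (other calls).
# Condition coverage: 'is_frozen' and 'amount > balance' are each True and False at least once.
assert can_withdraw(100, 50, False) is True    # both conditions False
assert can_withdraw(100, 200, False) is False  # 'amount > balance' is True
assert can_withdraw(100, 50, True) is False    # 'is_frozen' is True
print("coverage example passed")
```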
Integration Testing: Integration testing is performed when various modules are integrated with each other to form a sub-system or a system. It mostly focuses on the design and construction of the software architecture. It is further classified into Bottom-Up Integration and Top-Down Integration testing.
Bottom-Up Integration Testing: Here the lowest level components are tested first and then the testing of
higher level components is done using 'Drivers'. The entire process is repeated till the time all the higher
level components are tested.


Top-Down Integration Testing: This is the opposite of the bottom-up approach: the top-level modules are tested first, and the lower branches of modules are tested step by step using 'Stubs', until the lowest-level modules are reached.
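A minimal top-down sketch of a stub: the high-level OrderProcessor is tested while the lower-level payment module it depends on is replaced by a stand-in. All class and function names here are hypothetical.

```python
# Illustrative only: the high-level OrderProcessor is tested first, while the
# lower-level payment module it depends on is replaced by a stub.
class PaymentServiceStub:
    """Stands in for the real, not-yet-integrated payment module."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class OrderProcessor:
    def __init__(self, payment_service):
        self.payment_service = payment_service

    def place_order(self, amount):
        result = self.payment_service.charge(amount)
        return result["status"] == "approved"

def test_place_order_with_stubbed_payment():
    processor = OrderProcessor(PaymentServiceStub())
    assert processor.place_order(49.99) is True

if __name__ == "__main__":
    test_place_order_with_stubbed_payment()
    print("top-down integration check with stub passed")
```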
Security Testing: This testing confirms how well a system protects itself against unauthorized internal or external access and against willful damage to the code or data. Security testing assures that the program can be accessed by authorized personnel only.
Mutation Testing: In mutation testing, small deliberate changes (mutations) are made to the application code and the existing tests are run again; if the tests still pass, they are not strong enough to detect the change.

Checklist for a Software Tester


We will try to set a baseline that should help any software tester in his or her day-to-day activities. The following checklists are defined in the most generic form and do not claim to cover every process you are required to go through during your work. Some processes may be missing from the lists, and the lists may also contain processes which you do not need to follow in your form of work.
First Things First

Check the scripts assigned to you: This is the first and foremost item in the list. There is no single rule for assigning scripts to the testers who will execute them, but you may come across practices where you are assigned a script based on your workload for the day, or on your skill in understanding and executing it in the least possible time.
Check the status/comments of the defect in the test reporting tool: Once you uncover a bug, it is very important to keep track of its status, as you will have to retest the bug once it is fixed by a developer. Most of the time, the general practice is to confirm whether a fix to a bug has been successful, as this also ensures that the tester can proceed with other tests involving the deeper side of that particular functionality. Sometimes it also addresses issues related to understanding the functionality of the system; for example, a tester may register a defect which is not an actual bug as per the programming/business logic. In that case, a comment from the developer might help the tester understand the mistake.
Checks while Executing Scripts
Update the test data sheet with all values which are required such as user name, functionality, test code,
etc.
Use naming conventions defined as testing standards to define a bug appropriately.
Take screen prints for the script executed using naming conventions and provide test data that you used
for the testing. The screen prints will help other testers and developers to understand how the test was
executed and it will also serve as a proof for you. If possible, try to explain the procedure you followed,
choice of data and your understanding, etc.
If your team is maintaining any type of tracking sheet, do not forget to update all the tracking sheets for
the bug, its status, time and date found, severity, etc.
If you are using a test reporting tool, do not forget to execute the script in the tool. Many test reporting tools require scripts to be executed in order to initiate the life cycle of a bug. For example, TestDirector needs a script to be executed up to the step where the test script failed; the test steps before the failed step are declared as 'passed'.
Update the tracking sheets with current status, status in reporting tools, etc., if it is required to be updated
after you execute the script in the reporting tool.

Check if you have executed all the scripts properly and updated the test reporting tool.
After you complete your day's work, it is better to do a peer-to-peer review. This step is very important and
often helps in finding out missing steps/processes.
Checks while Logging Defects
First of all, confirm with your test lead if the defect is valid.
Follow the appropriate naming conventions while logging defects.
Before submitting the defect, get it reviewed by Work Lead/Team Lead.
Give appropriate description and names in the defect screen prints as per naming conventions.
After submitting defects, attach the screen prints for the defect on Test Reporting Tool.
Note down the defect number/unique identifier and update the test tracking sheet with appropriate
information.
Maintain a defect log, defect tracking sheet, screen prints dump folder etc., for a backup.

Checks for Blocking and Unblocking Scripts


Blocking or unblocking of a script relates to a bug which affects the execution of that script. For example, if there is a bug on the login screen which prevents anyone from reaching the account screen after entering a valid username and password and pressing the 'OK' button, there is no way you can execute any test script that requires the account screen which comes after the login screen.
Confirm with your test lead/work lead if the scripts are really blocked due to an existing bug.
Block scripts with an active defect (Defect status: New/Assigned/Fixed/Reopen).
Update the current script/defect in the test reporting tool and tracking sheets with the defect
number/unique identifier, which is blocking the execution of the script or testing of the defect.
If a defect is retested successfully, then unblock all scripts/defects blocked by it.

At the end of the day, send an update mail to your Team Lead/Work Lead which should include the following:
Scripts executed (Number)
Defects raised/closed (Number)
If any comments are added on defects
Issues/queries if any
Steps for Software Testing
Along with finding defects in the software, software testing aims to verify and validate the software, so that it complies with the business and technical requirements, works exactly as it is expected to, and behaves consistently whenever the same conditions exist. Although software testing can be carried out at any point in the software development life cycle, it is normally carried out after the development stage has come to an end. While the software development process is underway, it is the developers who carry out the testing, commonly referred to as white box testing. After the software has been handed over to the testing team, black box testing starts. This brings us to the essential steps of software testing. Let's find out what they are.
Note the Scope of Testing
Although software testing aims to find defects in the software, it is important to note that testing cannot assure that the software will work perfectly under all circumstances; it can only assure that the software works correctly under certain predetermined conditions. It is in the scope phase that 'what the software is supposed to do' is decided. It is important that the testing team understands the aim of the software, as this helps in understanding the real scenarios in which the software will be used. The documents created before the development process started should be analyzed and the business rules understood. If any variance is found in the documents, it should be raised in the meeting with the development team. Along with all the other tasks, there is an important factor which should not be forgotten while deciding the scope of testing: the point at which software testing should stop. Otherwise, there is a chance that the
testing process will go into a never-ending loop.


Decide Testing Approach
Once the scope has been decided upon, the next stage of software testing is to decide the testing approach that is going to be used. The methods and tools needed for the process are decided upon. At this stage, it is important to take the client's assumptions and the various dependencies and limitations into consideration.
Decide Testing Tasks
Now that the testing approach has been decided upon, the next step is to decide the tasks that have to be completed in each testing phase. The documents to be delivered to the internal client (development team and stakeholders) and to the external client (end user) have to be decided upon. Sharing these documents with the internal and external clients keeps them in the loop about progress.
Estimate Time and Budget
The next step is to estimate the time and budget necessary for the entire software testing life cycle as well as for each phase of the cycle. If the time or budget starts to overshoot, measures can be taken to curb it and bring it back on track.
Identify the Testing Phases
Now starts the actual task of testing the software. Normally, the start and end dates of each testing phase are identified, along with the phases that may overlap. It is in this phase that different software testing methodologies are used to test the software. Each methodology has a different aim, which helps in testing the maximum number of aspects of the software. The exit criteria for each phase are identified as well, so that the testing process can be stopped once they have been met. When bugs are identified in the software, they are reported to the Testing Manager. The manager creates a report with the conditions in which each bug was found and forwards it to the development team, who study the bugs. They may accept some of the bugs and reject others, after which they normally give
the time necessary for fixing the bugs. Once the bugs are fixed, the software is sent back to the testing team for
further testing.
Decide Testing Environment
The different hardware and software requirements necessary for testing the software should be identified in advance. Normally, a wide range of test environments is decided upon. For example, if the software is intended for use on the internet, it is tested on all the prominent browsers. There are times when the software works perfectly fine on one browser but shows errors when it is run on another.
Decide Retesting Strategy
If there is a problem with the software, it is sent back to the development team. When the problem has been fixed, the testing team has to test the software again, so a strategy for retesting has to be in place.

Decide Regression Testing Strategy


It often happens that one defect is fixed and another defect is created. The aim of regression testing is to ensure that no new bugs have been introduced into the software and that it is still working correctly.
Closure of Testing Activities
After the exit criteria have been met, the test closure activities start. In this step, the key outputs are captured: results, logs and documents related to the project are put together. All of these go a long way and often prove to be of help for future projects.
End-to-End vs. System Testing
When end-to-end testing is carried out, the flow of activities through the system, from the very start to the end, is what is tested. In system testing, on the other hand, the system as a whole is tested to find defects,
if any, in the system. In most cases, end-to-end testing is carried out after changes have been made to the system, while system testing is carried out towards the end of software development, when the application is validated against the requirements of the end user. To explain the difference further, consider an example. If an email page is being tested, the starting point in end-to-end testing is logging into the page, while the end point is logging out of the page. System testing, on the other hand, works through the entire system: logging in, sending an email, opening an email, replying to an email, forwarding it, and finally logging out. Finding defects both in moving from one component to another and in the working of each component itself is the aim of system testing. Therefore, end-to-end testing is often considered to be a subset of system testing.

Software Testing Artifacts


Software testing process can produce various artifacts such as:
Test Plan: A test specification is called a test plan. A test plan is documented so that it can be used to verify and ensure that a product or system meets its design specification.
Traceability Matrix: This is a table that correlates requirements or design documents to test documents. It is used to verify that the tests cover the source documents and to change the tests when those documents are changed.
Test Case: Test cases and software testing strategies are used to check the functionality of individual components
that are integrated to give the resultant product. These test cases are developed with the objective of judging the
application for its capability or feature.
Test Data: When multiple sets of values or data are used to test the same functionality of a particular feature in
the test case, the test values and changeable environmental components are collected in separate files and
stored as test data.
Test Script: The test script is the combination of a test case, test procedure and test data.
Test Suite: A test suite is a collection of test cases.
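As a small sketch, Python's unittest module can group test cases into a suite like this; the test classes and their placeholder assertions are illustrative only.

```python
import unittest

# Illustrative only: two placeholder test case classes grouped into one test suite.
class LoginTests(unittest.TestCase):
    def test_valid_login(self):
        self.assertEqual("logged_in", "logged_in")  # placeholder for a real check

class LogoutTests(unittest.TestCase):
    def test_logout(self):
        self.assertTrue(True)  # placeholder for a real check

def build_suite():
    """Collect the individual test cases into a single test suite."""
    suite = unittest.TestSuite()
    loader = unittest.defaultTestLoader
    suite.addTests(loader.loadTestsFromTestCase(LoginTests))
    suite.addTests(loader.loadTestsFromTestCase(LogoutTests))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```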
Measuring Software Testing
There arises a need for measuring the software, both while it is under development and after the system is ready for use. Though it is difficult to measure such an abstract attribute, it is essential to do so: what cannot be measured cannot be controlled. There are some important uses of measuring the software:
Software metrics help in:
1. Avoiding pitfalls such as cost overruns
2. Identifying where a problem has arisen
3. Clarifying goals
It answers questions such as:
1. What is the estimation of each process activity?
2. What is the quality of the code that has been developed?
3. How can the underdeveloped code be improved?
Some of the common software metrics are:

Code Coverage
Cyclomatic Complexity
Cohesion
Coupling
Function Point Analysis
Execution Time
Source Lines of Code
Bugs per Line of Code
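As a rough worked example of two of these metrics, here is a sketch built around a tiny hypothetical function and made-up project figures.

```python
# Cyclomatic complexity (simple form): number of decision points + 1.
def grade(score):
    if score < 0 or score > 100:   # decision point 1
        raise ValueError("score out of range")
    if score >= 50:                # decision point 2
        return "pass"
    return "fail"

decision_points = 2
cyclomatic_complexity = decision_points + 1            # = 3 for grade()

# Bugs per thousand lines of code, using made-up project figures.
bugs_found = 4
source_lines_of_code = 800
bugs_per_kloc = bugs_found / (source_lines_of_code / 1000)   # = 5.0
print(cyclomatic_complexity, bugs_per_kloc)
```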

Manual Testing FAQs


1. Validation vs. Verification
Verification answers the question 'Am I building the product right?', while validation answers the question 'Am I building the right product?'. To explain further, verification is carried out at the end of every phase to ensure that the software has been developed in accordance with the conditions set at the beginning of that phase. Validation, on the other hand, is carried out throughout the software development life cycle, to ensure that all the requirements are satisfied.
2. What is severity and priority?
The impact a defect has on the working of the system is called severity. It is the tester who decides the severity of a defect found in the system. Priority, however, is used to describe the level of importance of the defect from the customer or business standpoint. The developer is the one who decides the priority of a logged defect.
3. Tell me in short about requirement testing?
Requirement testing has an important role to play in the entire software testing process. In this testing method, missing, vague, incomplete and wrong requirements are tracked down, because if the software is made with
missing, incomplete or wrong requirements, then the software may not serve the purpose it was developed for. It ensures that incomplete requirements do not make their way into the software.
4. Explain boundary value analysis in brief.
It is an integral part of black box testing, where tests are carried out using values on the boundary. It can either
be a maximum value or a minimum value permitted for that input or output. It helps in analyzing, if the system
performs in the desired manner for both ends of the spectrum.
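A minimal boundary value analysis sketch, assuming a hypothetical rule that an order quantity must be between 1 and 100 inclusive: the tests exercise values at and just beyond each boundary.

```python
import unittest

def accept_quantity(qty):
    """Hypothetical input rule: an order quantity is valid from 1 to 100 inclusive."""
    return 1 <= qty <= 100

class QuantityBoundaryTests(unittest.TestCase):
    # Boundary value analysis: test at, just below and just above each boundary.
    def test_lower_boundary(self):
        self.assertFalse(accept_quantity(0))    # just below the minimum
        self.assertTrue(accept_quantity(1))     # the minimum itself

    def test_upper_boundary(self):
        self.assertTrue(accept_quantity(100))   # the maximum itself
        self.assertFalse(accept_quantity(101))  # just above the maximum

if __name__ == "__main__":
    unittest.main()
```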

5. Differentiate between Usability and Functional testing.


In usability testing, the application is tested to find errors that affect how easily the user can work with the software; for example, software that gives erroneous or confusing messages is caught by usability testing. Functional testing is carried out to find whether the software performs an action it is not supposed to perform. A classic example is when the software accepts an erroneous username and/or password and still logs into the system.
6. What is a Test Strategy?
This is a document used to define the test approach for a particular piece of software. It is derived from the system requirement specification document. In the initial phase, the test plan is drawn up using this document; however, since the document is not updated at regular intervals, it may not prove useful if the system requirement specification changes.
7. Explain in short a 'test case'
A test case describes the desired behavior of the software. It includes the input and the action, along with the expected output and the actual output. Parameters that are essentially part of a test case include the test case objective, test conditions, input data, test case name, and so on. Test cases are normally prepared at the beginning of the development cycle to ascertain that the requirements specification document does not have a problem.

8. Explain a test suite.


Several test cases put together are known as a test suite. Often a test suite is designed for every component of the software under test. The postcondition of one test case often becomes the precondition of the subsequent test case.
9. Differentiate between Load and Volume Testing.
As the name suggests, load testing checks whether the system works as it is supposed to when the load on the system increases. The increased load can be due to an increase in the number of simultaneous users, or due to numerous transactions taking place on the system at the same time. Volume testing, on the other hand, checks the behavior of the system when a huge amount of data is processed through the application.

10. Distinguish between regression testing and retesting.


Regression testing is testing the software after changes have been made to it, in order to ensure that no new bugs have been introduced. An added advantage of regression testing is that previously undiscovered bugs may also come to the fore. Retesting, on the other hand, is running the same test cases that failed during the last round of testing, after changes have been made to the software; this helps verify the corrective actions taken on the software.

