
Software Testing

1. Introduction
In a software development project, errors can creep in at any stage of development. For
each phase, different techniques exist for detecting and eliminating the errors that
originate in that phase.

During testing, the program under test is executed with a set of test cases, and its
output for each test case is evaluated to determine whether the program performs as
expected. By its nature, dynamic testing can only ascertain the presence of errors in a
program; it does not usually reveal their exact nature. Testing is the first step in
locating the errors in a program, and its success in revealing them depends critically
on the test cases.


2. Need for Testing


Testing is the process of running a system with the intention of finding errors. Testing
enhances the integrity of a system by detecting deviations in design and errors in the
system. It aims at detecting error-prone areas, which helps prevent errors in the
system. Testing also adds value to the product by confirming that it conforms to user
requirements.

2.1 Causes of Errors


The most common causes of errors in an e-commerce system are:

Communication gaps between the developer and the business decision maker.
Time provided to a developer to complete the project.
Over commitment by the developer.

2.1.1 Communication gaps between the developer and the business decision maker

A communication gap between the developer and the business decision maker normally
arises from subtle differences between them. These differences can be classified into
five broad areas:

Thought processes
Background and experience
Interests
Priorities
Language

For example, an entrepreneur with a financial marketing background wants to set up an
online share-selling site. The developer, with an engineering background, might be
unaware of the intricate details of the financial market. The developer may therefore
fail to incorporate features that would add value for site visitors, such as the prices
of mutual funds.


2.1.2 Time provided to a developer to complete the project


A common source of errors in projects comes from time constraints in delivering a
product. At best, a project schedule provides an educated guess that is based on what is
known at the time. At worst, a project schedule is a wish-derived estimate with no basis
in reality.

Assuming the best, previously unknown problems may present themselves during
development and testing. This can lead to problems maintaining the schedule. Failing to
adjust the feature set or schedule when problems are discovered can lead to rushed work
and flawed systems.

2.1.3 Overcommitment by the developer


High enthusiasm can lead to overcommitment by the developer. In these situations,
developers are usually unable to adhere to deadlines or quality standards due to a lack
of resources or required skills on the team.

2.2 Objectives of Testing


Testing is essential because of:
Software reliability
Software quality
System assurance
Optimum performance and capacity utilization
Price of non-conformance

2.2.1 Software reliability


E-commerce requires software that performs critical tasks. Such as creating storefront
and a shopping cart, collecting customer data, and providing the payment gateway. This
software needs to function correctly.

Testing assures the organization of the quality and integrity of the e-commerce solution.

2.2.2 Software quality


Software quality is characterized by the correctness of program logic and
implementation. Quality assurance begins with testing the software during development.
The developer must test each module to make sure that it functions correctly at the
time it is written or modified. Both test values and boundary conditions must be
verified. Next, the module should undergo interface testing to check for functional
errors. Only after the module works correctly can it be released for testing in the
larger system. Early detection of errors saves rework and prevents a problem from
growing more complex; error detection during the operation of a deployed system incurs
far greater direct and indirect costs.
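The boundary-condition checks mentioned above can be sketched as follows. The discount rule and the boundary at 10 items are invented purely for illustration; the point is that tests probe the values just below, exactly on, and just above each boundary:

```python
# Hypothetical business rule, assumed for this sketch: orders of 10 or
# more items earn a 10% discount, so the boundary sits at quantity == 10.
def discount_rate(quantity):
    if quantity < 0:
        raise ValueError("quantity cannot be negative")
    return 0.10 if quantity >= 10 else 0.0

# Boundary-value checks: just below, exactly on, and just above the boundary.
assert discount_rate(9) == 0.0
assert discount_rate(10) == 0.10
assert discount_rate(11) == 0.10
```

Off-by-one defects (writing `> 10` instead of `>= 10`) are caught by exactly these three values, which is why boundary tests are required in addition to normal test values.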

At a higher level, the interaction of individually correct components must be tested.
For example, if a customer enters credit card payment details and is disconnected
before the order confirmation, the software must indicate the status of the transaction
when the customer reconnects to the e-commerce site. If the software behaves otherwise,
it does not meet the organization's requirements. Another aspect of software quality is
accurate tax and shipping calculation. Because every state has a different tax system,
some of them complex, it is difficult for the developer to integrate all the tax
structures with multi-location shipping. This raises the complexity of the software and
increases the chance of errors.

2.2.3 System assurance


The main purpose of system assurance is to deliver a quality product. Conformance to
requirements increases the organization's confidence in the system. Trust in an
e-commerce transaction over the Internet is more critical than in face-to-face
business. If any party's faith in the e-commerce site dwindles, the entrepreneur can
lose a lot of money, as well as their reputation. For example, a faulty e-commerce
system might bill the customer's credit card immediately for the complete order when
only a partial order has been filled. Testing must assure that partial order
fulfillment and billing are done correctly.

2.2.4 Optimum performance and capacity utilization


Another purpose of testing is to ensure optimum performance and capacity utilization of
the e-commerce system's components. The purpose of stress or capacity testing is to
make sure that the web site performs acceptably at peak usage. For example, web site
loads increase significantly during the Christmas shopping season; the e-commerce
solution must be able to handle the anticipated load with minimal degradation.
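As a minimal sketch of this kind of capacity check, the snippet below fires many concurrent "requests" at a stand-in handler and records the elapsed time. The handler, request counts, and sleep time are all invented; a real capacity test would drive the actual web tier with a load-generation tool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(order_id):
    """Stand-in for a request handler; a real test would hit the web site."""
    time.sleep(0.001)  # simulated service time per request
    return f"order-{order_id}: OK"

def run_load(n_requests, n_clients):
    """Fire n_requests from n_clients concurrent workers; return results and elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        results = list(pool.map(handle_request, range(n_requests)))
    return results, time.perf_counter() - start

results, elapsed = run_load(n_requests=200, n_clients=20)
assert all(r.endswith("OK") for r in results)  # no errors under load
```

A capacity test would compare `elapsed` (or per-request latency) at peak load against the acceptable-degradation threshold set in the test plan.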

2.2.5 Price of non-conformance


The main purpose of testing is to detect errors and error-prone areas in a system.
Testing must be thorough and well planned. A partially tested system is as bad as an
untested one, and the price of an untested or under-tested system is high.

The following list suggests some of the potential fallouts of an untested or
under-tested system:

Legal suits against the entrepreneur due to a faulty transaction system, whose
transaction software functionality may never have been tested.
Loss of critical data from the database, resulting in untraceable transactions.
This could again invite legal action, and losses as site visitors defect to
competitors' sites.
Insecure transactions, causing losses to customers and possibly the withdrawal
of certification by the security certification agency.
System breakdown. A breakdown results in lost time while service is restored,
and fixing the error can involve both direct and indirect costs.

2.3 Best Practices


While testing a system, the following steps should be followed:

Prepare comprehensive test plan specifications and test cases for each level of
testing, supplemented with test data and test logs. Test plans for system
testing may involve operators, while test plans for acceptance testing involve
customers.
Design the test cases to test system restrictions, such as file and database size
(stress testing).
Develop the data to test specific cases. Copies of live files must not be used
except for Acceptance testing.

Do not use confidential data for testing without written authorization,
especially in the case of acceptance testing.
Follow relevant standards.
Perform Regression testing on each component of the system. This ensures
that no anomalies have crept into the system because of the changes made to
the system.
Make sure to document and set up the test environment for each level in
advance of testing. Test environments specify the preconditions required to
perform the tests.
Specify the intended test coverage as part of the test plan. Test coverage is
the degree to which specific test cases address all specified requirements for
a specific system or component.

3. Heuristics of Software Testing
Software testability is how easily, completely and conveniently a computer program can
be tested. Software engineers design a computer product, system or program keeping the
product's testability in mind. Good programmers are willing to do things that help the
testing process, and a checklist of possible design points, features and so on can be
useful in negotiating with them.

Here are the two main heuristics of software testing.


1. Visibility
2. Control

3.1 Visibility
Visibility is our ability to observe the states and outputs of the software under test.

Features to improve the visibility are:

3.1.1 Access to Code


Developers must provide testers full access to the source code, infrastructure and
related artifacts. The code, change records and design documents should be provided to
the testing team, which should read and understand the code.

3.1.2 Event logging


Events to log include user events, system milestones, error handling and completed
transactions. Logs may be stored in files, in ring buffers in memory, and/or written to
serial ports. Each entry should record a description of the event, a timestamp, the
subsystem, resource usage and the event's severity. Logging should be adjustable by
subsystem and by type. Log files report internal errors, help in isolating defects, and
give useful information about context, tests, customer usage and test coverage. The
more readable the log reports are, the easier it becomes to identify the cause of a
defect and work toward corrective measures.

3.1.3 Error detection mechanisms


Data integrity checking and system-level error detection (e.g. Microsoft Appviewer) are
useful here. In addition, assertions and probes with the following features are
helpful:
Code is added to detect internal errors.
Assertions abort on error.
Probes log errors.
Design by Contract theory requires that assertions be defined for functions.
Preconditions apply to inputs, and violations implicate calling functions;
post-conditions apply to outputs, and violations implicate called functions. This
effectively solves the oracle problem for testing.
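A contract-style function can be sketched as below. The Newton's-method routine is an invented example; the point is the placement of the two assertions, which act as a built-in oracle:

```python
def sqrt_newton(x):
    """Square root via Newton's method, guarded by contract assertions."""
    # Precondition: applies to the input; a violation implicates the caller.
    assert x >= 0, "precondition violated: x must be non-negative"
    guess = x / 2.0 or 1.0
    for _ in range(60):
        guess = (guess + x / guess) / 2.0
    # Postcondition: applies to the output; a violation implicates this function.
    assert abs(guess * guess - x) < 1e-6, "postcondition violated: bad result"
    return guess
```

Any test input that drives the function now checks itself: if the postcondition ever fires, the called function is at fault; if the precondition fires, the calling test (or caller) supplied illegal input. Note that Python strips `assert` statements when run with `-O`, so production builds lose these checks.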

3.1.4 Resource Monitoring


Memory usage should be monitored to find memory leaks. The states of running methods,
threads or processes should be watched (profiling interfaces may be used for this). In
addition, configuration values should be dumped. Resource monitoring is of particular
concern in applications whose real-time load is expected to be considerable.

3.2 Control
Control refers to our ability to provide inputs and reach states in the software under
test. The features to improve controllability are:

3.2.1 Test Points


Test points allow data to be inspected, inserted or modified at particular points in
the software. They are especially useful for dataflow applications. A pipe-and-filters
architecture provides many natural opportunities for test points.
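A test point in a pipe-and-filters design can be sketched as a "tap": an identity filter inserted between stages that records the data flowing past it. The filters and data here are invented:

```python
def make_tap(captured, label):
    """Identity filter that records the data flowing past a test point."""
    def tap(items):
        captured[label] = list(items)
        return captured[label]
    return tap

def strip_blanks(lines):
    return [ln.strip() for ln in lines if ln.strip()]

def to_upper(lines):
    return [ln.upper() for ln in lines]

captured = {}
# The tap slots between the two real filters without changing the data.
pipeline = [strip_blanks, make_tap(captured, "after_strip"), to_upper]

data = ["  hello ", "", "world  "]
for stage in pipeline:
    data = stage(data)

assert data == ["HELLO", "WORLD"]
assert captured["after_strip"] == ["hello", "world"]  # inspected mid-pipeline
```

Because the tap is just another filter, it can be added or removed per test run, giving visibility into intermediate states without modifying the real stages.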

Custom user interface controls: Custom UI controls often raise serious
testability problems with GUI test drivers.

Ensuring testability usually requires:


Adding methods to report necessary information
Customizing test tools to make use of these methods

Test interfaces: Interfaces may be provided specifically for testing (e.g. Excel
and Xconq). Existing interfaces may also be able to support significant testing
(e.g. InstallShield, AutoCAD, Tivoli).

Fault injection: Error seeding, instrumenting low-level I/O code to simulate
errors, makes it much easier to test error handling. It can be applied at both
the system and the application level.
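Fault injection at the application level can be sketched with a wrapper that fails a chosen I/O call on demand. The reader class and the `load_config` routine under test are invented for illustration:

```python
import io

class FaultyReader:
    """Wraps a file-like object and injects an I/O error on the nth read."""
    def __init__(self, wrapped, fail_on_call):
        self.wrapped = wrapped
        self.calls = 0
        self.fail_on_call = fail_on_call

    def read(self, size=-1):
        self.calls += 1
        if self.calls == self.fail_on_call:
            raise IOError("injected fault: simulated disk error")
        return self.wrapped.read(size)

def load_config(reader):
    """Code under test: must degrade gracefully when reads fail."""
    try:
        return reader.read()
    except IOError:
        return ""  # error-handling path exercised by the injected fault

ok = load_config(FaultyReader(io.StringIO("a=1"), fail_on_call=2))
bad = load_config(FaultyReader(io.StringIO("a=1"), fail_on_call=1))
assert ok == "a=1" and bad == ""
```

Without the injected fault, the `except` branch is almost impossible to reach deterministically; the seeded error makes the error-handling path an ordinary, repeatable test case.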


Installation and setup: Testers should be notified when installation has
completed successfully. They should be able to verify installation,
programmatically create sample records, and run multiple clients, daemons or
servers on a single machine.

3.3 Categories of Heuristics of software testing

3.3.1 Observability
What we see is what we test.
Distinct output should be generated for each input.
Current and past system states and variables should be visible during testing.
All factors affecting the output should be visible.
Incorrect output should be easily identified.
Source code should be easily accessible.
Internal errors should be automatically detected (through self-testing mechanisms)
and reported.

3.3.2 Controllability
The better we control the software, the more the testing process can be automated and
optimized.
Check that:
All outputs can be generated, and all code can be executed, through some combination
of inputs.
Software and hardware states can be controlled directly by the test engineer.
Input and output formats are consistent and structured.
Tests can be conveniently specified, automated and reproduced.

3.3.3 Decomposability
By controlling the scope of testing, we can quickly isolate problems and perform
effective, efficient testing. The software system should be built from independent
modules that can be tested independently.

3.3.4 Simplicity

The less there is to test, the more quickly we can test it. The points to consider here
are functional simplicity (e.g. a minimum set of features), structural simplicity (e.g.
a modularized architecture) and code simplicity (e.g. an adopted coding standard).

3.3.5 Stability
The fewer the changes, the fewer the disruptions to testing. Changes to the software
should be infrequent, controlled, and should not invalidate existing tests. The
software should recover well from failures.

3.3.6 Understandability
The more information we have, the smarter we test. Testers should be able to understand
the design, changes to the design, and the dependencies between internal, external and
shared components. Technical documentation should be instantly accessible, accurate,
well organized, specific and detailed.

3.3.7 Suitability
The more we know about the intended use of the software, the better we can organize
our testing to find important bugs. The above heuristics can be used by a software
engineer to develop a software configuration (i.e. program, data and documentation)
that is convenient to test and verify.

4. Types of Testing
Testing is usually applied to different targets at different stages of the software's
delivery cycle. The stages progress from testing small components (unit testing) to
testing the completed system (system testing).

4.1 Unit Test


Unit test, implemented early in the iteration, focuses on verifying the smallest testable
elements of the software. Unit testing is typically applied to components in the
implementation model to verify that control flow and data flow are covered and
function as expected. These expectations are based on how the components participate
in executing a use case. The implementers perform unit tests as each unit is developed.
The details of unit testing are described in the implementation workflow.

4.2 Integration Test

Integration testing is performed to ensure that the components in the implementation
model operate properly when combined to execute a use case. The target-of-test is a
package or a set of packages in the implementation model. Often the packages being
combined come from different development organizations. Integration testing exposes
incompleteness or mistakes in the packages' interface specifications.

4.3 System Test

System testing is done when the software is functioning as a whole, or when defined
subsets of its behavior are implemented. The target in this case is the whole
implementation model for the system.

4.4 Acceptance Test

Acceptance testing is the final test action prior to deploying the software. Its goal
is to verify that the software is ready and can be used by end-users to perform the
functions and tasks the software was built for.

In the introduction to testing, it was stated that there is much more to testing
software than testing only the functions, interface and response-time characteristics
of a target-of-test. Additional tests must focus on characteristics and attributes of
the target-of-test such as:
Integrity (resistance to failure)
Ability to be installed and executed on different platforms
Ability to handle many requests simultaneously
To achieve this, many different tests are implemented and executed, each with a
specific test objective and each focused on only one characteristic or attribute of
the target-of-test.

Often individual tests are categorized, implemented and executed in groups, most
commonly arranged by similarities in their test objectives or the quality dimension they
address, such as:

Quality dimension: Functionality
Function test: tests focused on verifying that the target-of-test functions as
intended, providing the required service(s), method(s), or use case(s). This test is
implemented and executed against different targets-of-test, including units,
integrated units, application(s), and systems.
Security test: tests focused on ensuring that the target-of-test data (or systems) are
accessible only to the intended actors. This test is implemented and executed against
various targets-of-test.
Volume test: testing focused on verifying the target-of-test's ability to handle large
amounts of data, either as input and output or resident within the database. Volume
testing includes test strategies such as creating queries that would return the entire
contents of the database, queries with so many restrictions that no data is returned,
and data entry of the maximum amount of data in each field.

Quality dimension: Usability
Usability test: tests which focus on human factors, aesthetics, consistency in the
user interface, online and context-sensitive help, wizards and agents, user
documentation, and training materials.

Quality dimension: Reliability
Integrity test: tests which focus on assessing the target-of-test's robustness
(resistance to failure) and technical compliance to language syntax and resource
usage. This test is implemented and executed against different targets-of-test,
including units and integrated units.
Structure test: tests that focus on assessing the target-of-test's adherence to its
design and formation. Typically this test is done for web-enabled applications,
ensuring that all links are connected, appropriate content is displayed, and there is
no orphaned content. See Concepts: Structure Testing for additional information.
Stress test: a type of reliability test that focuses on ensuring the system functions
as intended when abnormal conditions are encountered. Stresses on the system may
include extreme workloads, insufficient memory, unavailable services or hardware, and
diminished shared resources. Typically, these tests are performed to determine when
the system breaks, and how it breaks.

Quality dimension: Performance
Benchmark test: a type of performance test that compares the performance of a new or
unknown target-of-test to a known reference workload and system.
Contention test: tests focused on verifying that the target-of-test can acceptably
handle multiple actor demands on the same resource (data records, memory, etc.).
Load test: a type of performance test to verify and assess the acceptability of the
operational limits of a system under varying workloads while the system-under-test
remains constant. Measurements include the characteristics of the workload and the
response time. When systems incorporate distributed architectures or load balancing,
special tests are performed to ensure that the distribution and load-balancing methods
function appropriately.
Performance profile: a test in which the target-of-test's timing profile is monitored,
including execution flow, data access, function and system calls, to identify and
address performance bottlenecks and inefficient processes.

Quality dimension: Supportability
Configuration test: tests focused on ensuring that the target-of-test functions as
intended on different hardware and/or software configurations. This test may also be
implemented as a system performance test.
Installation test: tests focused on ensuring that the target-of-test installs as
intended on different hardware and/or software configurations and under different
conditions (such as insufficient disk space or power interruption). This test is
implemented and executed against application(s) and systems.

5. Test cases
A test case is a set of test inputs, execution conditions, and expected results
developed for a particular objective, such as to exercise a particular program path or
to verify compliance with a specific requirement.

Unit testing is applied to the smallest testable elements (units) of the software and
involves testing both the internal structure, such as logic and data flow, and the
unit's functions and observable behaviors. Designing and implementing tests focused on
a unit's internal structure relies on knowledge of the unit's implementation (the
white-box approach).

Designing and implementing tests that verify the unit's observable behaviors and
functions does not rely on knowledge of the implementation, and is therefore known as
the black-box approach.

5.1 White-Box Test Approach

A white-box test approach should be taken to verify a unit's internal structure.
Theoretically, testers should exercise every possible path through the code, but that
is possible only in very simple units. At the very least, testers should exercise
every decision-to-decision path (dd-path) at least once, because doing so executes
every statement at least once. A decision is typically an if-statement, and a dd-path
is a path between two decisions.
To reach this level of test coverage, testers should choose test data so that every
decision is evaluated in every possible way. Reliability testing should be done
simultaneously with white-box testing.
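dd-path coverage can be sketched on a small invented function with two decisions. Each decision must be evaluated both ways, which here yields four test cases (the pricing rules are assumptions made up for the example):

```python
def order_total(total, is_member):
    """Two decisions -> four decision-to-decision paths to exercise."""
    if total >= 100:          # decision 1: free shipping over 100
        shipping = 0.0
    else:
        shipping = 7.5
    if is_member:             # decision 2: members get 5% off
        total *= 0.95
    return total + shipping

# One test per way each decision can evaluate, covering every dd-path
# and therefore every statement at least once.
assert order_total(100, False) == 100.0   # d1 true,  d2 false
assert order_total(100, True)  == 95.0    # d1 true,  d2 true
assert order_total(40,  False) == 47.5    # d1 false, d2 false
assert order_total(40,  True)  == 45.5    # d1 false, d2 true
```

Note that four tests cover all dd-paths here, whereas covering every *combination* of paths grows exponentially with the number of decisions, which is why full path testing is feasible only in very simple units.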

5.2 Black-Box Test Approach

The purpose of a black-box test is to verify the unit's specified functions and
observable behavior without knowledge of how the unit implements them. Black-box tests
focus on, and rely upon, the unit's inputs and outputs. Deriving unit tests with the
black-box approach uses the input and output arguments of the unit's operations,
and/or its output state, for evaluation. For example, an operation may implement an
algorithm (taking two values as input and returning a third as output) or initiate a
change in an object's or component's state, such as adding or deleting a database
record. Both must be tested completely. To test an operation, derive sufficient test
cases to verify the following:
For each valid input, an appropriate value is returned by the operation.
For each valid input state, an appropriate output state occurs.
For each invalid input state, an appropriate output state occurs.
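The three checks above can be sketched against an invented record-store operation. The tests use only the operation's documented inputs, return values and observable state, never its internals:

```python
class RecordStore:
    """Minimal stand-in for a database table (invented for illustration)."""
    def __init__(self):
        self.records = {}

    def add(self, key, value):
        """Operation under test: True on success, False on invalid input."""
        if not key or key in self.records:
            return False          # invalid input state -> defined failure output
        self.records[key] = value
        return True

store = RecordStore()
# Valid input -> appropriate return value and output state.
assert store.add("c42", "widget") is True
assert store.records == {"c42": "widget"}
# Invalid input states (duplicate key, empty key) -> appropriate output state.
assert store.add("c42", "gadget") is False
assert store.add("", "thing") is False
assert store.records == {"c42": "widget"}  # state unchanged by invalid calls
```

The final assertion is the black-box check on output state: invalid operations must leave the observable state exactly as it was.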

5.3 Test Cases based upon Input Arguments


An input argument is an argument used by an operation. Using the input arguments, test
cases should be created for each operation, for each of the following input conditions:

Normal values from each equivalence class.
Values on the boundary of each equivalence class.
Values outside the equivalence classes.
Illegal values.

Remember to treat the object's state as an input argument. If, for example, you test
an operation add on an object Set, test add with values from all of Set's equivalence
classes: with a full Set, with some elements in Set, and with an empty Set.
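The Set example can be sketched as follows. Since a plain set has no notion of "full", the sketch assumes a hypothetical capacity limit so that the full-set equivalence class is observable:

```python
FULL_CAPACITY = 3  # assumed capacity, so that a "full set" class exists

class BoundedSet:
    """Hypothetical Set with a capacity (invented for illustration)."""
    def __init__(self, items=()):
        self.items = set(items)

    def add(self, value):
        if len(self.items) >= FULL_CAPACITY:
            return False          # full: the add is rejected
        self.items.add(value)
        return True

# One test per equivalence class of the object's state, as the text suggests:
empty = BoundedSet()
assert empty.add(1) is True                 # empty Set

partial = BoundedSet({1, 2})
assert partial.add(3) is True               # some elements in Set

full = BoundedSet({1, 2, 3})
assert full.add(4) is False                 # full Set rejects the add
```

The three instances are the equivalence classes of the state; exercising `add` once per class is usually enough, since any other state in the same class is expected to behave identically.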

5.4 Test Cases based upon Output Arguments


An output argument is an argument that an operation changes. An argument can be both
an input and an output argument. Select inputs so that the outputs cover each of the
following:

Normal values from each equivalence class.
Values on the boundary of each equivalence class.
Values outside the equivalence classes.
Illegal values.

5.5 Test Cases for Regression Test


Regression testing compares two builds or versions of the same target-of-test and
identifies differences as potential defects. It thus assumes that a new version should
behave like an earlier one and ensures that defects have not been introduced as a result
of the changes.

Ideally, all the test cases from one iteration would be reused as test cases in later
iterations. The following guidelines help identify, design, and implement test cases
that maximize the value of regression testing and re-use while minimizing maintenance:

Ensure each test case identifies only the critical data elements (those needed to
create or support the condition being tested).
Ensure each test case describes or represents a unique set of inputs or sequence of
events that results in a unique behavior by the target-of-test.

Eliminate redundant or equivalent test cases.


Group together test cases which have the same target-of-test initial state and state
of the test data.
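Regression testing as described above, re-running one suite against two builds and flagging differences, can be sketched as follows. Both builds of the operation and the tax rule are invented for the example:

```python
def total_v1(prices):
    """Baseline build of the operation under test (8% tax, assumed rule)."""
    return round(sum(prices) * 1.08, 2)

def total_v2(prices):
    """New build; refactored, but intended to behave identically."""
    total = 0.0
    for p in prices:
        total += p
    return round(total * 1.08, 2)

# Regression suite: unique, non-redundant inputs reused across builds.
REGRESSION_CASES = [[], [10.0], [19.99, 5.0], [0.01, 0.01, 0.01]]

diffs = [c for c in REGRESSION_CASES if total_v1(c) != total_v2(c)]
assert diffs == []   # any difference is a potential defect in the new build
```

The baseline build's outputs serve as the expected results, so the suite needs no hand-written oracle; this is what makes a well-maintained regression suite cheap to re-run on every change.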


6. Finding Faults
It is commonly believed that the earlier a defect is found, the cheaper it is to fix.
For example, a problem in the requirements that is found only post-release can cost 10
to 100 times more to fix than if it had been caught in the requirements review.

6.1 Testing Tools

Program testing and fault detection can be aided significantly by testing tools and
debuggers. Testing/debug tools include features such as:

Program monitors, permitting full or partial monitoring of program code,
including:

o Instruction set simulators, permitting complete instruction-level monitoring
and trace facilities

o Program animation, permitting step-by-step execution and conditional
breakpoints at source level or in machine code

o Code coverage reports

Formatted dumps or symbolic debugging, allowing inspection of program
variables on error or at chosen points.

Automated functional GUI testing tools are used to repeat system-level tests
through the GUI.

Benchmarks, allowing run-time performance comparisons to be made.

Performance analysis (or profiling tools) that can help to highlight hot spots and
resource usage.

6.2 Measuring Software Testing


Usually, quality is confined to topics such as correctness, completeness and security,
but it can also include more technical requirements such as capability, reliability,
efficiency, portability, maintainability, compatibility, and usability.

6.3 Testing Artifacts

The software testing process can produce several artifacts.

6.3.1 Test Plan

A test specification is called a test plan. Developers are made aware of which test
plans will be executed, and this information is available to management as well. The
idea is to make developers more cautious when writing their code or making additional
changes. Some companies use a higher-level document called a test strategy.

6.3.2 Traceability matrix

A traceability matrix is a table that correlates requirements or design documents to test documents. It is
used to change tests when the source documents are changed, or to verify that the test results are
correct.
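A traceability matrix can be sketched as a simple mapping from requirements to the test cases that cover them; the requirement and test-case identifiers below are invented. Even this minimal form supports the two uses the text names: finding the tests affected by a changed requirement, and spotting coverage gaps:

```python
# Each requirement maps to the test cases that cover it (IDs are invented).
TRACEABILITY = {
    "REQ-01 checkout computes tax":      ["TC-101", "TC-102"],
    "REQ-02 partial orders bill partly": ["TC-201"],
    "REQ-03 card data stored encrypted": [],   # gap: no covering test yet
}

# A requirement with no covering test is a hole in the test plan.
uncovered = [req for req, tests in TRACEABILITY.items() if not tests]
assert uncovered == ["REQ-03 card data stored encrypted"]
```

When a source document changes, looking up the changed requirement's row immediately yields the set of test cases that must be revised or re-run.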

6.3.3 Test script

The test script is the combination of a test case, test procedure, and test data. Initially
the term was derived from the product of work created by automated regression test
tools. Today, test scripts can be manual, automated, or a combination of both.

6.3.4 Test suite

The most common term for a collection of test cases is a test suite. The test suite
often also contains more detailed instructions or goals for each collection of test
cases, and it always contains a section where the tester identifies the system
configuration used during testing. A group of test cases may also include prerequisite
states or steps, and descriptions of the tests that follow.

6.3.5 Test data

In most cases, multiple sets of values or data are used to test the same functionality
of a particular feature. All the test values and changeable environmental components
are collected in separate files and stored as test data. It is also useful to deliver
this data to the client along with the product or project.
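Keeping the values apart from the test logic can be sketched as below. The JSON block stands in for the separate test-data file the text describes, and the email validator is an invented feature under test:

```python
import json

# Test data kept apart from the test logic; in practice this JSON would
# live in its own file, versioned and shipped alongside the project.
TEST_DATA = json.loads("""
[
  {"input": "user@example.com", "valid": true},
  {"input": "not-an-email",     "valid": false},
  {"input": "",                 "valid": false}
]
""")

def is_valid_email(s):
    """Toy validator standing in for the feature under test."""
    return "@" in s and "." in s.split("@")[-1]

# One generic test loop; new cases are added by editing the data, not the code.
for case in TEST_DATA:
    assert is_valid_email(case["input"]) == case["valid"]
```

Because the loop never changes, extending coverage is a pure data edit, which is what makes separately stored test data cheap to maintain and easy to hand over with the product.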

6.3.6 Test harness

The software, tools, samples of data input and output, and configurations are all referred
to collectively as a test harness.


7. Conclusion
Software testing is an art. Most of the testing methods and practices are not very
different from 20 years ago. Good testing also requires a tester's creativity, experience
and intuition, together with proper techniques.

Testing is more than just debugging. Testing is not only used to locate defects and
correct them. It is also used in validation, verification process, and reliability
measurement.

Testing is expensive. Automation is a good way to cut down cost and time. Efficiency
and effectiveness are the criteria by which coverage-based testing techniques are
judged.

Complete testing is infeasible; complexity is the root of the problem. At some point,
software testing has to stop and the product has to ship. The stopping point can be
decided by the trade-off between time and budget, or by whether the reliability
estimate of the software product meets the requirement.


