
Testing Glossary

Acceptance testing - Involves the end users performing final testing of an application prior to its release to
production. The focus is on whether or not the application meets user requirements and produces correct
results.
Application - The computerized component of a business system. An application may have both online and
batch processing modes and consist of programs, datastores and other controls.
Application component - Either a program, screen, datastore, JCL member or other control member that is
within the scope of an application. A component may be shared by more than one application.
Application package - An application that is usually licensed from a commercial vendor, sometimes referred
to as commercial off-the-shelf software (COTS). An application package is often customized to fit within a
specific customer business environment.
Application under test (AUT) - The software application being tested, sometimes referred to as the system
under test (SUT).
black-box testing - Testing techniques that measure the behavior of an application in terms of the results
produced based on input values provided. The internal design of the system is not known by the tester (i.e.,
the system is viewed as a "black box").
bottleneck - The load point at which an application shows degradation in performance attributable to a
specific cause (database, network, router, etc.), whether known or unknown.
boundary testing - Involves test cases designed to verify that an application is able to properly handle
specific data conditions such as leap years, change in century indicators, transactions that may span more than
a single date, specific valid data ranges, etc. For example, if the valid data for a cost field is $100 to $200,
then the four boundary conditions are $99.99 (fail), $100 (pass), $200 (pass), $200.01 (fail).
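For illustration, a minimal sketch in Python of the four boundary conditions above, assuming the pytest
framework and a hypothetical validate_cost rule for the $100-$200 cost field:

    import pytest

    def validate_cost(cost):
        # Hypothetical rule: the cost field accepts values from $100.00 to $200.00 inclusive.
        return 100.00 <= cost <= 200.00

    # The four boundary conditions for the $100-$200 range.
    @pytest.mark.parametrize("cost, expected", [
        (99.99, False),    # just below the lower bound -> fail
        (100.00, True),    # lower bound -> pass
        (200.00, True),    # upper bound -> pass
        (200.01, False),   # just above the upper bound -> fail
    ])
    def test_cost_boundaries(cost, expected):
        assert validate_cost(cost) == expected
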
breakpoint - The load point at which the application suffers a fatal degradation or malfunction in
performance.
capture/playback - Technology that is able to record a user's keystrokes (inputs) and system responses
(outputs) as the user interacts with an application through the user interface (UI). The recorded script can then
be played back to reproduce the interaction and results. Capture/playback techniques are typically used in
support of regression testing.
certification test - See validation test.

code coverage analysis - The process of finding areas of a program not exercised by a set of test cases,
creating additional test cases to increase coverage and determining a quantitative measure of code coverage.
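As a minimal sketch (the module and tests are hypothetical), a test suite that exercises only one branch leaves
the other branch uncovered; coverage analysis identifies the gap, and a further test case is added to raise the
quantitative measure. With the coverage.py tool, for example, the measure can be produced by running
"coverage run -m pytest" followed by "coverage report".

    # discount.py -- hypothetical module under test
    def discount(total):
        if total >= 1000:
            return total * 0.9   # branch A: large orders get 10% off
        return total             # branch B: small orders unchanged

    # test_discount.py -- this test alone exercises only branch B, so coverage
    # analysis reports branch A as not executed.
    def test_small_order():
        assert discount(500) == 500

    # Adding this case exercises branch A and raises the coverage measure.
    def test_large_order():
        assert discount(1000) == 900.0
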
compliance testing - Involves test cases designed to verify that an application meets specific criteria, such as
processing four-digit year dates, properly handling special data boundaries and other business requirements.
CPU clock test - Involves physically setting the internal CPU clock to another date value to determine if
application processing is adversely affected (i.e., the application does not properly process dates in the next
century). This testing method was used extensively during Year 2000 testing.
data comparison - A testing technique that involves the comparison of two datastores to identify any
variations that may exist. The differences in data values between the two datastores may be expected or
unexpected, thus identifying potential defects within an application. Data comparison using validated baseline
datastores is often used as part of regression testing.
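A minimal sketch of the technique, assuming both datastores are CSV files keyed by a record identifier (file
names and layout are hypothetical):

    import csv

    def load_datastore(path, key_field="id"):
        # Read a CSV datastore into a dictionary keyed by record identifier.
        with open(path, newline="") as f:
            return {row[key_field]: row for row in csv.DictReader(f)}

    def compare_datastores(baseline_path, current_path):
        # Report any variations between a validated baseline and a current datastore.
        baseline = load_datastore(baseline_path)
        current = load_datastore(current_path)
        for key in sorted(set(baseline) | set(current)):
            if baseline.get(key) != current.get(key):
                print(f"variation at record {key}: {baseline.get(key)} -> {current.get(key)}")

    # compare_datastores("orders_baseline.csv", "orders_current.csv")
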
date range testing - Involves test cases designed to verify that an application is able to process date values
that fall within a specific series (e.g., dates between January 1, 1906 and December 31, 2005). Date range
testing is a specialized type of boundary testing and can be used to validate that a specific century window or
bridging logic is working properly. This testing method was used extensively during Year 2000 testing.
date simulation - Technology or coding techniques that generate a specific date value in response to a
program's request for the current system date from the CPU. Date simulation utilities in effect intercept the
program system date request and return a user-specified value to determine if an application is able to
properly process the date. Date simulation is used in support of regression and time warp testing techniques.
This testing method was used extensively during Year 2000 testing.
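A minimal sketch of the idea using Python's unittest.mock to intercept a program's request for the current
system date and return a user-specified value (the renewal rule is hypothetical):

    from datetime import date
    from unittest import mock

    def current_date():
        # Thin wrapper around the system date request so a test can intercept it.
        return date.today()

    def is_renewal_due(expiry):
        return current_date() >= expiry

    def test_renewal_on_leap_day():
        # Simulate running on 29 February 2024 regardless of the real CPU clock.
        with mock.patch(f"{__name__}.current_date", return_value=date(2024, 2, 29)):
            assert is_renewal_due(date(2024, 2, 29))
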
debugging - Involves the analysis of the internal flow of a program's logic. Techniques such as statement
execution counting, data value tracking, pseudo code and breakpoints are used to monitor and modify the
program execution to determine if any defects exist within the logic or in the input data being processed.
Debugging technology and techniques are used primarily in unit testing.
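As a small illustration of breakpoint-based debugging in Python, the built-in breakpoint() call drops execution
into the interactive debugger (pdb), where data values can be inspected and statement execution followed step
by step (the order data is hypothetical):

    def order_total(orders):
        total = 0
        for order in orders:
            # Pause here to inspect 'order' and 'total' on each iteration.
            breakpoint()
            total += order["amount"] * (1 - order.get("discount", 0))
        return total

    if __name__ == "__main__":
        print(order_total([{"amount": 100, "discount": 0.1}, {"amount": 50}]))
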
defect - Program logic or data value that is not correct (i.e., does not meet specifications). Defects within an
application are most often detected during the
testing cycle.
disaster recovery - The ability of an application or application environment to be restored to a point of
integrity in the event of an unforeseen system hardware or software failure. The system failure may be caused
by an external event such as a storm, earthquake, power failure, etc., or an internal mechanical breakdown.
Disaster recovery techniques require that periodic checkpoints (backups) be established from which the
application or environment could be restored with full system and data integrity.
Efficiency - The ability of software to perform with minimum use of computer resources (e.g., internal
memory, external memory and machine cycles).
Error - A condition that may occur within an application that is not correct, such as an invalid data value. If
detected by the application logic, the error should be properly handled within error-handling specifications. If
undetected, the error usually identifies a program defect and may result in an abnormal termination of the
application processing and/or loss of data integrity.
Error handling testing - Involves test cases designed to verify that the application is able to properly process
incorrect transactions and error conditions (such as invalid data values). Errors of this type should not crash
the application, but they will require user intervention (e.g., click a response via a dialog box in a client/server
application) to handle the error condition.
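A minimal sketch, assuming the pytest framework and a hypothetical input-handling routine: the test verifies
that an invalid value is reported as a handled error rather than crashing the application.

    import pytest

    def parse_quantity(raw):
        # Hypothetical handler: quantities must be positive whole numbers.
        if not raw.isdigit() or int(raw) == 0:
            raise ValueError(f"invalid quantity: {raw!r}")
        return int(raw)

    def test_invalid_quantity_is_handled():
        # The error condition should be surfaced per the error-handling
        # specification, not cause an abnormal termination.
        with pytest.raises(ValueError, match="invalid quantity"):
            parse_quantity("abc")
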
External interface testing - Involves test cases designed to verify that an application is properly processing
data received from or sent to another application or external entity. This most often involves the testing of
electronic data interchange (EDI) interfaces between external corporate or government agencies.
Factory ready unit - A grouping of application components deemed to be interrelated because of functional
or technical dependencies or boundaries, and analyzed as a unit for remediation by a "conversion factory." A
factory ready unit also may be referred to as a remediation unit, change unit, conversion unit, upgrade unit or
work unit.
Fault tolerance testing - Involves test cases designed to determine if an application is able to detect and
respond to system faults, such as loss of communication links (e.g., terminal hardware failure) or loss of
datastore (I/O) access.
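A minimal sketch, simulating a loss of datastore (I/O) access with unittest.mock so the test can confirm the
application detects and responds to the fault (the configuration routine is hypothetical):

    from unittest import mock

    def read_config(path, default=""):
        # Hypothetical routine that falls back to a default when datastore access is lost.
        try:
            with open(path) as f:
                return f.read()
        except OSError:
            return default

    def test_survives_lost_datastore_access():
        # Force every open() call to fail as if the I/O subsystem were unavailable.
        with mock.patch("builtins.open", side_effect=OSError("I/O failure")):
            assert read_config("settings.cfg") == ""
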
Field validation - Test cases designed to verify the values of specific items either within a program's internal
storage or within an external datastore's stored values.
Forward regression testing - Regression tests that involve the advancement of date values within the test
case that in effect cause a transaction to be performed at a future date. This type of testing is sometimes
referred to as "time warping" or "future date testing." This testing method was used extensively during Year
2000 testing.
Functional testing - Involves test cases designed to validate that the application processing results satisfy
end-user specifications and business requirements, as well as ensure that the application works as
designed/expected. Functional testing solutions verify content and business processes in order to exercise the
true end-user experience of the application.
future date testing - See forward regression testing.

Glass-box testing - Involves test cases designed to test specific logic paths within an application. Glass-box
testing techniques require that the tester have some knowledge of the application design and logic coding (i.e.,
one can see into the application "box" through the transparent glass walls). Also known as "white-box
testing."
Gray-box testing - Combines white-box and black-box testing: test cases are designed from the specification
(external) point of view while also drawing on knowledge of the internal code structure.
Human engineering - The ability of the software to be easily understood and used by human users.
Integration testing - Involves test cases designed to validate that two or more application components are
properly working together. Integration testing usually follows or is conducted in parallel with unit testing.
Intersystem testing - Involves test cases designed to validate that two applications are properly
communicating/functioning via electronic data exchange, transaction protocols, etc.
Load testing - Involves exercising applications under real-world conditions to predict system behavior and
performance and to identify and isolate problems. Load testing tools can emulate the workload of hundreds or
even thousands of users, making it possible to predict how an application will behave under different user
loads and to determine the maximum number of concurrent users the application can support.
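A minimal sketch of the idea using standard-library threads, where each worker acts as a virtual user issuing
requests against a hypothetical endpoint and the response times are summarized (a dedicated load-testing tool
would normally be used at real scale):

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/checkout"   # hypothetical application under test
    VIRTUAL_USERS = 100                      # emulated concurrent users

    def one_request(_):
        start = time.perf_counter()
        with urlopen(URL) as resp:
            resp.read()
        return time.perf_counter() - start

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
            timings = list(pool.map(one_request, range(VIRTUAL_USERS)))
        print(f"average response: {sum(timings) / len(timings):.3f}s, worst: {max(timings):.3f}s")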

Manual support testing - Involves test cases designed to validate the processes and documentation used by
end users in interacting with the application, such as data entry.
Modifiability - The ability of the software to be revised by a software maintainer.
Negative testing - Involves the intentional entry of incorrect data to determine if an application will recognize
that the data is invalid and take appropriate action.
Portability - The ability of the software to be transferred easily from one computer to another for the
purposes of execution.
Performance testing - Measures system response as the volume of users or transactions increases and
compares the measured response times against the design performance specifications.

Positive testing - Involves the entry of valid data into an application to determine if its processing results are
correctly produced.
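A minimal sketch that exercises both the negative and positive testing entries above against a hypothetical
validation routine, using pytest:

    import pytest

    def register_age(age):
        # Hypothetical rule: ages outside 0-130 are invalid.
        if not 0 <= age <= 130:
            raise ValueError("age out of range")
        return age

    def test_negative_invalid_age_rejected():
        # Negative test: intentionally incorrect data must be recognized and rejected.
        with pytest.raises(ValueError):
            register_age(-5)

    def test_positive_valid_age_accepted():
        # Positive test: valid data must be processed and the correct result produced.
        assert register_age(42) == 42
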
Problem tracking - Involves registering and monitoring the resolution of errors or defects discovered during
a test.
Production data - Datastores in which current business data is stored. Extraction from production datastores
based on selection criteria is often a source for test data generation.
Regression testing - Verifies that the processing results (outputs) of an application are functionally equivalent
after one or more changes have been made within the application, and that none of the modifications have
adversely impacted the unchanged functionality of the application.
Reliability - The probability that software will not cause the failure of the system for a specified time under
specified conditions. Also defined as the ability of the software to satisfy its requirements without error.
Requirements testing - Verifies that the application processing results are functionally correct measured
against user requirements, design specifications and compliance to policies and standards. If technical changes
are made within an application program that do not alter the processing results, then regression testing should
be performed.
Response time testing - Used to determine the time taken by the application to respond to user actions (such
as keystrokes and mouse movements). It can also include measuring the time taken to download different pages
of a site to determine whether certain pages take much longer than the accepted standard.
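A minimal sketch measuring page response times against an accepted standard (the pages and the threshold
value are hypothetical):

    import time
    from urllib.request import urlopen

    ACCEPTED_STANDARD = 2.0   # seconds; hypothetical response-time standard
    PAGES = ["http://localhost:8080/", "http://localhost:8080/catalog"]

    for page in PAGES:
        start = time.perf_counter()
        with urlopen(page) as resp:
            resp.read()
        elapsed = time.perf_counter() - start
        status = "exceeds standard" if elapsed > ACCEPTED_STANDARD else "ok"
        print(f"{page}: {elapsed:.2f}s ({status})")
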
Scalability testing - Used to determine how well the application continues to function as it is scaled up to a
larger size or volume configuration.
Source change management - Technology that tracks modifications made to program source code (i.e.,
versions) and ensures that the source code and compiled object code running in the test and/or production
environments are at the proper version levels.
special date value testing - Test cases designed to validate that an application is able to process specific date
values, such as leap years, end-of-month dates, etc. This is a form of boundary testing. This testing method
was used extensively during Year 2000 testing.
Static testing - Testing techniques that involve either manual or automated analysis of application
components and/or processing results without the actual execution of the application. A quality assurance
(QA) review is a form of static testing.
stress testing - A structural testing technique that measures whether the application environment is properly
configured to handle expected, or potentially unexpected, high transaction volumes. Ideally, stress testing
emulates the maximum capacity the application can support before causing a system outage or disruption.
system - Either an application or the underlying operating hardware/software that supports the application
execution (such as the operating system, database management system, teleprocessing system, file
management systems, etc.).
system testing - Involves test cases designed to validate that an application and its supporting
hardware/software components are properly processing business data and transactions. System testing
requires the use of regression testing techniques to validate that business functions are meeting defined
requirements.
Testability - The ability of the software to be easily verified by execution.

Test automation - The use of technology, such as capture/playback and data comparison, to enable test
scripts/cases to be developed and executed (potentially in an unattended or off-hours mode).
test baseline - A known (usually validated) state of application components used as the basis for comparison
for regression testing. Comparison of a component in a present state against its baseline state will identify
variations that may be expected or unexpected, depending on the particular test case.
test bed - The set of test scripts, datastores and other artifacts used to perform functional and
load/performance/stress testing of an application.
test case - Validates one or more criteria to certify that an application is structurally and functionally ready to
be implemented into the business production environment. A test case is usually associated with at least one
business function/requirement that is being validated and requires that specific test data be developed for input
and reference during the test execution. The test case execution may be governed by one or more
capture/playback scripts, test schedules, checklists or data comparison scripts against an established baseline.
test checklist - Used to document specific steps and/or criteria to be validated for a test case. The results of
the test can be measured against the checklist to determine if the test is complete.
test coverage analysis - Involves the measurement of test results to determine the degree of completeness that
has been achieved over a series of test executions against overall test goals and criteria. Coverage metrics may
include the number of statements executed, specific sections of code that have been executed, or logic paths
executed during a specific test. Collection of these metrics over a series of test runs provides the data to
determine the degree of test coverage that has been achieved. Test coverage should not be confused with code
coverage.
test data - Data developed in support of a specific test case. The test data resulting from a test execution may
serve as input to a subsequent test. Test data may be manually generated or extracted from an existing source
such as production data. Recording of user input using capture/playback tools also may be a source of test
data.
test data generation - Processes that are used to create or capture test data. This may be manual or
automated.
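A minimal sketch of automated test data generation using the standard library (the datastore layout is
hypothetical); seeding the random generator keeps the generated data reproducible across test runs:

    import csv
    import random
    import string

    def generate_customers(path, count=100, seed=42):
        # Create a small synthetic customer datastore for use as test data.
        random.seed(seed)
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["id", "name", "credit_limit"])
            for i in range(1, count + 1):
                name = "".join(random.choices(string.ascii_uppercase, k=8))
                writer.writerow([i, name, random.choice([100, 200, 500, 1000])])

    # generate_customers("test_customers.csv")
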
test environment - Usually consists of dedicated hardware, software, library configurations and
tools/facilities that enable an application to be tested independent of the existing production environment. If
possible, the test environment should enable a mirroring of the production environment. Also known as “test
infrastructure.”

test error log - A log created as an output of a test execution. The test error log contains a record of any
problems, failures or errors encountered during the test.
test execution - The actual invocation and running of one or more test cases. A test execution produces test
results, a test execution log and potentially a test error log. Also known as "test run."
test execution log - A log created as an output of a test execution. The test execution log contains a record of
the test process that was invoked. Depending on the degree of detail captured and the level of automation
utilized, the log may contain a detailed test history, screen snapshots and execution statistics.
test execution statistics - Metrics produced as an output of a test execution, indicating the time it took to run
a specific test case, I/O record counts, statement execution counts, code breakpoints that were tested, etc. The
test execution statistics may be stored in a test library or repository for historical analysis.
test history - A record produced as an output of a test execution. The test history shows what aspects of a test
case were executed, the results and potentially statistics/metrics at each step.
test maintenance - Involves the revision/modification of test data or test scripts to retain integrity of the test
case. A test case may need to be modified if one or more application components have been structurally or
functionally redesigned (e.g., a new field is added to a user input screen).
test management - Processes and procedures used to track and manage the development of test cases, their
test scripts, test data and historical results. Test management activities often involve the use of a central test
repository into which the test cases and execution results are registered. Formal check-in and check-out
procedures may be required for test maintenance. Also see test repository.
test matrix - A matrix used to show the interrelationships of all test cases within a test plan.
test plan - Details the activities required to achieve a set of test goals. The test plan provides the framework
for the development of test suites and test cases therein.

test repository - A place used to store and manage test plans, test cases/scripts, test results, test schedules and
test execution statistics. The test repository may be centralized or decentralized, depending on the
configuration of the test environment.

test results - Results produced as an output of a test execution. The test results include actual application
outputs (screens, files, databases, reports, etc.), as well as test history, test error and test execution logs.
test schedule - Establishes a sequence and timeframe for the execution of test cases. If automated, the test
schedule may operate in an unattended mode using timing features of the automated test tool.
test script - Manages the execution of a capture/playback tool by providing the sequence of commands for
providing input and responses to online transaction screens. The test script may contain conditional logic and
import data from external variable files, as well as invoke other test scripts as appropriate within the test case.
test team - Consists of individuals who facilitate the test planning and scripting activities, as well as analyze
the test results. The test team may be part of a formal QA group within the organization. One or more test
specialists, business analysts and end users may participate in the test team.
unified modeling language (UML) - A modeling language for specifying, visualizing, constructing and
documenting the artifacts of a system-intensive process. The language has gained significant industry support
from various organizations via the UML Partners Consortium and has been submitted to and approved by the
Object Management Group (OMG) as a standard.
unit testing - Involves the design of test cases that validate that the internal program logic is functioning
properly, and that program inputs produce valid outputs. All decision branches and internal code flow should
be validated. Unit testing involves the use of debugging technology and testing techniques at an application
component level and is typically the responsibility of the developers, not the QA staff.
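A minimal sketch of a unit test, assuming a hypothetical shipping-cost routine, that validates each decision
branch of the internal program logic:

    import unittest

    def shipping_cost(weight_kg):
        # Hypothetical unit under test with three decision branches.
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        return 5.0 if weight_kg < 10 else 5.0 + 0.5 * (weight_kg - 10)

    class ShippingCostTest(unittest.TestCase):
        def test_light_parcel_branch(self):
            self.assertEqual(shipping_cost(2), 5.0)

        def test_heavy_parcel_branch(self):
            self.assertEqual(shipping_cost(12), 6.0)

        def test_invalid_weight_branch(self):
            with self.assertRaises(ValueError):
                shipping_cost(0)

    if __name__ == "__main__":
        unittest.main()
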
validation test - Involves the execution of a series of functional and structural test cases to verify that an
application or group of applications is fully operational and is ready for implementation into the production
environment. The validation test may include the upgrade of supporting system hardware and/or software.
Also known as “certification test.”
virtual user - A software process that simulates real user interactions with the application under test.
