
Testing Throughout the Software Life-cycle


Testing Throughout the Software Lifecycle


Software Development Models
Test Levels
Test Types - the Targets of Testing
Maintenance Testing

Summary

Software Development Models


The V-Model
Iterative Development Models
Testing within a Lifecycle Model

V-Model
[V-Model diagram: development levels on the left-hand side, test levels on the right]
- User/Business Requirements are verified by the Acceptance Test (planned in the Acceptance Test Plan)
- System Requirements are verified by the System Test (planned in the System Test Plan)
- The Technical Specification is verified by the Integration Test (planned in the Integration Test Plan)
- The Program Specification is verified by the Unit Test (planned in the Unit Test Plan)
- Coding sits at the base of the V

V-Model
The benefits of the V-Model include:
- The testing phases are given the same level of management attention and commitment as the corresponding development phases
- The outputs from the development phases are reviewed by the testing team to ensure their testability
- Verification and validation (and early test design) can be carried out during the development of the software work products
- The early planning and preliminary design of tests provide additional review comments on the outputs of the development phases

V-Model
The levels of development and testing shown in the model vary from project to project. For example, there may be additional test levels, such as System Integration Testing, sitting between System Testing and Acceptance Testing (more on these test levels later). The work products coming out of any one development level may be utilised in one or more test levels. For example, whilst the prime source for Acceptance Testing is the Business Requirements, the System Requirements (e.g. Use Cases) may also be needed to support detailed test design.

Iterative Development Models


Iterative Development
Establish Requirements
Design the System
Build the System
Test the System

- Achieved with small developments: iterations, and increments within iterations
- As increments are developed and tested, the system grows and grows; the need for more testing makes regression testing paramount
- Examples include RAD, RUP and Agile development models

Agile development:
- The aim is to deliver software early and often, with rapid production and time to market
- Can handle (and anticipates) changing requirements throughout all development and test phases

Iterative Development Models


Rapid Application Development
[RAD diagram: repeated short cycles of User Requirements -> Code -> Acceptance Test, each delivering an increment of the system]

Testing within a Lifecycle Model


Characteristics of good testing in any life cycle model:
- A test level exists for every development level
- Each test level has specific objectives
- Test analysis and design for each test level begins during the corresponding development level
- Early and proactive involvement of testers in reviewing development deliverables benefits both parties

Test levels should be adapted to the nature of the project; it may be better to combine test levels, e.g. when testing a COTS product.

Test Levels

Component Testing
Integration Testing
System Testing
Acceptance Testing

Component Testing
[V-Model diagram as before, with the Unit/Component Test level in focus]

Component Testing
Definition

Component - a minimal software item that can be tested in isolation.


Component Testing - the testing of individual software components; sometimes known as Unit Testing, Module Testing or Program Testing
- Components can be tested in isolation; stubs/drivers may be employed
- Test cases are derived from the component specification (module/program spec)
- Covers both functional and non-functional testing
- Usually performed by the developer, with a debugging tool
- Defect fixing is typically quick and informal
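As an illustration only (the order_total function and tax-rate provider below are hypothetical, not taken from the slides), a minimal sketch of a component tested in isolation, with its collaborator replaced by a stub:

```python
import unittest
from unittest.mock import Mock

# Hypothetical component under test: calculates an order total using a
# tax-rate provider collaborator, which the test replaces with a stub.
def order_total(net_amount, tax_provider):
    """Return the gross amount using the rate supplied by tax_provider."""
    rate = tax_provider.rate_for("GB")
    return round(net_amount * (1 + rate), 2)

class OrderTotalComponentTest(unittest.TestCase):
    def test_total_includes_tax_from_stubbed_provider(self):
        # The stub stands in for the real tax service so the component
        # can be tested in isolation.
        stub_provider = Mock()
        stub_provider.rate_for.return_value = 0.20
        self.assertEqual(order_total(100.00, stub_provider), 120.00)

if __name__ == "__main__":
    unittest.main()
```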

Component Testing
Definition
Test-First/Test-Driven approach - create the tests to drive the design and code construction!
Instead of creating a design to tell you how to structure your code, you create a test that defines how a small part of the system should function. Three steps:
1. Design a test that defines how you think a small part of the software should behave (incremental development).
2. Make the test run as easily and quickly as you can. Don't be concerned about the design of the code, just get it to work!
3. Clean up the code. Now that the code is working correctly, take a step back and refactor to remove any duplication or any other problems that were introduced to get the test to run.
- Russell Gold, Thomas Hammell and Tom Snyder, 2005
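A minimal sketch of that cycle, using a hypothetical FizzBuzz example (not from the slides): the test is written first and fails, just enough code is then added to make it pass, and a refactoring step follows with the test re-run after each change.

```python
import unittest

# Step 2: the simplest implementation that makes the test pass.
# (Written only after the test below had been seen to fail.)
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 1: the test that defines how this small part should behave.
class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three_and_five(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
        self.assertEqual(fizzbuzz(5), "Buzz")
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(7), "7")

# Step 3 would be a refactoring pass, re-running the test after each change.
if __name__ == "__main__":
    unittest.main()
```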

Integration Testing
Definition
Component Integration Testing
System Integration Testing

Integration Testing
Definition
Integration Testing - Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems

Components may be code modules, operating systems, hardware and even complete systems
There are two levels of Integration Testing:
- Component Integration Testing
- System Integration Testing

Component Integration Testing


[V-Model diagram as before, with the Integration Test level in focus]

Component Integration Testing


Definition
Component Integration Testing - testing performed to expose defects in the interfaces and interactions between integrated components
- Performed by the test team
- Usually formal (records of test design and execution are kept)
- All individual components should be integration tested prior to system testing

Component Integration Testing


Test Planning

To consider - should the integration testing approach:


- Start from top-level components and work down?
- Start from bottom-level components and work up?
- Use the big bang method?
- Be based on functional groups?
- Start on critical components first?
- Be based on business sequencing?
- Maybe suit System Test needs?

- Knowledge of the system architecture is important
- The greater the scope of the integration approach, the more difficult it is to isolate defects
- Non-functional requirements testing may start here, e.g. early performance measures

Component Integration Testing


Top-down testing
[Diagram: the component under test sits above lower-level components Q, S and T, which are replaced by stubs]

Component Integration Testing


Top-down testing
Pros:
- Provides a limited working system early in the design process
- Depth-first integration demonstrates end-to-end functions early in the development process
- Early detection of design errors through early implementation of the design structure
- Early testing of major control or decision points

Cons:
- Stubs only provide limited simulations of lower-level components and could produce spurious results
- Breadth-first means that higher levels of the system must be artificially forced to generate output for test observations
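A sketch of a stub (hypothetical names, assuming a simple checkout example): the top-level component is exercised while the lower-level pricing component it depends on is replaced by a canned implementation.

```python
# Hypothetical top-level component that depends on a lower-level
# pricing component which is not yet integrated.
class CheckoutService:
    def __init__(self, pricing):
        self.pricing = pricing

    def receipt(self, items):
        total = sum(self.pricing.price_of(item) for item in items)
        return f"{len(items)} items, total {total:.2f}"

# Stub: a deliberately simple stand-in for the real pricing component,
# returning canned values so the top-level logic can be tested first.
class PricingStub:
    def price_of(self, item):
        return 10.00  # fixed canned price

def test_receipt_with_stubbed_pricing():
    service = CheckoutService(PricingStub())
    assert service.receipt(["book", "pen"]) == "2 items, total 20.00"
```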

Component Integration Testing


Bottom-up testing
[Diagram: driver P simulates the higher-level component and exercises components Q and R under test; in the same way, drivers for Q and R exercise the components below them]

Component Integration Testing


Bottom-up testing
Pros:
- Uses drivers instead of upper-level modules to simulate the environment for lower-level modules
- Necessary for critical, low-level system components
- Testing can be observed on the components under test from an early stage

Cons:
- Unavailability of a demonstrable system until late in the development process
- Late detection of system structure errors
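A sketch of a driver (hypothetical names): a small piece of throwaway test code that plays the part of the not-yet-integrated higher-level component and feeds inputs to the low-level component under test.

```python
# Hypothetical low-level component under test.
def parse_record(line):
    """Parse a 'name,quantity' CSV line into a (name, int) pair."""
    name, quantity = line.split(",")
    return name.strip(), int(quantity)

# Driver: stands in for the higher-level batch loader that does not exist
# yet, feeding representative inputs to the component and checking results.
def driver():
    cases = [("widget, 3", ("widget", 3)),
             ("bolt,10", ("bolt", 10))]
    for raw, expected in cases:
        actual = parse_record(raw)
        assert actual == expected, f"{raw!r}: expected {expected}, got {actual}"
    print("parse_record passed all driver checks")

if __name__ == "__main__":
    driver()
```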

Component Integration Testing


Big Bang Approach
[Diagram: Main Menu, Functions 1-3 and Components 1-6 are all integrated in a single step and tested together]

Not usually the preferred approach

Component Integration Testing


Suggested Integration Testing Methodology
The following testing techniques are appropriate for Integration Testing:
- Functional testing, using black-box techniques against the interfacing requirements for the component under test
- Non-functional testing (where appropriate, for example performance or reliability testing of the component interfaces)
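As a sketch only (the currency-conversion components are hypothetical), a black-box component integration test exercises the behaviour across the interface between two already unit-tested components, with no knowledge of their internals:

```python
# Two hypothetical components that have already passed component testing.
class ExchangeRates:
    def rate(self, currency):
        return {"EUR": 1.15, "USD": 1.27}[currency]

class PriceConverter:
    def __init__(self, rates):
        self.rates = rates

    def convert(self, amount_gbp, currency):
        return round(amount_gbp * self.rates.rate(currency), 2)

# Black-box integration test: no knowledge of either component's internals,
# only of the agreed interfacing requirement (GBP amounts converted using
# the published rate, rounded to 2 decimal places).
def test_converter_uses_rates_across_the_interface():
    converter = PriceConverter(ExchangeRates())
    assert converter.convert(10.00, "EUR") == 11.50
    assert converter.convert(10.00, "USD") == 12.70
```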

System Integration Testing


We'll talk about System Integration Testing later. For now, we should stick to the sequence of the test lifecycle.

Which means System Testing is next.

System Testing
Context
Definition
Functional System Testing
Non-Functional System Testing
Good Practices for System Testing

System Testing
[V-Model diagram as before, with the System Test level in focus]

System Testing
Definition

System Testing - the process of testing an integrated system to verify that it meets specified requirements
- Concerned with the behaviour of the whole system, not with the workings of individual components
- Carried out by the test team

Functional System Testing


Definition
Requirements-based functional testing
Business process functional testing

Functional System Testing


A functional requirement is (per IEEE):
- a requirement that specifies a function that a system or system component must perform
A requirement may exist as a text document and/or a model

Functional System Testing


Requirements-based functional testing - Functionality
- Accuracy: provision of right or agreed results or effects
- Interoperability: ability to interact with specified systems
- Compliance: adherence to applicable standards, conventions, regulations or laws
- Auditability: ability to provide adequate and accurate audit data
- Suitability: presence and appropriateness of functions for specified tasks

Functional System Testing


Requirements-based testing

Testing against requirements and specifications; test procedures and cases derived from:
- detailed user requirements
- system requirements
- the functional specification
- user documentation/instructions
- high-level system design
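A sketch of how a test case is derived from a requirement (REQ-042 and its discount rule are hypothetical, invented for illustration): the requirement text drives the test procedure, and each test traces back to it.

```python
# Hypothetical requirement, e.g. "REQ-042: A customer order over 100.00
# (net) shall receive a 5% discount."
def apply_discount(net_total):
    return round(net_total * 0.95, 2) if net_total > 100.00 else net_total

# Requirements-based test cases: each case traces back to REQ-042 and
# exercises the requirement at and around its boundary.
def test_req_042_discount_applied_above_threshold():
    assert apply_discount(200.00) == 190.00

def test_req_042_no_discount_at_or_below_threshold():
    assert apply_discount(100.00) == 100.00
    assert apply_discount(99.99) == 99.99
```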

Functional System Testing


Requirements-based functional testing - techniques
- Start by using the most appropriate black-box testing techniques
- This may be supported with white-box techniques (e.g. menu structures, web page navigation)
- A risk-based approach is important

Functional System Testing


Business Process based testing

Test procedures and cases derived from:
- expected user profiles
- business scenarios
- use cases

Testing should reflect the business environment and processes in which the system will operate; therefore, test cases should be based on real business processes.

Non-functional System Testing


Definition
Non-functional requirements
Non-functional test types

Non-functional System Testing


Definition

testing of those requirements that do not relate to functionality

Non-functional System Testing


Non-functional requirements
Emphasis on non-functional requirements:
- Performance
- Load
- Data volumes
- Storage
- Recovery
- Usability
- Stress
- Security*

* Note that ISTQB treats this as a Functional test. From the syllabus:
Security testing - a type of functional testing that investigates the functions (e.g. a firewall) relating to the detection of threats, such as viruses, from malicious outsiders.

Non-functional System Testing


Non-functional requirements
The non-functional aspects of a system are all the attributes other than business functionality, and are as important as the functional aspects. These include:
- the look and feel and ease of use of the system
- how quickly the system performs
- how much the system can do for the user

It is also about:
- how easy and quick the system is to install
- how robust it is
- how quickly the system can recover from a crash

Non-functional System Testing


Non-functional test types
Reliability - the capability of software to maintain its level of performance under stated conditions for a stated period of time. Is the software product reliable?

Usability - is the software product easy to use, learn and understand from the user's perspective?

Maintainability - the effort needed to make specified modifications. Is the software product easy to maintain?

Efficiency - the relationship between the level of performance of the software and the amount of resources used, under stated conditions. Does the software product use the hardware, system software and other resources efficiently? Is the amount of resources required by the software product during use affordable and acceptable?

Portability - the ability of software to be transferred from one environment to another. Is the software product portable?
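As a simple illustration of a quantifiable non-functional check (the report function and the 0.5-second budget are assumptions, not requirements from the slides), a test can measure response time against an agreed threshold:

```python
import time

# Hypothetical operation whose response time has an agreed budget.
def generate_monthly_report(rows):
    return sorted(rows)  # stand-in for real report generation

def test_report_generation_meets_response_time_budget():
    data = list(range(100_000, 0, -1))
    start = time.perf_counter()
    generate_monthly_report(data)
    elapsed = time.perf_counter() - start
    # Assumed non-functional requirement: complete within 0.5 seconds.
    assert elapsed < 0.5, f"report took {elapsed:.3f}s, budget is 0.5s"
```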

System Testing
Good Practices for System testing
- Implement documented procedures for requirements analysis, control and traceability
- Review deliverables to ensure feasible, testable requirements and associated acceptance criteria
- Trace requirements to the design and the tests which prove the requirement has been met
- Test data, facilities and documentation must be sufficient to demonstrate conformance with requirements
- The test environment must closely mirror the target production system

System Integration Testing


Context
Definition
Objectives
Interfaces to External Systems

System Integration Testing


[V-Model diagram as before, with System Integration Testing shown between the System Test and the Acceptance Test]

System Integration Testing


Definition

System Integration Testing is testing performed between the System and Acceptance phases. The system has already been shown to be functionally correct; what remains to be tested is how the system reacts to other systems and/or organisations.

System Integration Testing


Objectives of Systems Integration Testing
- The objective of System Integration Testing is to provide confidence that the system or application is able to interoperate successfully with other specified software systems, and does not have an adverse effect on other systems that may also be present in the live environment, or vice versa
- The testing tasks performed during System Integration Testing may be combined with System Testing, particularly if the system or application has little or no requirement to interoperate with other systems
- In terms of the V-Model, System Integration Testing corresponds to the Functional and Technical Specification phases of the software development lifecycle

System Integration Testing


Testing Interfaces to External Systems

- Having completed Component Integration Testing and System Testing, one must execute the plan for system-to-system integration
- Infrastructure may need to be transformed in order to feed data to an external system
- Black-box testing techniques are used
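As a sketch only (the JSON order format and field names are assumptions, invented for illustration), one black-box check at this level validates that the message our system sends to an external system matches the agreed interface specification:

```python
import json

# Hypothetical outbound interface: our system exports an order as JSON
# for an external fulfilment system.
def export_order(order_id, items):
    return json.dumps({"orderId": order_id,
                       "lines": [{"sku": sku, "qty": qty} for sku, qty in items]})

# Black-box check against the agreed interface specification:
# required fields are present and shaped as the external system expects.
def test_export_matches_agreed_interface():
    message = json.loads(export_order("ORD-1", [("ABC", 2)]))
    assert set(message) == {"orderId", "lines"}
    assert isinstance(message["orderId"], str)
    assert all({"sku", "qty"} == set(line) for line in message["lines"])
```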

Acceptance Testing
Context

Definition
User Acceptance Testing
Operational Acceptance Testing
Contract/Regulation Acceptance Testing
Alpha and Beta Testing
Other Acceptance Test terms

Acceptance Testing
[V-Model diagram as before, with the Acceptance Test level in focus]

Acceptance Testing
Definition

Acceptance testing: formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

Acceptance Testing
Definition
- Usually the responsibility of the customer/end user, though other stakeholders may be involved
- The customer may subcontract the acceptance test to a third party
- The goal is to establish confidence in the system, part of the system, or specific non-functional characteristics (e.g. performance)
- Usually for ensuring the system is ready for deployment into production
- May also occur at other stages, e.g.:
  - acceptance testing of a COTS product before System Testing commences
  - acceptance testing a component's usability during Component Testing
  - acceptance testing a significant new functional enhancement/middleware release prior to deployment into the System Test environment

Acceptance Testing
User Acceptance Testing (UAT)
- Usually the final stage of validation
- Conducted by, or visible to, the end user and customer
- Testing is based on the defined user requirements
- Often uses the Thread Testing approach:
"A testing technique used to test the business functionality or business logic of the application in an end-to-end manner, in much the same way a user or an operator might interact with the system during its normal use." - Watkins, 2001

This approach is also often used for Functional System Testing - the same threads serve both test activities.
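A sketch of a thread test (the shop application and its steps are hypothetical): a single business thread is exercised end to end, in the order a user would follow it.

```python
# Hypothetical application facade used by the thread test.
class ShopApplication:
    def __init__(self):
        self.baskets, self.orders = {}, []

    def add_to_basket(self, user, item):
        self.baskets.setdefault(user, []).append(item)

    def checkout(self, user):
        order_id = f"ORD-{len(self.orders) + 1}"
        self.orders.append((order_id, self.baskets.pop(user)))
        return order_id

    def order_status(self, order_id):
        return "CONFIRMED" if any(o == order_id for o, _ in self.orders) else "UNKNOWN"

# End-to-end business thread: basket -> checkout -> confirmation,
# in the same order a real user would perform it.
def test_place_order_thread():
    app = ShopApplication()
    app.add_to_basket("alice", "notebook")
    order_id = app.checkout("alice")
    assert app.order_status(order_id) == "CONFIRMED"
```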

Acceptance Testing
User Acceptance Testing

- Often uses a big bang approach
- Black-box testing techniques are most commonly used
- Regression testing ensures changes have not regressed other areas of the system

Acceptance Testing
Operational Acceptance Testing (OAT)

The Acceptance of the system by those who have to administer it. Features covered include:
- testing of backup/restore
- disaster recovery
- user management
- maintenance tasks
- periodic checks of security vulnerabilities

The objective of OAT is to confirm that the Application Under Test (AUT) meets its operational requirements, and to provide confidence that the system works correctly and is usable before it is formally "handed over" to the operations user. OAT is conducted by one or more operations representatives with the assistance of the test team. (Watkins, 2001)

Acceptance Testing
Operational Acceptance Testing (OAT)

- Employs a black-box approach for some activities
- Also employs a Thread Testing approach, with operations representatives performing typical tasks that they would perform during their normal usage of the system
- Also addresses testing of system documentation, such as operations manuals

Acceptance Testing
Contract/Regulation Acceptance Testing

Contract Acceptance Testing - testing against the acceptance criteria defined in the contract
- Final payment to the developer depends on contract acceptance testing being successfully completed
- Acceptance criteria defined at contract time are often imprecise, poorly defined, incomplete and out of step with subsequent changes to the application

Regulation Acceptance testing is performed against any regulations which must be adhered to, such as governmental, legal or safety regulations

Acceptance Testing
Alpha & Beta Testing

- Early testing of a stable product by customers/users
- Feedback is provided by alpha and beta testers
- Alpha tests are performed at the developer's site by the customer
- Beta tests are conducted at the customer's site by the end user/customer
- Published reviews of beta release test results can make or break a product (e.g. PC games)

Acceptance Testing
Other Acceptance test terms

Factory Acceptance Testing (FAT)
Site Acceptance Testing (SAT)

Both address acceptance testing for systems that are tested before and after being moved to a customer's site.

Test Types - the Targets of Testing


Definitions
Functional Testing
Non-Functional Testing
Structural Testing
Confirmation & Regression Testing

Test Types - the Targets of Testing


Definitions
Target for testing - a group of test activities aimed at verifying the software system (or a part of a system) based on a specific reason.
Test type - a group of test activities aimed at testing a component or system regarding one or more interrelated quality attributes. A test type is focused on a specific test objective, e.g.:
- reliability test
- usability test
- structure or architecture of the system/software
- regression test

and may take place on one or more test levels or test phases.

Test Types - the Targets of Testing


Definitions

A model of the software may be developed and/or used in structural and functional testing. For example, in functional testing:
- a process flow model
- a state transition model
- a plain-language specification

and for structural testing:
- a control flow model
- a menu structure model
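A sketch of tests derived from a state transition model (the order states and events below are hypothetical): each valid transition in the model becomes a test, plus a negative test for an invalid event.

```python
# Hypothetical state transition model for an order:
# NEW --pay--> PAID --ship--> SHIPPED
TRANSITIONS = {("NEW", "pay"): "PAID",
               ("PAID", "ship"): "SHIPPED"}

def next_state(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state}")

# Functional tests derived from the model: one per valid transition,
# plus a negative case for an invalid event.
def test_valid_transitions():
    assert next_state("NEW", "pay") == "PAID"
    assert next_state("PAID", "ship") == "SHIPPED"

def test_invalid_event_is_rejected():
    try:
        next_state("NEW", "ship")
        assert False, "expected ValueError"
    except ValueError:
        pass
```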

Functional Testing
Functional testing: testing based on an analysis of the specification of the functionality of a component or system. The specification may be, for example, a requirements specification, use cases or a functional specification, or may even be undocumented.

Function - what the system does

Functional tests are based on functions and features and may be applied at all test levels (e.g. Component Test, System Test, etc.)

Considers the external (not internal) behaviour of the software - black-box testing: what it does rather than how it does it. More on this later!

Non-Functional Testing
Non-functional testing: testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, interoperability, maintainability and portability
May be performed at all test levels (not just non-functional system testing)

Measures the characteristics of the system/software that can be quantified on a varying scale, e.g. performance test scaling

Structural Testing
Structural testing: testing based on an analysis of the internal structure of the component or system
- Also known as white-box testing or glass-box testing
- May be performed at all test levels, but more commonly during Component Testing and Component Integration Testing
- Coverage is measured as a percentage of items tested, i.e. how much of the structure has been tested
- May be based on the system architecture, e.g. a calling hierarchy
- Test tools are needed, e.g. for measuring coverage of statements and decisions in the code
- More on white-box testing and coverage later
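As a sketch of decision coverage (the shipping function is hypothetical): the two tests below together exercise both outcomes of the decision. A coverage tool such as coverage.py can then report the percentage of the structure exercised, e.g. `coverage run --branch -m pytest` followed by `coverage report`.

```python
# Hypothetical component with one decision point.
def shipping_cost(weight_kg):
    if weight_kg > 20:          # decision with two outcomes
        return 15.00            # heavy parcel
    return 5.00                 # standard parcel

# Together these two tests achieve 100% decision (branch) coverage:
# each outcome of the 'if' is taken at least once.
def test_heavy_parcel_branch():
    assert shipping_cost(25) == 15.00

def test_standard_parcel_branch():
    assert shipping_cost(10) == 5.00
```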

Confirmation (Re-Testing) and Regression Testing


Re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions
Regression Testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

Confirmation Testing (Re-Testing)


Whenever a fault is detected and fixed, the software should be re-tested to show that the original fault has been removed. This is known as re-testing. It is important that the test case is repeatable; to support this, the test identifier should be included on the fault report. It is also important that the environment and test data used are as close as possible to those used during the original test.

Regression Testing
If the test is re-run and passes, you cannot necessarily say the fault has been resolved, because you also need to ensure that the modifications have not caused unintended side effects elsewhere and that the modified system still meets its requirements.

Regression testing should be carried out:
- when the system is stable and the system or the environment changes
- when testing bug-fix releases
- as part of the maintenance phase

- It should be applied at all test levels
- It should be considered complete when agreed completion criteria for regression testing have been met
- Regression test suites evolve over time and, given that they are run frequently, are ideal candidates for automation

Selecting suitable tests involves:
- knowledge of the bug fixes and how they affect the system
- understanding the areas that have frequent faults
- understanding which areas of the system have undergone the most recent changes
- understanding the areas of the system which are most critical to the user
- understanding the core features of the system which must function correctly

Regression Testing

The effectiveness of a regression test suite can diminish over time for a number of reasons:
- tests are added for short-term goals but not removed
- tests become redundant due to functionality changes
- the test suite is not updated when major functionality changes are implemented
- execution time becomes prohibitively high
- maintenance of the test suite becomes prohibitively high

This reduction in effectiveness can be countered by:
- maintaining cross-references between system features and their corresponding tests
- monitoring the addition of tests to the suite
- periodic review and removal of redundant tests
- review of the test suite when major enhancements are made to the system
- evaluation of the effectiveness of the test suite using metrics
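Because regression suites are run frequently, they are ideal candidates for automation. One possible pattern (an assumption, not prescribed by the slides) is to tag regression cases so they can be selected and run automatically; the sketch below uses pytest markers and hypothetical system functions.

```python
import pytest

# Hypothetical system functions covered by the regression suite
# (stand-ins so the sketch is self-contained).
def authenticate(user, password):
    return (user, password) == ("alice", "correct-password")

def total_with_vat(net):
    return round(net * 1.20, 2)

# Tests tagged as regression so the suite can be selected and run
# automatically, e.g. with `pytest -m regression`.
# (The marker would normally be registered in pytest.ini to avoid warnings.)
@pytest.mark.regression
def test_login_still_accepts_valid_credentials():
    assert authenticate("alice", "correct-password")

@pytest.mark.regression
def test_vat_calculation_unchanged_by_latest_release():
    assert total_with_vat(10.00) == 12.00
```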

Regression Testing
The probability of making an incorrect change is more than 50%. Much of this is due to overconfidence and ineffective or nonexistent software change testing. We change just a couple of statements and believe we have not affected anything adversely. We execute one case that tests the path that was changed and tell ourselves that the change has been tested. IS IT, THEN, ANY WONDER THAT WE EXPERIENCE SO MANY PROBLEMS?!
- Hetzel, 1998

Maintenance Testing
What is Maintenance Testing?
Objectives of Maintenance Testing
Problems of Maintenance Testing
Concerns of Maintenance Testing
How can we test changes?

Maintenance Testing
What is Maintenance Testing?
Maintenance testing: testing the changes to an operational system, or the impact of a changed environment on an operational system (i.e. testing changes to a live system)
Triggered by, for example:
- Modification: software upgrades, operating system changes, system tuning, emergency fixes
- Retirement of software (may necessitate data archiving tests)
- Migration: system migration (including operational tests of the new environment plus the changed software), database migration

Maintenance Testing
Objectives of Maintenance Testing

- Develop tests to detect problems prior to placing the change into production
- Correct problems identified in the live environment
- Test the completeness of needed training material
- Involve users in the testing of software changes

Maintenance Testing
Problems of Maintenance testing

- Often all that is available is the source code (usually with poor internal documentation and no record of testing)
- Poor or missing specifications
- Program structure, global data structures, system interfaces and performance and/or design constraints are difficult to determine and frequently misinterpreted
- Baselined test plans and/or regression test packs are often not updated

Maintenance Testing
Concerns of Maintenance testing

- Will the testing process be planned?
- Will testing results be recorded?
- Will new faults be introduced into the system?
- Will system problems be detected during testing?
- How much regression testing is feasible?
- Will training be considered?

Maintenance Testing
How can we test changes?

- Maintenance testing involves testing what has been changed (i.e. re-testing)
- Importantly, it also utilises impact analysis as a method for determining what regression testing is required for the whole system
- Traceability of testware to source documents is essential for effective impact analysis (we cover this more in a later topic)
- The scope of maintenance tests is based on a risk assessment, including the size of the change and the size of the system
- Maintenance testing may involve one or more test levels and one or more test types

Testing throughout the Software Lifecycle - Summary


Firstly we looked at Software Development Models:
- The V-Model, identifying the stages of testing, their relationship to the development stages and the types of work products involved
- Iterative development models, as used in RAD, RUP and Agile developments
- The characteristics that make for good testing in ANY lifecycle model
- And that development models must be adapted to the context of project and product characteristics

Next we talked about the different Test Levels:
- Component Testing - testing of individual software components by the development team
- Integration Testing at component level, looking at different approaches such as top-down, bottom-up and big bang
- System Testing:
  - Functional System Testing - requirements-based and business-process-based
  - Non-Functional System Testing - testing the non-functional attributes of a system
- System (level) Integration Testing - testing that a system interoperates with other systems or organisations

Testing throughout the Software Lifecycle - Summary


And still under Test Levels:

Acceptance Testing, comprising:
- User Acceptance Testing
- Operational Acceptance Testing
- Contract and Regulation Testing
- Alpha and Beta Testing

Next we talked about Test Types - a group of testing activities aimed at testing one or more quality attributes of the system, such as:
- Functional Testing
- Non-Functional Testing
- Structural Testing
- Regression Testing
- Re-testing (confirmation)

All test types can be performed at all test levels

Testing throughout the Software Lifecycle - Summary


And finally we talked about Maintenance Testing:
- Testing the changes (software and environmental) to an operational, live system
- The reasons (or triggers) for maintenance testing
- The objectives of maintenance testing
- The problems that can be encountered during maintenance testing
- What we should consider for our maintenance testing approach, such as impact analysis and regression testing
