Summary
V-Model
[V-Model diagram (Test Levels): User/Business Requirements with Acceptance Test Plan → Acceptance Test; System Requirements → System Test; Integration Test Plan → Integration Test; Program Specification → Unit Test; Coding at the base of the V]
V-Model
The benefits of the V-Model include:
The testing phases are given the same level of management attention and commitment as the corresponding development phases
The outputs from the development phases are reviewed by the testing team to ensure their testability
Verification and validation (and early test design) can be carried out during the development of the software work products
The early planning and preliminary design of tests provides additional review comments on the outputs from the development phases
V-Model
The levels of development and testing shown in the model vary from project to project.
For example, there may be additional test levels, such as System Integration Testing, sitting between System Testing and Acceptance Testing (more on these test levels later).
The work products coming out of any one development level may be utilised in one or more test levels. For example, whilst the prime source for Acceptance Testing is the Business Requirements, the System Requirements (e.g. Use Cases) may also be needed to support detailed test design.
Iterative-incremental models: achieved with small developments, with iterations and increments within iterations. As increments are developed and tested, the system grows and grows, so the need for more testing, with regression testing, is paramount. E.g. RAD, RUP and Agile development models.
Agile development:
the aim is to deliver software early and often
rapid production and time to market
can handle (and anticipates) changing requirements throughout all development and test phases
Test levels should be adapted depending on the nature of the project. It may be better to combine test levels, e.g. with COTS testing.
Test Levels
Component Testing
Integration Testing
System Testing
Acceptance Testing
Component Testing
Component Testing
Definition
Component testing: the testing of individual software components in isolation, usually carried out by the development team. Also known as Unit, Module or Program testing.
Test-First/Test-Driven approach: create the tests to drive the design and code construction!
Instead of creating a design to tell you how to structure your code, you create a test that defines how a small part of the system should function. Three steps:
1. Design a test that defines how you think a small part of the software should behave (incremental development).
2. Make the test run as easily and quickly as you can. Don't be concerned about the design of the code, just get it to work!
3. Clean up the code. Now that the code is working correctly, take a step back and re-factor to remove any duplication or any other problems that were introduced to get the test to run.
Russell Gold, Thomas Hammell and Tom Snyder, 2005
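The three steps above can be sketched in code. This is a minimal, hypothetical example (the function `discounted_price` and its behaviour are invented for illustration, not taken from the course material): the test is written first and defines the expected behaviour, then the simplest code that passes is written and refactored.

```python
# Step 1: the test comes first and defines the expected behaviour
# of a small part of the system (a hypothetical price calculator).
def test_discounted_price():
    assert discounted_price(200.0, 10) == 180.0   # 10% off 200
    assert discounted_price(50.0, 0) == 50.0      # no discount

# Steps 2 and 3: the simplest implementation that makes the test
# pass, then cleaned up (input validation factored in on refactor).
def discounted_price(price, discount_pct):
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - discount_pct / 100), 2)

test_discounted_price()   # the test drives, and then verifies, the code
```

In practice such tests would live in a unit-test framework (e.g. Python's unittest or pytest) and be re-run on every change.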
Integration Testing
Definition
Component Integration Testing
System Integration Testing
Integration Testing
Definition
Integration Testing - Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems
Components may be code modules, operating systems, hardware and even complete systems
There are two levels of Integration Testing:
Component Integration Testing System Integration Testing
Integration testing is usually formal (records of test design and execution are kept).
All individual components should be integration tested prior to system testing.
Knowledge of the system architecture is important.
The greater the scope of the integration approach, the more difficult it is to isolate defects.
Non-functional requirements testing may start here, e.g. early performance measures.
Stubs
[Top-Down integration diagram: stubs stand in for lower-level components Q, S and T]
Cons
stubs only provide limited simulations of lower-level components and could produce spurious results
breadth-first means that higher levels of the system must be artificially forced to generate output for test observations
[Bottom-Up integration diagram: P is the driver for components Q and R]
Cons
unavailability of a demonstrable system until late in the development process
late detection of system structure errors
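The stub and driver roles can be illustrated in code. This is a hypothetical sketch (the component names `PaymentGatewayStub` and `OrderProcessor` are invented for illustration): a stub stands in for a lower-level component that is not yet integrated, while a driver is a throwaway harness that exercises a component from above.

```python
class PaymentGatewayStub:
    """Stub: replaces a lower-level component not yet integrated.
    It returns a canned response rather than doing real work, so its
    simulation of the real component is necessarily limited."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class OrderProcessor:
    """Higher-level component under test (top-down integration)."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "approved"

def driver_for_order_processor():
    """Driver (bottom-up integration): a temporary harness that calls
    the component under test and checks its observable behaviour."""
    processor = OrderProcessor(PaymentGatewayStub())
    assert processor.place_order(42.0) is True
    return "ok"
```

Both stubs and drivers are scaffolding: they are discarded (or retired into the regression pack) once the real neighbouring components are available.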
[Integration diagram: Components 1-6]
System Testing
Context
Definition
Functional System Testing
Non-Functional System Testing
Good Practices for System Testing
System Testing
System Testing
Definition
System Testing: the process of testing an integrated system to verify that it meets specified requirements. It is concerned with the behaviour of the whole system, not with the workings of individual components. Usually carried out by the Test Team.
Interoperability - ability to interact with specified systems
Compliance - adherence to applicable standards, conventions, regulations or laws
Auditability - ability to provide adequate and accurate audit data
Suitability
Testing against requirements and specifications. Test procedures and cases are derived from:
detailed user requirements
system requirements
the functional specification
user documentation/instructions
high-level system design
Testing should reflect the business environment and processes in which the system will operate; therefore, test cases should be based on real business processes.
* Note that ISTQB treats this as a Functional test. From the syllabus:
Security testing: a type of functional testing that investigates the functions (e.g. a firewall) relating to the detection of threats, such as viruses, from malicious outsiders.
It is also about:
how easy and quick the system is to install
how robust it is
how quickly the system can recover from a crash
Maintainability: The effort needed to make specified modifications. Is the software product easy to maintain?
Efficiency: The relationship between the level of performance of the software and the amount of resources used, under stated conditions. Does the software product use the hardware, system software and other resources efficiently? Is the number of resources required by the software product during use affordable and acceptable?
Portability: The ability of software to be transferred from one environment to another. Is the software product portable?
System Testing
Good Practices for System testing
implement documented procedures for requirements analysis, control and traceability
review deliverables to ensure feasible, testable requirements and associated acceptance criteria
trace requirements to the design and tests which prove the requirement has been met
test data, facilities and documentation must be sufficient to demonstrate conformance with requirements
the test environment must closely mirror the target production system
System Integration Testing sits between the System and Acceptance test phases. The system has already been shown to be functionally correct; what remains to be tested is how the system reacts to other systems and/or organisations.
Having completed component integration testing and system testing, one must execute the plan for system-to-system integration.
Data or infrastructure may need to be transformed in order to feed an external system.
Black-box testing techniques are used.
Acceptance Testing
Context
Definition
User Acceptance Testing
Operational Acceptance Testing
Contract/Regulation Acceptance Testing
Alpha and Beta Testing
Other Acceptance Test terms
Acceptance Testing
Acceptance Testing
Definition
Acceptance testing: formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorised entity to determine whether or not to accept the system.
Acceptance Testing
Definition
Usually the responsibility of the Customer/End user, though other stakeholders may be involved.
The Customer may subcontract the Acceptance Test to a third party.
The goal is to establish confidence in the system/part-system or in specific non-functional characteristics (e.g. performance).
Usually for ensuring the system is ready for deployment into production.
May also occur at other stages, e.g.:
acceptance testing of a COTS product before System Testing commences
acceptance testing a component's usability during Component Testing
acceptance testing a significant new functional enhancement/middleware release prior to deployment into the System Test environment
Acceptance Testing
User Acceptance Testing (UAT)
Usually the final stage of validation:
conducted by or visible to the end user and customer
testing is based on the defined user requirements
Often uses the Thread Testing approach:
"A testing technique used to test the business functionality or business logic of the application in an end-to-end manner, in much the same way a user or an operator might interact with the system during its normal use." - Watkins 2001
This approach is also often used for Functional System Testing: the same threads serve both test activities.
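A business thread can be sketched as a single end-to-end scenario driven the way a user would drive it. This is a hypothetical example (the `Catalogue` and `Basket` components and their prices are invented for illustration): one thread, "a customer fills a basket and sees the correct total", exercised across several components at once.

```python
class Catalogue:
    """One component in the chain: looks up prices."""
    def price_of(self, item):
        return {"book": 12.0, "pen": 2.0}[item]

class Basket:
    """Another component: accumulates items and totals them."""
    def __init__(self, catalogue):
        self.catalogue, self.items = catalogue, []
    def add(self, item):
        self.items.append(item)
    def total(self):
        return sum(self.catalogue.price_of(i) for i in self.items)

def checkout_thread():
    """One business thread, start to finish, as a user would drive it."""
    basket = Basket(Catalogue())
    basket.add("book")
    basket.add("pen")
    assert basket.total() == 14.0   # the confirmation the user would see
    return "order confirmed"
```

A real thread test would cross system boundaries (UI, services, database); the point here is only the shape: one user-visible scenario, verified end to end rather than component by component.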
Acceptance Testing
Operational Acceptance Testing (OAT)
The Acceptance of the system by those who have to administer it. Features covered include:
testing of backup/restore
disaster recovery
user management
maintenance tasks
periodic checks of security vulnerabilities
The objective of OAT is to confirm that the Application Under Test (AUT) meets its operational requirements, and to provide confidence that the system works correctly and is usable before it is formally "handed over" to the operations user. OAT is conducted by one or more Operations Representatives with the assistance of the Test Team.
Watkins 2001
Acceptance Testing
Contract/Regulation Acceptance Testing
Contract Acceptance Testing - testing against the acceptance criteria defined in the contract
final payment to the developer depends on contract acceptance testing being successfully completed
acceptance criteria defined at contract time are often imprecise, poorly defined, incomplete and out of step with subsequent changes to the application
Regulation Acceptance testing is performed against any regulations which must be adhered to, such as governmental, legal or safety regulations
Acceptance Testing
Alpha & Beta Testing
early testing of a stable product by customers/users
feedback provided by alpha and beta testers
alpha tests are performed at the developer's site by the customer
beta tests are conducted at the customer's site by the end user/customer
published reviews of beta-release test results can make or break a product (e.g. PC games)
Acceptance Testing
Other Acceptance test terms
Other forms of acceptance testing exist, and may take place on one or more test levels or test phases.
A model of the software may be developed and/or used in structural and functional testing. For example, in functional testing:
a process flow model
a state transition model
a plain language specification
Functional Testing
Functional testing: testing based on an analysis of the specification of the functionality of a component or system. The specification may be e.g. a requirements specification, use cases, a functional specification, or may even be undocumented.
Considers the external (not internal) behaviour of the software: Black-Box testing. What it does rather than how it does it. More on this later!
Non-Functional Testing
Non-functional testing: testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, interoperability, maintainability and portability. May be performed at all test levels (not just Non-Functional System Testing).
Measuring the characteristics of the system/software that can be quantified on a varying scale, e.g. performance test scaling.
Structural Testing
Structural testing: testing based on an analysis of the internal structure of the component or system.
Also known as White-Box Testing or Glass-Box Testing.
May be performed at all test levels, but more commonly during Component Test and Component Integration Test.
Coverage is measured as a percentage of items tested, i.e. how much of the structure has been tested.
May be based on the system architecture, e.g. a calling hierarchy.
Test tools are needed, e.g. for measuring coverage of statements and decisions in the code.
More on White-Box testing and Coverage later.
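Statement and decision coverage can be illustrated with a tiny example. This is a hypothetical sketch (the function `classify` is invented for illustration): full decision coverage requires at least one test per outcome of each decision, which here means one test for the True branch and one for the False branch.

```python
def classify(temp):
    """Contains a single decision with two outcomes."""
    if temp >= 100:            # the decision under test
        return "boiling"       # True branch
    return "not boiling"       # False branch

def test_decision_coverage():
    # One test per outcome of the decision -> 100% decision coverage
    # (which here also implies 100% statement coverage).
    assert classify(100) == "boiling"      # exercises the True branch
    assert classify(25) == "not boiling"   # exercises the False branch
    return "both branches exercised"
```

In practice a coverage tool (e.g. coverage.py for Python) would measure which statements and branches the test suite actually executed, rather than relying on inspection.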
Regression Testing
If the test is re-run and passes, you cannot necessarily say the fault has been resolved, because you also need to ensure that the modifications have not caused unintended side effects elsewhere and that the modified system still meets its requirements.
Regression testing should be carried out:
when the system is stable and the system or the environment changes
when testing bug-fix releases
as part of the maintenance phase
It should be applied at all test levels.
It should be considered complete when agreed completion criteria for regression testing have been met.
Regression test suites evolve over time and, given that they are run frequently, are ideal candidates for automation.
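An automated regression suite can be as simple as a table of pinned input/expected pairs that is re-run after every change. This is a hypothetical sketch (the function `normalise` and its cases are invented for illustration): each case captures behaviour verified in an earlier release, so any mismatch after a change signals a regression.

```python
def normalise(name):
    """Function under maintenance; the cases below pin down its
    previously verified behaviour."""
    return " ".join(name.split()).title()

REGRESSION_SUITE = [
    # (input, expected) pairs captured from earlier, passing releases.
    ("  ada   lovelace ", "Ada Lovelace"),
    ("Alan Turing", "Alan Turing"),
]

def run_regression_suite():
    """Re-run every pinned case; any mismatch is a regression."""
    failures = [(inp, exp) for inp, exp in REGRESSION_SUITE
                if normalise(inp) != exp]
    return failures   # an empty list means no regressions detected

assert run_regression_suite() == []
```

Because the suite is cheap to run, it can be executed on every bug-fix release; in a real project the cases would live in a test framework and be maintained alongside the features they cover.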
Regression Testing
The effectiveness of a regression test suite can diminish over time for a number of reasons:
tests are added for short-term goals but not removed
tests become redundant due to functionality changes
the test suite is not updated when major functionality changes are implemented
execution time becomes prohibitively high
maintenance of the test suite becomes prohibitively high
Selecting regression tests requires:
knowledge of the bug fixes and how they affect the system
understanding the areas that have frequent faults
understanding which areas of the system have undergone the most recent changes
understanding the areas of the system which are most critical to the user
understanding the core features of the system which must function correctly
Reduction in effectiveness can be countered by:
maintaining cross-references between system features and their corresponding tests
monitoring the addition of tests to the suite
periodic review and removal of redundant tests
review of the test suite when major enhancements are made to the system
evaluation of the effectiveness of the test suite using metrics
Regression Testing
"The probability of making an incorrect change is more than 50%. Much of this is due to overconfidence and ineffective or nonexistent software change testing. We change just a couple of statements and believe we have not affected anything adversely. We execute one case that tests the path that was changed and tell ourselves that the change has been tested. IS IT, THEN, ANY WONDER THAT WE EXPERIENCE SO MANY PROBLEMS?!"
Hetzel 1998
Maintenance Testing
What is Maintenance Testing?
Objectives of Maintenance Testing
Problems of Maintenance Testing
Concerns of Maintenance Testing
How can we test changes?
Maintenance Testing
What is Maintenance Testing?
Maintenance testing: testing the changes to an operational system, or the impact of a changed environment on an operational system; i.e. testing changes to a live system.
Triggered by, for example:
Modification: software upgrades, operating system changes, system tuning, emergency fixes
Retirement: software retirement (may necessitate data archiving tests)
Migration: system migration (including operational tests of the new environment plus changed software), database migration
Maintenance Testing
Objectives of Maintenance Testing
Develop tests to detect problems prior to placing the change into production
Correct problems identified in the live environment
Test the completeness of needed training material
Involve users in the testing of software changes
Maintenance Testing
Problems of Maintenance testing
Often all that is available is the source code (usually with poor internal documentation and no record of testing).
Specifications are poor or missing.
Program structure, global data structures, system interfaces and performance and/or design constraints are difficult to determine and frequently misinterpreted.
Baselined test plans and/or regression test packs are often not updated.
Maintenance Testing
Concerns of Maintenance testing
Will the testing process be planned?
Will testing results be recorded?
Will new faults be introduced into the system?
Will system problems be detected during testing?
How much regression testing is feasible?
Will training be considered?
Maintenance Testing
How can we test changes?
Maintenance testing involves testing what has been changed (i.e. re-testing).
It also, importantly, utilises Impact Analysis as a method for determining what regression testing is required for the whole system.
Traceability of testware to source documents is essential for effective impact analysis (we cover this more in a later topic).
The scope of maintenance tests is based on a risk assessment, including the size of the change and the size of the system.
Maintenance testing may involve one or more test levels and one or more test types.
Component Testing - testing of individual software components by the development team
Integration Testing at component level - looking at different approaches such as Top-Down, Bottom-Up and Big Bang
System Testing
System (level) Integration Testing - testing that a system interoperates with other systems or organisations
Functional System Testing - requirements and business process based
Non-Functional System Testing - testing the non-functional attributes of a system
A group of testing activities aimed at testing one or more quality attributes of the system, such as: