
Information System Testing


Introduction
Programmers make mistakes! It is not unusual for developers to spend more than 40% of the total project time on testing, and for life-critical software (e.g. flight control, reactor monitoring) testing can cost three to five times as much as all other activities combined. Testing executes programs to establish the presence of system defects. A notorious example of defective testing: the software systems at the new Hong Kong airport did not work properly when it opened in July 1998. Hundreds of scheduled flights were delayed, thousands of pieces of luggage were lost, and millions of dollars in perishable goods were left to rot.

Testing Fundamentals
- Error, Fault and Failures
- Causes of Errors
- Test Oracles
- Psychology of Testing
- Test Cases and Test Criteria

Error, Fault and Failures


An error refers to a discrepancy between a computed, observed or measured value and the true value; the term is also used for a human action that results in software containing a defect or fault. A fault is a condition that causes a system to fail in performing its required function. A failure is the inability of a system or component to perform a required function according to its specification. A failure is produced only when there is a fault in the system. The presence of an error implies that a failure must have occurred, and the observance of a failure implies that a fault must be present in the system. The presence of a fault, however, only implies that the fault has the potential to cause a failure.

Error vs. Failures

An error is a defect or bug, e.g. a linked list whose link field in the last node is never assigned NULL. A failure is a manifestation of an error, e.g. an array is used without initialization and therefore contains garbage; searching for an item may match a garbage value, and the system then fails. The mere presence of an error, however, may not necessarily lead to a failure: a search over the defective linked list may go on indefinitely, so the system keeps running but produces no result. Setting all balances to zero is both an error and a failure.

Causes of Software Errors


- Miscommunication or no communication: as to the specifics of what an application should or should not do (the application's requirements)
- Software complexity: the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development; GUI interfaces, client-server, distributed and web-based applications, and the sheer number of users have all contributed to the exponential growth in system complexity
- Technological advancement: rapid change in hardware and software technology leaves little time to become acquainted with each advance

Causes of Software Errors


- Programming errors: programmers, like anyone else, are prone to human error
- Changing requirements: with many minor or major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating the changes may result in errors
- Time pressure: scheduling software projects is difficult at best, often requiring a lot of guesswork; when deadlines loom and the crunch comes, mistakes will be made

Test Oracles
A test oracle is a mechanism, different from the program itself, that can be used to check the correctness of the program's output for the test cases. Test oracles can be automated or human; automated oracles are generated from the specification of the program or module.
[Diagram: the test cases are fed both to the software under test and to the test oracle; a comparator checks the result of testing against the output expected by the oracle.]
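As a minimal sketch of this comparator arrangement in C (the component, the oracle and the test values are all hypothetical illustrations, not from the slides):

#include <stdio.h>
#include <stdlib.h>

/* Component under test: intended to return |x|. */
static int program_under_test(int x) {
    return x < 0 ? -x : x;
}

/* Automated oracle: an independent implementation of the same spec. */
static int oracle(int x) {
    return abs(x);
}

int main(void) {
    int test_cases[] = { -7, -1, 0, 1, 7 };
    for (int i = 0; i < 5; i++) {
        int in = test_cases[i];
        int got = program_under_test(in);
        int expected = oracle(in);
        /* the comparator: flag any disagreement with the oracle */
        printf("input %3d: %s (got %d, expected %d)\n",
               in, got == expected ? "PASS" : "FAIL", got, expected);
    }
    return 0;
}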

Psychology of Testing

Testing attempts to assess how well tasks have been performed, i.e. expected output vs. the output observed under test. Testing is the process of executing a program with the intent of finding errors. A good test case is one that has a high probability of finding an as-yet-undiscovered error, and a successful test is one that actually uncovers such an error. Testing begins at the module level and works outward towards the integration of the entire computer-based system; different testing techniques are required at different points in time. Testing is conducted by the software developer and, for large projects, by an ITG (Independent Test Group).

Test Cases and Test Criteria


Test data: inputs which have been devised to test the system. Test cases: inputs to test the system, together with the outputs predicted from those inputs at a particular state if the system operates according to its specification. Having proper test cases is essential to successful testing; the goal is to show that a program is incorrect, NOT to prove that it is correct. A test suite is the set of all the test cases with which a given software product is to be tested. One possible ideal set of test cases is one that includes all possible inputs to the program (exhaustive testing), but this is rarely practical, as the number of elements in the input domain can be extremely large. To select test cases we therefore need testing criteria: a test criterion specifies the conditions that must be satisfied by a set of test cases.

Software Testing Strategy


[Diagram: unit-level testing is "testing in the small"; integration-level and system-level testing are "testing in the large".]

Levels of Testing
[Diagram: each development artifact is validated by a corresponding level of testing: user needs by acceptance testing, the requirement specification by system testing, the design by integration testing, and the code by unit testing.]

Unit Testing
Unit testing (or module testing) is the testing of different components in isolation. Testing without execution takes two forms:
- Code walk-through: the code is given to the testing team; each team member selects some test cases and simulates execution of the code by hand to discover any algorithmic or logical errors in the module. It focuses on the discovery of errors, not on how to fix them, and is run as an informal meeting for debugging.
- Code inspection: aims to discover and fix common types of errors caused by oversight and improper programming, and also checks adherence to coding standards.

Requirements of Unit Testing

The following are necessary to accomplish unit testing:
- the procedures belonging to other modules that the unit under test calls
- the non-local data structures that the module accesses
- a procedure to call the functions of the module under test with appropriate parameters
[Diagram: the unit under test is invoked by a caller procedure, invokes called procedures of its own, and accesses non-local data structures.]

Requirements of Unit Testing

The caller module or a called module may not be available while the unit under test is being tested:
- Driver: a dummy for the caller procedure
- Stub: a dummy for a called procedure
Drivers and stubs simulate the behavior of the actual caller and called procedures, as in the sketch below.
[Diagram: a driver feeds test cases to the unit under test and collects the test results; stubs stand in for the called procedures; the unit also accesses its non-local data structures.]
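A minimal sketch of the idea in C; the unit, the stubbed lookup and the driver are all hypothetical names used for illustration:

#include <stdio.h>

/* Stub: a dummy for a called procedure (a rate-lookup module that is
   not yet available); it returns a canned answer instead of doing a
   real lookup. */
static double fetch_rate(int account_type) {
    (void)account_type;
    return 0.25;
}

/* Unit under test: depends on the stubbed called procedure. */
static double compute_interest(double balance, int account_type) {
    return balance * fetch_rate(account_type);
}

/* Driver: a dummy for the caller procedure; it feeds a test case to
   the unit under test and reports the test result. */
int main(void) {
    double got = compute_interest(1000.0, 1);
    printf("%s: got %.2f, expected 250.00\n",
           got == 250.0 ? "PASS" : "FAIL", got);
    return 0;
}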

Integration Testing
Integration testing focuses on the interaction of modules in a subsystem: unit-tested modules are combined to form subsystems, and test cases are designed to exercise the interaction of the modules in different ways. Typical interfacing problems:
- data can be lost across an interface
- one module can have an inadvertent, adverse effect on another
- sub-functions, when combined, may not produce the desired function
- individually acceptable imprecision may be magnified to unacceptable levels
- global data structures can cause problems, and so on

Integration Plan and Testing


A systematic integration plan should be adopted prior to testing. The integration plan specifies the steps and the order in which modules are to be combined to realize the full system. An important factor guiding the integration plan is the module dependency graph obtained during the structured design of the system. After each integration step, the partially integrated system is tested.

Integration Testing Strategies


The following are well-known practices:
- Big-bang integration testing
- Top-down integration testing
- Bottom-up integration testing
- Mixed integration testing
- Smoke testing

Big-bang Integration Testing


The simplest (non-incremental) integration testing approach: all modules making up the system are integrated in one go and the entire program is tested as a whole. Chaos usually results! Errors are very expensive to debug and fix, so the approach is suitable only for very small systems (or a small part of a larger subsystem).

Top-down Integration Testing


An incremental approach to building and testing a system: start with the main module and add modules to it, using either depth-first or breadth-first integration.
[Diagram: a module hierarchy with M1 at layer 0, M2, M3 and M4 at layer 1, and M5, M6 and M7 at layer 2.]

Top-down Integration Testing


This testing strategy requires the use of stubs. Depending on the integration approach (depth- or breadth-first), stubs are replaced one at a time with actual components, and retesting with the actual modules is usually recommended. A logistical problem can arise: replacing low-level stubs at the top level does not ensure upward data flow as the integration continues.

Bottom-up Integration Testing


Construction and testing begin at the lowest level:
1. Low-level modules are combined into clusters (also called builds) that realize higher-level functions
2. A driver is written to coordinate test-case input and output for each cluster
3. The cluster is tested
4. Drivers are removed and clusters are combined, moving upward in the system structure

Bottom-up Integration Testing


[Diagram: drivers D1, D2 and D3 exercise the clusters beneath modules M1, M2 and M3; the drivers are removed as the tested clusters are combined moving upward.]

Bottom-up Integration Testing


Advantages:
- Bottom-up integration conforms with the basic intuition of system building
- Several disjoint subsystems can be tested simultaneously
- No stubs are required; only test drivers are needed
Disadvantage:
- Complexity increases as the number of subsystems increases; the extreme case corresponds to the big-bang approach

Mixed Integration Testing


Also called the sandwich integration testing approach, it combines the top-down and bottom-up integration testing approaches. Using a pure top-down approach, testing can start only after the top-level modules have been coded and unit tested; using the bottom-up approach, testing can start as soon as the bottom-level modules are ready. The mixed approach then moves both upward and downward, performing tests with whatever modules are currently available.
Advantages: the mixed approach overcomes the shortcomings of the top-down and bottom-up approaches and is one of the most commonly adopted testing approaches.

Smoke Testing
Smoke testing is typically a testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort. Example: if the new software is crashing systems every 5 minutes, bogging systems down to a crawl, or corrupting databases, it may not be in a 'sane' enough condition to warrant further testing in its current state. Smoke testing is alternatively termed sanity testing.

Smoke Testing
Important: the smoke test should exercise the entire system from end to end and on a regular basis. It does not have to be exhaustive, but it should be capable of exposing major errors. Advantages:
- Integration risk is minimized
- The quality of the end product is improved, as the test is likely to uncover functional errors as well as architectural and component-level design defects
- Error diagnosis and correction are simplified, as the test is tied to incremental builds (new build, then smoke test, then rebuild, and so on)

System Testing
System tests are designed to validate a fully developed system, to assure that it meets all the requirements specified in the SRS document. System testing can be considered black-box testing: the entire software system is tested, and the focus is on whether the software implements the requirements. It is a validation exercise for the system with respect to the requirements and generally the final testing stage before the software is delivered. It may be done by independent people, with defects removed by the developers, and it is the most time-consuming test phase. Acceptance testing is a subclass of system testing.

Acceptance Testing
Acceptance testing checks whether the system satisfies the functional requirements documented in the SRS and judges the acceptability of the system by the user or customer. It is generally done by end users or the customer, in the customer's environment, with real data. Only after successful acceptance testing is the software deployed; any defects found are removed by the developers. The acceptance test plan is based on the acceptance test criteria in the SRS.

Acceptance Testing
Alpha testing: the test is carried out by a customer, but at the developer's site, with the developer looking over the shoulder of the user and recording errors and usage problems.
Beta testing: the test is conducted at one or more customer sites by the end users of the software. Unlike alpha testing, the developer is generally not present. Customers record all problems and report them to the developer at regular intervals; in response to the test reports, the developer makes modifications and then releases the next beta version.

System Testing
Performance testing: carried out against the different non-functional requirements of the system documented in the SRS. There are several types of performance testing:
- Stress testing
- Volume testing
- Compatibility testing
- Recovery testing
- Security testing

System Testing
Stress testing: tests the behavior of the system against a range of abnormal and invalid input conditions; input data volume, input data rate, memory utilization, etc. are pushed beyond the designed capacity. Example: a system designed for a concurrent environment of 60 users with 20 transactions per second can be tested beyond those thresholds.

System Testing
Volume testing: ensures the validity of the data structures (arrays, stacks, queues, etc.) under extraordinary situations. Example: a compiler might be tested to check whether its symbol table overflows when a very large recursive program is compiled.

System Testing
Compatibility testing: tests whether the system is compatible with other types of systems; essentially, whether the interface functions are able to communicate satisfactorily. Example: checking that a browser is compatible with Unix, Windows, etc.

System Testing
Security testing: tests whether the security mechanisms built into the system will, in fact, protect it from unauthorized access. The tester tries to challenge the security measures of the system. Example: attempting to acquire passwords, writing routines to break down the defenses, or overwhelming the system so that it denies service to others.

Regression Testing
Regression testing is the practice of running an old test suite after each change to the system, or after each bug fix, to ensure that no new bug has been introduced as a result of the change or fix. It cuts across all categories of testing.

Testing Strategies
The goal of test case selection is to select the test cases such that the maximum possible number of faults is detected by the minimum possible number of test cases. Two common strategies are used in software testing:
- Black-box testing (functional testing)
- White-box testing (structural testing)

Black-box Testing
"Black-box" testing strategies have little concern for the internal structure of the program, the only concern is that the box performs as it should. Don't care how program performs these functions. There are two steps in Black box testing: Identify the functions Create test data which will check the functions

Information System Design

38

Test cases in black-box testing


Black-box testing attempts to find errors in the following categories:
- incorrect or missing functions
- interface errors
- errors in data structures or external data access
- behavior or performance errors
- initialization or termination errors
Test cases in black-box testing should be derived using the following methods:
- Exhaustive testing
- Equivalence class partitioning
- Boundary value analysis
- Error guessing

Exhaustive Testing
For the set of all possible inputs, test cases are derived as the permutations of all input values: brute-force testing. It is suitable only for units where the number of inputs, and the domain of each, is small. As the number of input values grows and the number of discrete values for each data item increases, the strategy becomes infeasible: even a unit taking just two 32-bit integers already has 2^64 possible input combinations.

Equivalence Class Partitioning


For each of the inputs and outputs of the method, consider partitions of values; for inputs, also consider the possibility of invalid values (this is especially important for programs that interact with users). For example, suppose an employeeAge value ranges between 16 and 65, with two categories: 16-60 and 61-65. In addition to the valid range (16-65), two invalid partitions arise: values less than 16 and values greater than 65. Choose at least one value from each partition, so there should be at least 4 test data items for this case, as in the sketch below.
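A small sketch of partition-based test selection in C (the validator age_category and the chosen representative values are illustrative assumptions, not part of the slides):

#include <stdio.h>

/* Hypothetical function under test: classifies an employee age.
   Returns 1 for ages 16-60, 2 for ages 61-65, -1 for invalid ages. */
static int age_category(int age) {
    if (age < 16 || age > 65) return -1;
    return age <= 60 ? 1 : 2;
}

int main(void) {
    /* one representative value per equivalence class */
    struct { int age; int expected; } cases[] = {
        { 40, 1 },   /* valid partition 16-60       */
        { 63, 2 },   /* valid partition 61-65       */
        { 10, -1 },  /* invalid partition: below 16 */
        { 70, -1 },  /* invalid partition: above 65 */
    };
    for (int i = 0; i < 4; i++) {
        int got = age_category(cases[i].age);
        printf("age %2d: %s\n", cases[i].age,
               got == cases[i].expected ? "PASS" : "FAIL");
    }
    return 0;
}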

Boundary Value Analysis

Experience indicates that errors are likely to occur at the 'boundaries' of the above partitions; the reason behind such errors may be purely psychological. For example, with the age classes above, values like 15, 16, 17, 60, 61, 64, 65 and 66 should be included in the test data.
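Continuing the illustrative age_category sketch from the previous section, boundary value analysis would add the edge values themselves (the expected categories follow from the assumed specification):

/* boundary values around the 16-60, 61-65 and invalid partitions,
   ready to be fed through the same test loop as before */
struct { int age; int expected; } boundary_cases[] = {
    { 15, -1 }, { 16, 1 }, { 17, 1 },
    { 60, 1 },  { 61, 2 }, { 64, 2 },
    { 65, 2 },  { 66, -1 },
};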

Error Guessing
Experience also indicates that with any numeric input, problems arise when the value is 0, 1 or (if possible) the maximum representable value, e.g. maxInt. Similarly, with any collection input/output, errors arise when the collection's size is 0, 1 or the maximum possible.

White-box Testing
Test cases are derived according to the program structure: knowledge of the program is used to identify additional test cases. White-box testing tries to choose a basis set of independent paths that guarantees each statement and each decision will have been executed at least once.
[Diagram: the test data is derived from the component code; running the tests against the component produces the test outputs.]

White-box Testing
Control-flow-based criteria look at the coverage of the control flow:
- Statement coverage
- Branch coverage
- Condition coverage
- Loop testing
- Basis path testing (McCabe's cyclomatic complexity metric)
Mutation testing looks at various mutants of the program.

Statement Coverage
The statement coverage strategy aims to design test cases so that every statement in a program is executed at least once. It helps expose failures due to illegal memory accesses (e.g. through pointers), wrong result computation, and the like.

Statement Coverage: Example 1

int GCD(int x, int y) {
1   while (x != y) {
2       if (x > y)
3           x = x - y;
4       else y = y - x;
5   }
6   return x;
}

Cover statement 1: [x = 5, y = 4]
Cover statement 3: [x = 5, y = 4]
Cover statement 4: [x = 4, y = 5]

So the test suite is: { [5,4], [4,5] }
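Assembled as a runnable sketch, the suite { [5,4], [4,5] } exercises every numbered statement (both calls compute a GCD of 1):

#include <assert.h>

int GCD(int x, int y) {
    while (x != y) {          /* statement 1 */
        if (x > y)            /* statement 2 */
            x = x - y;        /* statement 3 */
        else
            y = y - x;        /* statement 4 */
    }                         /* statement 5 */
    return x;                 /* statement 6 */
}

int main(void) {
    assert(GCD(5, 4) == 1);   /* first iteration takes the if branch   */
    assert(GCD(4, 5) == 1);   /* first iteration takes the else branch */
    return 0;
}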

Branch Coverage Testing


Criterion: each edge should be traversed at least once during testing, i.e. each decision must evaluate to both true and false during testing. Branch coverage implies statement coverage. If a decision contains multiple conditions, the individual conditions need not all be evaluated to both true and false.

Branch Coverage Testing


int GCD(int x, int y) {
1   while (x != y) {
2       if (x > y)
3           x = x - y;
4       else y = y - x;
5   }
6   return x;
}

Branch 1 (the while decision): [x = 5, y = 3] and [x = 3, y = 5] make it true; [x = 3, y = 3] makes it false.
Branch 2 (the if decision): [x = 5, y = 3] makes it true; [x = 3, y = 5] makes it false.

Condition Coverage Testing


Test cases are designed to make each component of a composite conditional expression assume both true and false values. This is practical only if the number of component conditions is small. It is stronger than branch testing.

Condition Coverage Testing


A condition may govern any control construct:
1. if C then { S1 } else { S2 }
2. while C do { S }
3. do { S } while C
4. switch C: case { S1 }, case { S2 }, ...
where C is a condition built with boolean operators (and/or) and relational operators.

Example 1: C = (x and y) or z. Exercising the three components exhaustively would take 2^3 = 8 test cases. C is true for (x, y, z) = (1,1,1), (0,0,1), (0,1,1), (1,0,1) and (1,1,0), and false for (0,0,0), (0,1,0) and (1,0,0).
Example 2: C = (x and y) and z. C is true only for (1,1,1); e.g. (0,1,0) makes it false. A sketch follows below.
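A quick condition coverage sketch for Example 1 (the predicate function is an illustrative stand-in):

#include <stdio.h>

/* C = (x && y) || z, with each variable a boolean component */
static int predicate(int x, int y, int z) { return (x && y) || z; }

int main(void) {
    /* Two assignments suffice to give every component both truth values:
       (1,1,1) sets x, y and z true; (0,0,0) sets them all false.       */
    printf("C(1,1,1) = %d\n", predicate(1, 1, 1));   /* prints 1 */
    printf("C(0,0,0) = %d\n", predicate(0, 0, 0));   /* prints 0 */
    return 0;
}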

Loop Testing
It focuses exclusively on the validity of loop constructs. Loops fall into the following categories:
- Simple loops
- Concatenated loops
- Nested loops

Loop Testing

Simple loop (with a maximum of n allowed passes) — test for:
- skipping the loop entirely
- only one pass through the loop
- two passes
- m passes, with m < n
- n-1, n and n+1 passes
A sketch of such a test series follows.
[Diagram: a simple loop.]
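As an illustration, a simple loop with bound n can be driven through each of these cases (the function sum_first and the array contents are hypothetical):

#include <stdio.h>

#define N 10   /* maximum number of passes allowed by the design */

/* Hypothetical unit: sums the first k elements of a[0..N-1]. */
static int sum_first(const int a[N], int k) {
    int s = 0;
    for (int i = 0; i < k; i++)   /* the simple loop under test */
        s += a[i];
    return s;
}

int main(void) {
    int a[N] = { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 };
    int passes[] = { 0, 1, 2, 5, N - 1, N };  /* skip, 1, 2, m<n, n-1, n */
    for (int i = 0; i < 6; i++)
        printf("k = %2d -> sum = %d\n", passes[i], sum_first(a, passes[i]));
    /* the n+1 case (k = N + 1) would probe behavior beyond the bound,
       here an out-of-bounds read, so it is noted but not executed */
    return 0;
}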

Loop Testing

Nested loops — test for:
- the innermost loop as a simple loop
- then each enclosing loop in turn, up to the outermost loop, as a simple loop
[Diagram: nested loops.]

Loop Testing

Concatenated loops — test for:
- Loop 1 as a simple loop
- Loop 2 as a simple loop
[Diagram: concatenated loops, Loop 1 followed by Loop 2.]

Basis Path Testing


Path testing requires designing test cases such that all independent paths in the program are executed at least once. Independent paths are defined in terms of the control flow graph (CFG). A control flow graph describes the sequence in which the different instructions of a program get executed: all the statements are numbered, and the numbered statements represent the nodes of the CFG. An edge from one node to another exists if the execution of the statement representing the first node can result in the transfer of control to the other node. A path through a program is a node-and-edge sequence from the starting node to a terminal node of the CFG.

Control Flow Graph


Sequence:
1. a = 5;
2. b = a*2 - 1;
Selection:
1. if (a > b)
2.     c = 3;
3. else c = 5;
4. c = c*c;
Iteration:
1. while (a > b) {
2.     b = b - 1;
3.     b = b*a; }
4. c = a + b;
[Diagram: the corresponding CFGs. The sequence is the chain 1→2; in the selection, node 1 branches to nodes 2 and 3, which both join at node 4; in the iteration, node 1 branches to the body 2→3, which loops back to 1, and exits to node 4.]

Control Flow Graph: Example

int GCD(int x, int y) {
1   while (x != y) {
2       if (x > y)
3           x = x - y;
4       else y = y - x;
5   }
6   return x;
}

[Diagram: the CFG of GCD. Node 1 branches to node 2 (loop entry) and node 6 (exit); node 2 branches to nodes 3 and 4, both of which lead to node 5, which returns to node 1.]

Cyclomatic Complexity Metric (McCabe's Number)

R = E - N + 2, where E = number of edges and N = number of nodes.
R = number of independent paths = number of predicate nodes + 1.

Example: for the CFG sketched below, the number of basis paths is 4:
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
[Diagram: a CFG with nodes 1-11 enclosing four regions R1-R4; node 1 branches to nodes 2 and 11, node 3 branches to nodes 4 and 6, node 6 branches to nodes 7 and 8, and the paths rejoin at node 10 before returning to node 1.]
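As a worked check on the smaller GCD example above (the counts are derived from its CFG, not taken from the slides): the GCD graph has N = 6 nodes and E = 7 edges (1→2, 1→6, 2→3, 2→4, 3→5, 4→5, 5→1), so R = E - N + 2 = 7 - 6 + 2 = 3. Equivalently, there are 2 predicate nodes (the while and the if), giving R = 2 + 1 = 3 independent paths, e.g. 1-6, 1-2-3-5-1-6 and 1-2-4-5-1-6.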

Mutation Testing
Mutation testing is fault-based testing: faults are deliberately injected into a program in order to determine whether or not a set of test inputs can distinguish the original program from the programs with injected faults. It rests on an adequacy criterion: whether a test set is adequate to cover all the injected faults.
- Mutation: a simple change (error) made to the code being tested; each changed version is called a mutant
- Program neighborhood: the original program plus the mutant programs, collectively
- Dead mutant: a mutant is dead if executing the test set against the mutated code distinguishes its behavior or output from the original program's
- Mutation adequacy score: test cases are supplied until all mutants are dead
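For instance, an illustrative mutant of the earlier GCD example (not from the slides):

/* Mutant of GCD: statement 3 changed from x = x - y to x = x + y. */
int GCD_mutant(int x, int y) {
    while (x != y) {
        if (x > y)
            x = x + y;    /* injected fault */
        else
            y = y - x;
    }
    return x;
}
/* The test case [5,4] kills this mutant: the original GCD returns 1,
   whereas here x only grows (5, 9, 13, ...), the loop never exits,
   and the mutant's behavior observably differs from the original.
   By contrast, mutating x > y to x >= y would yield an equivalent
   mutant: inside the loop x != y holds, so the two tests always
   agree and no test case can kill it. */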

Mutation Testing

[Flow diagram: mutants are created from the input test program; the test cases are first run on the original program and, if it tests correct, on each live mutant; mutants whose output differs are marked dead, equivalent mutants are identified by analysis, the program is fixed whenever a test fails, and the process stops when all mutants are dead.]

Debugging
Debugging is a methodical process of finding and reducing the number of bugs, or defects, in a computer program or a piece of electronic hardware, thus making it behave as expected. Debugging happens as a result of testing: when a test case uncovers an error, debugging is the process that leads to the removal of that error. Several debugging approaches have been put forward:
- Brute force
- Backtracking
- Cause elimination

Debugging
Brute force: applied when all else fails, this is the most common and least efficient method for isolating the cause of a software error. A printout of all registers and relevant memory locations is obtained and studied; all dumps should be well documented and retained for possible use on subsequent problems.
Backtracking: a common debugging method that can be used successfully in small programs. Beginning at the site where a symptom has been uncovered, the source code is traced backwards until the error is found. Unfortunately, as the number of source lines increases, the number of potential backward paths may become unmanageably large.
Cause elimination: a list of possible causes of an error is drawn up, and tests are conducted until each cause is eliminated.
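A minimal sketch of the brute-force style, instrumenting suspect state with trace output (the function and values are illustrative):

#include <stdio.h>

/* Intended to sum the first n elements, but carries an off-by-one bug. */
static int buggy_sum(const int *a, int n) {
    int s = 0;
    for (int i = 0; i <= n; i++) {   /* bug: should be i < n */
        s += a[i];
        /* brute force: dump the suspect state at every step */
        fprintf(stderr, "trace: i=%d a[i]=%d s=%d\n", i, a[i], s);
    }
    return s;
}

int main(void) {
    int a[5] = { 1, 2, 3, 4, 100 };
    /* the sum of the first 4 elements should be 10; the trace exposes
       a fifth iteration picking up a[4] and yielding 110 instead     */
    printf("result = %d\n", buggy_sum(a, 4));
    return 0;
}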
