
In this course we will be covering the following modules.

1) Fundamentals of Testing 2) Testing Throughout the Software Life Cycle 3) Test Management 4) Tool Support for Testing 5) Test Automation and Execution.

Module 1 - Fundamentals of Testing

Why is Testing necessary?
Testing is necessary because the existence of faults in software is inevitable. Beyond fault detection, the modern view of testing holds that fault prevention (e.g. early fault detection and removal from requirements, designs, etc. through static tests) is at least as important as detecting faults in the software by executing dynamic tests.
Considerations for the necessity of testing are:
- the software system context
- the causes of software defects

Why Testing is Necessary in the context of the Software System
Software that does not work correctly can lead to many problems. Software systems are an increasing part of life, from business applications to health care systems to consumer products, and most people use software in one form or another: in lifts, washing machines, microwave ovens, mobile phones, air conditioners, vehicles, ATMs, on the internet, and so on. Most people have also had the experience of software not working as expected, and businesses have had to bear heavy losses because of it.
If a company website has a spelling mistake in its text, a potential customer may be put off because the company looks unprofessional. Potential losses due to software not working correctly include loss of money, wastage of time, loss of business reputation, and even injury or death.

Why Testing is Necessary in the context of the Causes of Software Defects

A defect in software can occur because of:
1) Mistake/Error - a human action that produces an incorrect result.
2) Fault - an incorrect step, process, or data definition in a computer program; the outgrowth of the mistake (it potentially leads to a failure).
3) Failure - an incorrect result; the manifestation of the fault, e.g. a system crash, the user getting a wrong message, or a cash point dispensing the wrong amount of money.

Notes
Mistake or Error: a human action producing an incorrect result; an act, assertion, or belief that unintentionally deviates from what is correct, right or true. When programmers make errors, they introduce faults into program code. We usually think of programmers when we mention errors, but any person involved in the development activities can make an error that injects a fault into a deliverable.
Fault: a single error can lead to one or more faults, and one fault can result in a change to one product or multiple changes to multiple products. A fault in software is caused by an unintentional action by someone building a deliverable; human error causes faults in any project deliverable. Faults in software cause the software to fail. Faults may occur in any software deliverable, either when it is first written or when it is being maintained. Software faults are static: once injected into the software, they remain there until exposed by a test and fixed.
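The chain from error to fault to failure can be illustrated with a small code sketch. The function below is hypothetical and exists only for illustration; the programmer's mistake has left a fault in the divisor, which turns into a failure only for inputs that expose it.

```python
def average(values):
    # Fault: the error left an off-by-one in the divisor, dividing by
    # len(values) - 1 instead of len(values).
    return sum(values) / (len(values) - 1)

# The fault is static and stays dormant until an input exposes it.
print(average([2, 4, 6]))   # Failure: prints 6.0 instead of the expected 4.0
print(average([5]))         # Failure: raises ZeroDivisionError (a crash)
```

Note that the same fault shows up as a wrong result for one input and as a crash for another.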

Failure
1) A deviation of the software from its expected delivery or service.
2) Software fails when it behaves in a different way than we expect or require.
3) Software faults cause software failures when the program is executed with a set of inputs that expose the fault.
4) A failure occurs when software does the 'wrong' thing.

Why Testing is Necessary - Errors and how they occur
Errors can be introduced at any stage of the SDLC; we are all prone to making simple human errors:

Typographical errors
Misreading of specs
Calculation errors
Errors in handling/interpreting data
Testing errors
Miscommunication, poor requirements
Working under pressures such as tight deadlines, budget restrictions, conflicting priorities
Complexity of technologies/infrastructure

Role of Testing in Software Development
It reduces the potential loss to the organisation and the customer.
It ensures that a system does what it is supposed to do.
To assess the quality of a system.
To demonstrate to the user that a system conforms to requirements.
To learn what a system does or how it behaves.
A technician's view: to find programming mistakes; to make sure the program doesn't crash the system.
Rigorous testing of systems and documentation can help to reduce the risk.
The highest payback comes from detecting problems early in the life cycle.
It contributes to the quality of the software system. Experience with life cycle testing indicates that, while the testing cost increases, the net cost to develop the software decreases.
It is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics.

What are the Benefits of testing?
The prime benefit of testing is that it results in improved quality: bugs get fixed. We take a destructive attitude towards the program when we test, but to a larger extent our work is constructive; we are beating up the program in the service of making it stronger. Testing demonstrates that the software functions appear to be working according to specification and that the performance requirements appear to have been met.

Essentials of software testing
Testing is a destructive process: a creative destruction. A test that detects a defect is a valuable investment. Testing is a key, integral part of EACH phase of the SDLC. Concentrate on defect prevention.

Definitions of quality:
- A predictable degree of uniformity and dependability at low cost and suited to the market (Dr. W. Edwards Deming)
- Degree to which a set of inherent characteristics fulfils the requirements (ISO 9000)
- Conformance to explicitly stated and agreed functional and non-functional (including performance) requirements.
- Meeting the customer requirement the first time and every time, fit for use for its intended functions, at reasonable cost and within time.

Why quality standards: a standard provides a framework for organizations to define a quality model for a software product.

What is ISO 9126? ISO 9126 is an international standard for the evaluation of software. It is being superseded by the SQuaRE project (ISO 25000:2005), which follows the same general concepts. The ISO 9126 quality standard is divided into four parts, which address, respectively, the following subjects: quality model, internal metrics, external metrics, and quality in use metrics.

Quality model: the software quality model classifies software quality in a structured set of characteristics.
Functionality - a set of attributes that bear on the existence of a set of functions and their specified properties; the functions are those that satisfy stated or implied needs.
Reliability - a set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
Usability - a set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users: learnability, understandability and operability.
Efficiency - a set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions: time behaviour and resource behaviour.
Maintainability - a set of attributes that bear on the effort needed to make specified modifications: stability, analyzability, changeability, testability.
Portability - a set of attributes that bear on the ability of software to be transferred from one environment to another: installability, replaceability and adaptability.

Quality Metrics
- Internal metrics are those which do not rely on software execution (static measures).
- External metrics are applicable to running software.
- Quality in use metrics are only available when the final product is used in real conditions.

Some of the quality standards
SEI - Software Engineering Institute at Carnegie Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
CMM/CMMI - Capability Maturity Model, now called the CMMI (Capability Maturity Model Integration), developed by the SEI.
ISO - International Organization for Standardization.
IEEE - Institute of Electrical and Electronics Engineers.

How much testing?
As time elapses, the number of defects remaining goes on decreasing, but at the same time the cost of testing goes on increasing exponentially. The crossover of these two curves gives the optimum amount of testing: one can stop testing at this point to balance the risk against the cost of testing. Testing is a matter of judging risks against the cost of extra testing effort.

How much Testing is Enough? Stop Testing When?
Deadlines are reached, e.g. release deadlines, testing deadlines
Test cases are completed with a certain percentage passed
The test budget has been depleted
Coverage of code, functionality, or requirements reaches a specified point
The bug rate falls below a certain level; or
The beta or alpha testing period ends.

What is Software Testing?
Testing is a risk reduction activity, aimed at discovering faults; an iterative process; the measurement of quality. It is not sufficient to demonstrate that the software is doing what it is supposed to do; it is more important to demonstrate that the software is not doing what it is not supposed to do.

Difference Between Debugging and Testing
Debugging is a process of line-by-line execution (white-box) of the code/script with the intent of finding errors and fixing the defects; it is the process of diagnosing the precise nature of a known error and then correcting it. Testing is a process of finding the defects from a user perspective (black-box testing). In short: debugging is fixing the identified bugs, testing is locating/identifying bugs.

The major difference is that debugging is conducted by a programmer, and the programmer fixes the errors during the debugging phase. A tester never fixes the errors, but rather finds them and returns them to the programmer.

The seven testing principles are:
Principle 1 - Testing shows presence of defects
Principle 2 - Exhaustive testing is impossible
Principle 3 - Early testing
Principle 4 - Defect clustering
Principle 5 - Pesticide paradox
Principle 6 - Testing is context dependent
Principle 7 - Absence-of-errors fallacy

Principle 1 - Testing shows presence of defects
Testing can show that defects are present, but it cannot prove that there are no defects. Testing includes both verification and validation: verification helps in identifying not only the presence of defects but also their location, while validation helps in identifying the presence of defects, not their location. Testing results in improved quality: bugs get fixed.

Principle 2 - Exhaustive testing is impossible
It is impossible to test a program completely. The number of possible inputs is very large, the number of possible outputs is very large, the number of paths through the software is very large, and the software specification is subjective. You might say that a bug is in the eye of the beholder.

Principle 3 - Early testing
The efficiency gained from early testing is just as applicable to the programming phase as it is to other phases. Early testing finds defects in the initial phases, so the cost of correcting a defect is lower.

Principle 4 - Defect clustering
A small number of modules usually contains most of the defects found during testing. A typical test improvement initiative will initially find more defects as the testing improves; as defect prevention kicks in, the defect count drops. The focus may change from finding coding bugs to looking at the requirements and design documents for defects, and to process improvements, so that we prevent defects in the product.

Principle 5 - Pesticide paradox
If the same tests are repeated over and over, they eventually stop finding new defects, so test cases need to be regularly reviewed and revised. A related measure is test effectiveness: how well the user achieves the goals they set out to achieve using the system. Effectiveness signifies how well the customer requirements have been met, i.e. whether the final product provides the solution to the customer's problem effectively; it is the customer's response on whether the product requirements have been met. It can be calculated as:

Test effectiveness = Defects removed in the phase / (Defects injected + Defects escaped) * 100

Principle 6 - Testing is context dependent
Testing is done differently in different contexts. Different approaches will be used for different types of software.

Principle 7 - Absence-of-errors fallacy
Finding and fixing defects does not help if the system does not meet the customer's requirements or is not fit for use. The system must meet the customer requirement and it should be fit for use; if there is any difference between these two views, it will directly affect the quality.

The fundamental test process consists of the following activities:
Test planning and control
Test analysis and design
Test implementation and execution
Evaluating exit criteria and reporting
Test closure activities
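As a quick worked example of the test effectiveness formula above (all of the defect counts below are hypothetical, chosen only to make the arithmetic concrete):

```python
# Hypothetical counts for one test phase (e.g. system testing).
defects_removed_in_phase = 45   # defects found and fixed during the phase
defects_injected = 50           # defects introduced in or before the phase
defects_escaped = 10            # defects that slipped past the phase

test_effectiveness = defects_removed_in_phase / (defects_injected + defects_escaped) * 100
print(f"Test effectiveness: {test_effectiveness:.1f}%")   # 75.0%
```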

Test plan: it contains:
- test scope
- test objectives
- test team, goals and responsibilities of testing members
- test case design techniques to be used
- defect management tool to be used
- risk assessment and mitigation
- test strategies
- and various test criteria

A test plan template will have the following sections: Test Plan Identifier, Introduction, Test Scope, Test Objectives, Assumptions, Risk Analysis, Test Design, Roles & Responsibilities, Test Schedule & Resources, Test Data Management, Test Environment, Communication Approach, Test Tools.

Test control: test control is the ongoing activity of comparing actual progress against the plan and reporting the status, including deviations from the plan. Test control has the following major tasks:
- Measuring and analyzing results
- Monitoring progress
- Documenting and keeping history
- Making decisions and recording decisions
- Test coverage and exit criteria
- Initiation of corrective actions

Test Analysis and Design
Identify test resources: testing using test cases can be as extensive or as limited a process as desired; when time expires, testing is complete.
Identify conditions to be tested: a testing matrix is recommended as the basis for identifying conditions to test.
Rank test conditions: the objective of ranking is to identify high-priority test conditions that should be tested first. Ranking can be used for two purposes: first, to determine which conditions should be tested first; second, and equally important, to determine the amount of resources allocated to each of the test conditions.
Document test conditions: a test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements [http://en.wikipedia.org].

Both the test situations and the results of testing should be documented.
Conduct test: the executable system should be run, using the test conditions. Depending on the extent of the test, it can be run under a test condition or in a simulated production environment.
Verify and correct: the results of testing should be verified and any necessary corrections to the programs performed. Problems detected as a result of testing can be attributable not only to system defects but to test data defects; the individual conducting the test should be aware of both situations.

Test Analysis and Design
Select conditions for testing: based on the ranking, the conditions to be tested should be selected. At this point, the conditions should be very specific.
Determine correct results of processing: a unique number should identify each test situation, and a log should then be made of the correct results for each test condition.
Create test cases: each test situation needs to be converted into a format suitable for testing.

Test Implementation and Execution
Test execution objectives:
- To execute the appropriate collections of tests required to evaluate product quality
- To capture test results that facilitate ongoing assessment of the product
Execution considerations:
- Execute the tests deemed highest risk first
- Tests on which many other tests depend are executed first
- Test cases central to the architecture
- Test execution across a number of operating systems, browsers, servers, etc.
- Execution: manual or automated?
Execution activities:
- Test platforms; set up the test environment (hardware, software, tools, data)
- Executing the unit tests
- Executing the integration tests
- Executing the system test
Test cycle strategy:
- Number of test cycles
- Set up the system test environment, mirroring the planned production environment as closely as possible
- Establish the test bed
- Identify the test cycles needed to replicate production where batch processing is involved
- Assign test cases to test cycles
- Execute the tests

Evaluating Exit Criteria
Evaluating exit criteria is the activity where test execution is assessed against the defined objectives (e.g. all test cases are executed). This should be done for each test level. Example exit criteria for unit testing: unit testing of the components is complete, and no open defect exists in the unit.
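As an illustration of the unit test level and its exit criteria, here is a minimal pytest sketch. The function under test (apply_discount) and its rules are hypothetical, invented purely for the example:

```python
# test_discount.py
import pytest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    assert apply_discount(200.0, 10) == 180.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percentage_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Running pytest executes the suite; the unit-level exit criteria above would be satisfied once all such tests pass and no defect remains open for the unit.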

Test Closure Activities
The test closure document is a note prepared before you formally complete the testing process. This note contains: the total number of test cases, the number of test cases executed, the number of defects found, the number of defects resolved (fixed), the number of defects rejected, the total number of defects deferred, the defect density, the slippage ratio, the consolidated test result report, etc.

Lessons learned
Successes: this section highlights the successes of the project.
Areas for improvement: this section highlights areas for improvement of the project.
Project Kick-Off (e.g. Project Proposal), Definition of Requirements, System Design, System Build, Testing, Training, Implementation, Rollout, Communication, Project Management.

The Psychology of Testing
The psychology of the persons carrying out the testing will have an impact on the testing process [Myers 1979].
Definitions of software testing and the psychology of the tester:
1. "Software testing is the process to prove that the software works correctly."
2. "Testing is the process to prove that the software doesn't work."
3. "Testing is the process to detect the defects and minimize the risks associated with the residual defects."

Tester mindset
Some years ago, there was a popular notion that testers should be put into black teams. Testers must have a split personality: perhaps you need to be more mature than a developer to see the fault in the software or system from both the angles of development and testing. We must trust the developers, but we doubt the product. Most developers are great people and do their best, and we have to get on with them; we're part of the same team. But when it comes to the product, we distrust and doubt it.

Tester Characteristics
Keen observation
Detective skills
Destructive creativity
Understanding the product as an integration of its parts
Customer-oriented perspective
Cynical but affable attitude
Organized, flexible and patient at the job
Objective and neutral attitude

Test Independence
Independent testing is more effective.
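The defect density and slippage ratio mentioned in the closure note are usually simple ratios. Here is a small worked sketch; all of the figures are hypothetical, and defect density is expressed per KLOC (other size measures work equally well):

```python
# Hypothetical closure-report numbers, for illustration only.
total_defects_found_in_testing = 120
defects_found_after_release = 8          # defects that "slipped" past testing
size_kloc = 40                           # size of the release in KLOC

# Defect density is commonly expressed as defects per KLOC.
defect_density = total_defects_found_in_testing / size_kloc

# Slippage ratio: share of defects that escaped testing and were found later.
slippage_ratio = defects_found_after_release / (
    total_defects_found_in_testing + defects_found_after_release
) * 100

print(f"Defect density: {defect_density:.1f} defects/KLOC")  # 3.0
print(f"Slippage ratio: {slippage_ratio:.1f}%")              # 6.2%
```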

The author should not test their own work.
Levels of independence, by who designs the test cases:
- Designed by the person who writes the software under test (very very low)
- Designed by another person (low)
- Designed by people from another department (medium)
- Designed by people from another organization (high)
- Not chosen by a person (very very high)

Review questions:
1. Test execution activity includes defect management. a. TRUE b. FALSE
2. Rigorous testing of systems and documentation can help to reduce the risk. a. TRUE b. FALSE
3. What is the 5th testing principle? a. Defect clustering b. Pesticide paradox c. Testing is context dependent d. Early testing
4. What is the level of independence where test cases are designed by another person? a. Low b. Medium c. Very very high

Module 2 - Testing Throughout the Software Life Cycle
On completing this lesson, we will be able to discuss:
- Software development models
- Test team structure
- Test types: the targets of testing
- Maintenance testing

The left arm of the V model is the conventional waterfall model of the development phases, and the right arm contains the corresponding testing phases.
The major purpose of acceptance testing is to validate/confirm that the system meets the business requirements and to provide confidence before it is delivered to the end user. This testing involves user representatives and testing team members.
Once the entire system has been developed, it has to be tested against the "System Specification" to validate whether it delivers the features required. In short, system testing is not just about checking the individual parts of the design, but about checking the system as a whole. System testing can involve several specialist types of test to see whether all the functional and non-functional requirements have been met; besides functional requirements, it may include testing of non-functional requirements such as performance, documentation and robustness.
Integration testing tests the high-level design (components with their interfaces), while unit testing tests the low-level design as implemented in the form of code. The development team conducts integration testing in coordination with the dedicated testing team, while unit testing is done by the developers alone.
The V model:
- Integrates development and testing activities.
- Shows development activities on the left-hand side and the corresponding test activities on the right-hand side.
- Testing is not seen as a phase that happens at the end of development: for every stage of development, an equal stage of testing needs to occur. Testing is viewed as an activity throughout the SDLC.
Validation is really concerned with testing the final deliverable (a system or a program) against user needs or requirements. Whether the requirements are formally documented or exist only as user expectations, validation activities aim to demonstrate that the software product meets these requirements and needs; typically, the end-user requirements are used as the baseline. An acceptance test is the most obvious validation activity. Validation asks the question, 'Did we build the right system?'. Where the focus is entirely on verification, it is possible to successfully build the wrong system for the users. Both verification and validation activities are necessary for a successful software product.

The V&V model explains the activities that are carried out during verification and validation. Basically, verification methods are static, i.e. the code is not executed. Verification methods are: reviews, inspections, code walkthroughs, etc.. As seen on the left side of the VV model Validation activities include, executing the code, in unit testing, integration testing, system testing & acceptance testing, as seen on the right hand side of the V-V model. Advantages of the V-V -Model Testing is carried out in parallel to the development at all phases. Closer coordination can exist between development and testing teams. Testing is done in phases. Early detection of defects. All the major development phases with corresponding testing phases ensure proper planning and a higher quality end-product. Closer coordination, reduces, the problems that occur between the development and test teams. Both have greater visibility into each other. A phase will have a well-defined list of tasks and deliverables. Focus of activities is spread phase wise increasing the quality of the deliverables. Early detection of defects at each stage helps to minimize costs. Disadvantages of the V-V -Model In practice there could be an overlap between phases during development. Phases appropriate for a specific development project could be too long or too short for the corresponding testing activities. When there is an overlap between phases during development, the complexity of the model might increase. Too Long or too short testing activities could result in phase mismatch or idle time. Iterative Development Model A few examples of the Iterative Development Model are: Incremental model

- Spiral
- RAD
- Rational Unified Process
- Agile development

Advantages of the iterative development models are:
- Simple and easy to use.
- Each phase has specific deliverables.
- Higher chance of success than the waterfall model, due to development of test plans early in the life cycle.
- Works well for small projects where requirements are easily understood.
Disadvantages of the iterative development models are:
- Very rigid, like the waterfall model.
- Very little flexibility; adjusting scope is difficult and expensive.
- Software is developed during the implementation phase, so no early prototypes of the software are produced.
- The model doesn't provide a clear path for problems found during the testing phases.

Multiple development cycles take place here, making the life cycle a multi-waterfall cycle. Cycles are divided into smaller, more easily managed iterations, and each iteration passes through the requirements, design, implementation and testing phases. A working version of the software is produced during the first iteration, so you have working software early in the software life cycle. Subsequent iterations build on the initial software produced during the first iteration.

The incremental model is a method of software development where the model is designed, implemented and tested incrementally (a little more is added each time) until the product is finished. (As per Wikipedia)

Advantages
Generates working software quickly and early during the software life cycle.

More flexible; less costly to change scope and requirements.
Easier to test and debug during a smaller iteration.
Easier to manage risk because risky pieces are identified and handled during their iteration.
Each iteration is an easily managed milestone.
Disadvantages
Each phase of an iteration is rigid and the phases do not overlap each other.
Problems may arise pertaining to the system architecture because not all requirements are gathered up front for the entire software life cycle.

Incremental Model: the incremental model is an intuitive approach to the waterfall model. Multiple development cycles take place here, making the life cycle a multi-waterfall cycle. Cycles are divided into smaller, more easily managed iterations, and each iteration passes through the requirements, design, implementation and testing phases. A working version of the software is produced during the first iteration, so you have working software early in the software life cycle. Subsequent iterations build on the initial software produced during the first iteration.

A software project repeatedly passes through these phases in iterations (called spirals in this model). The baseline spiral starts in the planning phase, where requirements are gathered and risk is assessed; each subsequent spiral builds on the baseline spiral. Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risks and alternate solutions for them; a prototype is produced at the end of the risk analysis phase.

Spiral model: the spiral model of development acknowledges the need for continuous change to systems as business change proceeds. Large developments never hit the target 100% the first time round (if ever). The spiral model regards the initial development of a system as simply the first lap around a circuit of development stages; development never stops, in that a continuous series of projects refines and enhances the system continuously.

Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.

Rapid Application Development, also called RAD. In the past, typically 80% of a project's budget was spent on 20% of the functionality, and perhaps that functionality wasn't so important. So the idea with RAD is that we try to spend 20% of the money, get 80% of the valuable functionality, and leave it at that. One starts the project with the specific aim of achieving maximum business benefit with a minimal delivery. This is achieved by time-boxing: limiting the amount of time that will be spent on any phase, and cutting down on documentation that, in theory, isn't going to be useful because it is always obsolete. In a way, RAD is a reaction to the waterfall model, as the waterfall model commits a project to spending much of its budget on activities that do not enhance the customer's perceived value for money.

Problems addressed by RAD
With conventional methods, there is a huge delay before the customer gets to see any results, and nothing is achieved until 100% of the process is completed and 100% of the software is delivered.
Important RAD constraints/limitations
- "Fitness for a business purpose"
- All constituencies that can impact application requirements must have representation in the development team throughout the process.
- Informal deliverables

In the Rational Unified Process the overall project is broken down into phases and iterations. Iterations are risk driven and oriented toward mitigating those risks. Each iteration should deliver executable software that is demonstrable and testable against the project requirements. The team decides on the priorities and starts with the most important functionality. The feedback from this can then be used to refine the requirements and identify problems before moving on to the next step. This process is repeated until eventually all requirements are implemented and the product is complete.

The RUP life cycle consists of 4 phases and 9 disciplines. The 4 phases are Inception, Elaboration, Construction and Transition. The phases determine the important milestones of a project; for example, Elaboration delivers the architectural baseline and thus resolves the major technical risks.

In the RUP phase/discipline chart, the thickness of the bars indicates the amount of effort invested in each discipline; for example, more effort is invested in Analysis and Design during the Elaboration phase.

Agile software development
Agile software development primarily focuses on an iterative method of development and delivery.

A working piece of software is delivered in a short span of time, and based on the feedback more features and capabilities are added. The delivery timelines are short, and the new code is built on the previous code. The developers and end users communicate closely while the software is built. The focus is on satisfying the customer by delivering working software quickly with minimum features and then improving on it based on the feedback; the customer is thus closely involved in the software design and development process.

What Does Testing Involve?
Testing = Verification + Validation
Verification: building the product right. Validation: building the right product.
Testing is a broad and continuous activity throughout the software life cycle, and an information-gathering activity to enable the evaluation of our work, e.g.:
- Does it meet the user's requirements?
- What are the limitations?
- What are the risks of releasing it?

Testing involves both verification and validation.
Verification is:
1. The process of confirming whether the software meets its specification.
2. The process of reviewing/inspecting deliverables throughout the life cycle.
3. The process of determining whether each phase has been completed correctly.
4. Also the process of examining the product to discover its defects.
5. Inspections, walkthroughs and reviews are some examples of verification techniques.
6. Usually performed by static testing: inspecting without executing on a computer.
7. Verification finds errors much earlier, in the requirements and design phases, and hence reduces the cost of fixing them.
Validation is:
1. The process of confirming whether the software meets the user requirements.
2. The process of executing something to see how it behaves.
3. Usually performed by dynamic testing: testing with execution on a computer.

4. Unit, integration, system and acceptance testing are some examples of validation testing.
5. Various manual and automated test tools are available, such as MS Excel, WinRunner, QTP, etc.
6. It finds errors only after the code is ready, hence the cost of fixing is higher than at the verification stage.

Early identification of defects and prevention of defect migration are key goals of the testing process. Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a commercial off-the-shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g. integration to the infrastructure and other systems, or system deployment) followed by acceptance testing (functional and/or non-functional, and user and/or operational testing).

Summary:

1. Prevention is better than cure: testing should start early, both in terms of early testing and planning for future testing.
2. Planning is crucial, given the time-limited nature of the testing activity: planning should be, as far as possible, integrated within your design notations and documentation.

Test Team Structure: the test team structure typically contains the following roles:
Test Manager
Test Lead
Test Engineer

The Test Manager:
- Is in charge of project management
- Is the client interface from a business perspective
The Test Lead:
- Is in charge of preparing the test strategy
- Is the client interface at a technical level
- Is in charge of project tracking, defect management and people management (work allocation, defining objectives for test engineers)
The Test Engineer is involved in:
- Test case preparation
- Preparing the test environment
- Test case execution
- Defect reporting

Why Testing Types?
Test types are introduced as a means of clearly defining the objective of a certain test level for programs or projects. Testing the functionality of the component or system may not be sufficient at each level to meet the overall test objectives, so testing is focused on a specific test objective; depending on its objective, testing will be organized differently. For example, component testing aimed at performance would be quite different from component testing aimed at achieving decision coverage.
There can be many targets for testing. Broadly speaking, testing targets are classified as:
- Testing of functions
- Non-functional testing
- Structural testing
- Testing related to changes

These targets of testing can be achieved by a variety of testing types. Test types are introduced as a means of clearly defining the objective of a certain test level for a program or a project, and selecting the appropriate type of test makes communication and decision-making against a test objective easier.

Testing of Functions (Functional Testing)

Functional tests are based on the functions described in documents or understood by the testers, and may be performed at all test levels; e.g. a test for a component may be based on the component specification. Testing functionality can be done from two perspectives: a requirements-based perspective and a business-process-based perspective.
Functional testing is testing the function of a system (or component), i.e. what it does. It can be based on ISO 9126, which focuses on suitability, interoperability, security and compliance. Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system; experience-based techniques can also be used. Functional testing considers the external behaviour of the software (black-box testing): test conditions and test cases are derived from the functionality of the component or system.
The simplest way to look at functional testing is that users will normally write down what they want the system to do, what features they want to see, and what behaviour they expect to see in the software; these are the functional requirements. The key to functional testing is to have a document stating these things. Once we know what the system should do, we have to execute tests that demonstrate that the system does what it says in the specification. Within system testing, fault detection and the process of looking for faults is a major part of the test activities; it is less about being confident and more about making sure that the bugs are gone.

Non-Functional Testing (Testing of Software Product Characteristics)
A second target for testing is the testing of quality characteristics, or non-functional attributes, of the system: testing something that we need to measure on a scale of measurement, for example time to respond. Non-functional testing includes performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is testing of how well the system works or performs.
Non-functional testing is more concerned with what we might call technical requirements, like performance, usability, security and other associated issues. These are things that users very often don't document well: it is usual to see a functional requirements document containing hundreds of pages and a non-functional requirements document of one page, so requirements are often a real problem for non-functional testing. Another way of looking at non-functional testing is to focus on how the system delivers the specified functionality: functional testing is about what the system must do, non-functional testing is about how it delivers that service. Is it fast? Is it secure? Is it usable? That is the non-functional side of testing.

Security testing
Planning the test can effectively be performed by using a penetration point matrix: one dimension has the potential perpetrators, the other dimension has the potential points of penetration, and each intersection holds the probability of penetration. The steps are:
- Identify potential perpetrators
- Identify potential points of penetration
- Create a penetration matrix
- Identify high-risk points of penetration
- Execute the tests

Usability testing
Checks for human-factor problems:

- Are outputs meaningful?
- Are error diagnostics straightforward?
- Does the GUI have conformity of syntax, conventions, format, style and abbreviations?
- Is it easy to use?
- Is there an exit option in all choices?
The application should not annoy the intended user in function or speed, or take control from the user without indicating when it will be returned. It should provide on-line help or a user manual and be consistent in its function and overall design.
Usability testing is performed to check the ease of use of an application: to determine how simple it is to understand the application's usage and how easy it is to execute an application process. It can be done through direct observation of people using the system, usability surveys, and beta tests.

Performance testing
Performance testing is done to determine whether the system meets its performance requirements, e.g. x transactions should be processed in y seconds, or data should be retrieved in z seconds with, say, 100 concurrent users. It can also be called compliance testing with respect to performance; the design should address performance issues.

Stress testing
Stress testing checks whether the system continues to function when subjected to large volumes (larger than expected). The system should run as it would in the production environment. It should be performed when there is uncertainty about the volume of work the system can handle.

Testing of Software Structure / Architecture (Structural Testing)
A third target of testing is the structure of the system or component. Structural testing assumes that the procedural design is known to us, and test cases are designed to exercise the internal logic of the program. Disadvantages: it does not ensure that the user's requirements are met, and it does not establish whether the decisions/conditions/paths/statements themselves are insufficient.

Testing Related to Changes
Confirmation testing/retesting is the process of re-executing all the failed test cases to check whether the development team has really fixed the defect or not.
Regression testing is the re-execution of some or all of the test cases to check that 1) any addition to the software, 2) any deletion from the software, or 3) any update or defect fix does not affect the functionality of the unchanged modules.
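To make the performance-testing idea above concrete, here is a minimal sketch of an automated response-time check. The endpoint URL and the 2-second threshold are hypothetical, standing in for a requirement of the form "a request must respond within y seconds":

```python
import time
import urllib.request

REQUIREMENT_SECONDS = 2.0
URL = "http://localhost:8080/search?q=laptop"   # hypothetical endpoint under test

start = time.perf_counter()
urllib.request.urlopen(URL, timeout=10)          # issue one request
elapsed = time.perf_counter() - start

print(f"Response time: {elapsed:.2f}s")
assert elapsed <= REQUIREMENT_SECONDS, (
    f"Performance requirement not met: {elapsed:.2f}s > {REQUIREMENT_SECONDS}s"
)
```

A real performance or load test would repeat this for many concurrent virtual users (usually with a dedicated tool) and report percentiles rather than a single measurement.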

Globalization / Localization Testing
Globalization testing checks the proper functionality of the product with any culture/locale setting, using every type of international input possible. The localization process also includes translating any help content associated with the application; most localization teams use specialized tools that aid the process by recycling translations of recurring text and resizing application UI elements to accommodate localized text and graphics. Localization testing checks the quality of a product's localization for a particular target culture/locale.

Benchmarking
Benchmarking involves activities that compare an organization's product with a similar market-leading one. The comparison is based on factors such as performance parameters, user-friendliness and richness of features.

Maintenance Testing
Why maintain software? Software is a model of reality: as the reality changes, the software must either adapt or die. There are pressures from satisfied users to extend the functionality of the product, and software is much easier to change than hardware, so changes are often made to the software whenever possible. Successful software survives well beyond the lifetime of the environment for which it was written.
Types of maintenance:
- Corrective maintenance is maintenance performed to correct faults in hardware or software [IEEE 1990]; it involves fixing faults.
- Adaptive maintenance is software maintenance performed to make a computer program usable in a changed environment [IEEE 1990]; it is focused on environmental changes.
- Perfective maintenance is software maintenance performed to improve the performance, maintainability, or other attributes of a computer program [IEEE 1990].
- Preventive maintenance is maintenance performed for the purpose of preventing problems before they occur [IEEE 1990].
Determining how the existing system may be affected by changes is called impact analysis, and it is used to help decide how much regression testing to do.

Lesson 3 - Test Management
After completing this lesson you will be able to discuss:
- Test organization
- Test planning and estimation
- Test progress monitoring and control
- Configuration management
- Risk and testing
- Incident management

Under Test Organization we shall be talking about: independent and integrated testing, working as a test leader, working as a tester, and defining the skills test staff need.

Independent testing: a team of testers who are independent of and outside the development team, but reporting to project management.
Benefits of independent testing: the team can often see more, other and different defects than a tester working within the programming team; it brings a different set of assumptions to testing and reviews, and a sceptical attitude of professional pessimism; it has a separate budget, so a proper level of money is spent on tester training and testing tools; it provides isolation from the development team.
Limitations of independent testing: it can lead to communication problems, feelings of alienation and antipathy, a lack of identification with the project, and diminished support for the project; developers may lose a sense of responsibility for quality.
Integrated testing: testers working alongside the programmers, within the same project and reporting to the development manager.

Working as a Test Leader
The test leader is involved in the planning, monitoring and control of the testing activities. Test leaders, in collaboration with the stakeholders, set up the test objectives, test policies, test strategies and test plan. They make sure the test environment is put into place before test execution, and they recognize when test automation is appropriate. They lead, guide and monitor the analysis, design, implementation and execution of the test cases and test suites, and they ensure proper configuration management of the testware produced and traceability of the tests.

Working as a Tester
Testers review and contribute to test plans.

Testers analyse, review and assess requirements and design specifications. They may be involved in identifying test conditions and creating test designs, test cases and test data. They may automate or help to automate tests, and they often set up the test environment. Testers execute and log the tests.

Defining the Skills Test Staff Need
Application or business domain knowledge: in order to spot improper behaviour while testing and recognize the must-work functions and features.
Technology knowledge: in order to effectively and efficiently locate problems and recognize likely-to-fail functions and features.

The essential activities required to manage the test project will be covered here: test planning, exit criteria, test estimation, the test plan document, test approaches, the test strategy document template, and test scheduling.

A test plan talks about:
- Test scope: features to be tested, features not to be tested
- Test objectives
- Test team, goals and responsibilities of testing members
- Test case design techniques to be used
- Defect management tool to be used
- Risk assessment and mitigation
- Test strategies
- Installation (optional)
- Test entry criteria, test suspension criteria, and test exit criteria (also called acceptance criteria); these will have statements such as "the test plan is complete", "all high-severity defects are fixed and retested", and "regression tests are complete"
- Types of metrics to be collected and the report format

Exit criteria (also called acceptance criteria): this contains the criteria for test completion. The typical factors considered are acquisition and supply, test items, defects, and tests. A few other factors for exit criteria are coverage, quality, money, and risk. A successful project is a balance of quality, budget, schedule and feature considerations.

Test Estimation
Why is estimation necessary?

30% of projects never complete; 100-200% cost overruns are not uncommon; the average project exceeds its cost by 90% and its schedule by 120%; 15% of large projects never deliver anything; only 16.2% of projects are successful.
Sell the estimate to management on a dollars-and-cents basis. Adjust the estimated schedule and budget to fit project constraints without undermining accuracy or unduly increasing risk. Test estimation is essentially estimating the testing work you have to do; it also involves making refinements and selling your estimate.

Techniques for Test Estimation
Expert-based estimation involves consulting the people who will do the work and other people with expertise on the task to be done. This technique is often called bottom-up estimation because you start at the lowest level of the hierarchical breakdown in the work breakdown structure.
Metrics-based estimation involves analysing metrics from past projects and from industry data, e.g. looking at the average effort per test case in similar past projects and using the estimated number of test cases to estimate the total effort (a worked sketch appears below, after the list of test approaches). The testers-to-developers ratio is an example of a top-down estimation technique, in that the entire estimate is derived at project level.

Test Approaches or Strategies
Analytical: the risk-based strategy involves performing a risk analysis using project documents and stakeholder input; another analytical test strategy is the requirements-based strategy. Analytical test strategies have in common the usage of some formal or informal analytical technique.
Model-based: model-based strategies have in common the creation of some formal or informal model for critical system behaviours.
Methodical: methodically design, implement and execute tests following an outline; methodical test strategies have in common adherence to a pre-planned, systematized approach that has been developed in-house.
Process- or standard-compliant: these strategies have in common reliance upon an externally developed approach to testing.
Dynamic: dynamic strategies, such as exploratory testing, have in common a concentration on finding as many defects as possible during test execution.
Consultative or directed: these strategies have in common reliance on a group of non-testers to guide or perform the testing effort.
Regression-averse: you might automate all the tests of system functionality so that, whenever anything changes, you can re-run every test to ensure nothing has broken.
Which approach to select depends on factors such as risks, skills, objectives, regulations, the product, and the business.
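Here is a small metrics-based estimation sketch in the spirit described above. All of the figures (test case count, effort per case, cycles, overhead) are hypothetical and would normally come from your own historical project data:

```python
# Metrics-based test estimation with illustrative figures.
test_cases_planned = 400
avg_hours_per_test_case = 1.5      # from similar past projects: design + execute + report
regression_cycles = 2              # planned full execution cycles
overhead_factor = 1.2              # environment setup, defect retests, reporting

execution_effort = test_cases_planned * avg_hours_per_test_case * regression_cycles
total_effort_hours = execution_effort * overhead_factor

print(f"Estimated test effort: {total_effort_hours:.0f} person-hours")   # 1440
```

An expert-based (bottom-up) estimate for the same work would instead sum the task-level estimates given by the people who will do each task; the two figures can then be compared as a sanity check.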

The Wipro test plan template contains the following sections. It starts with information on the project (project code, project manager, email ID, authors), followed by the revision history, which needs to be filled in after each review.
1) TEST PLAN ID
2) INTRODUCTION
3) TEST SCOPE
3.1) FEATURES TO BE TESTED
3.2) FEATURES NOT TO BE TESTED
4) TEST OBJECTIVES
5) ASSUMPTIONS
6) RISK ANALYSIS
7) TEST DESIGN
8) ROLES AND RESPONSIBILITIES
9) TEST SCHEDULE AND PLAN RESOURCES
10) TEST DATA MANAGEMENT
11) TEST ENVIRONMENT
12) COMMUNICATION APPROACH
13) TEST TOOLS
14) ACRONYMS AND GLOSSARY
15) ASSOCIATED DOCUMENT

The test case template includes a few details about the project. Its main sections are: Feature/Functionality ID, Test Case ID, Step No of the test case, Expected/Actual Behaviour, Status of the test case (Pass/Fail), and Defect ID in case the test case has failed.

The Defect Template will include a few details about the project . The main sections of the test case template are Defect ID, Test Case ID for which the Defect has been raised, Description of the defect, Expected/Actual Result, Status of the defect, Severity/Priority of the defect, Type of defect ex. Whether it is an environment/code defect, Assigned to, Defect Close Date The following will be discussed: Traceability Test Progress Monitoring Test Reporting Test Control Requirement ID Requirement Test case ID Mapped Status (Pass/Fail/Not Completed/Not executed) R001 On successful Authentication the user must be allowed to view screen he/she has privileges TC_001 Not Executed TC_002 Not Executed

TC_003 Not Executed Let us take a look at a generic requirement example for any secure application. Let us assume that the requirement merely states on successful user id authentication, the user must be allowed to view screen he/she has privileges for. Let us assume that three test cases have been written to test this functionality; one test cases tests a valid login scenario, another one tests the invalid login scenario and the third one tests role-based privileges. Your traceability matrix would typically look like the one shown here. Test Progress Monitoring : Purpose: Give feedback about test activities to guide and improve the testing and the project. Measure the status of testing, test coverage and test items against the exit criteria to determine whether test work is done. Information to be monitored is collected Manually For small project, test leader can gather test progress monitoring information using documents, spreadsheets, and simple database. Automated Working with large teams, distributed projects and long-term test efforts, we find that efficiency and consistency of data collection is aided by the use of automated tools. Common metrics which need to be collected throughout the testing process: The extent of completion of test environment preparation. The extent of test coverage achieved, measured against requirements, risk, code, configurations. Percentage of work done in test case preparation. Percentage of work done in test environment. Test case execution. The economy of testing. Defect reporting. Test coverage of requirements. Dates of milestones. Testing costs etc Test Reporting Reporting test status is about effectively communicating our findings to other project stakeholders. The needs and goals of the project, regulatory requirement,s time and money constraints and limitations of tools available for test status report. Metrics should be collected during and at the end of testing process to assess Adequacy of test approaches taken. Adequacy of test objective for that level of testing. Effectiveness of testing with respect to objectives of that test level. Test Control Describes guiding and corrective actions taken after collecting and analyzing the information and metrics. Test control actions may cover any test activity and may affect any other software test cycle activity or task. Examples of test control actions are Re-prioritize tests when an identified risk occurs Change the test schedule due to unavailability of test environment. Set an entry criterion requiring fixes to have been retested by developer before accepting them in the build.
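A small sketch of how progress metrics can be derived from a traceability matrix like the one above. The matrix rows mirror the example; the Pass/Fail statuses are hypothetical mid-cycle values:

```python
# Summarize test progress from a (tiny) traceability matrix.
matrix = [
    {"req": "R001", "tc": "TC_001", "status": "Pass"},
    {"req": "R001", "tc": "TC_002", "status": "Fail"},
    {"req": "R001", "tc": "TC_003", "status": "Not Executed"},
]

executed = [row for row in matrix if row["status"] in ("Pass", "Fail")]
execution_progress = len(executed) / len(matrix) * 100
pass_rate = sum(row["status"] == "Pass" for row in executed) / len(executed) * 100

print(f"Test case execution: {execution_progress:.0f}% complete")  # 67%
print(f"Pass rate of executed cases: {pass_rate:.0f}%")            # 50%
```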

Configuration Management means a disciplined approach to the management of software and the associated design, development, testing, operations and maintenance items.
Change management and configuration management are not quite the same: change management manages the changes made to items, while configuration management manages all the items and the status of all the items of the system as a whole.

Symptoms of poor configuration management:
- Unable to match the program source code and object code
- Unable to identify the source code changes for a particular version of the software
- More than one person changing the same source code at the same time for different reasons
- Source documentation changes made without developers being aware
- Source documentation changes made without saving the previous version
- Unable to determine what testing was performed for a release of the software
Poor configuration management can lead to error-prone code, missed deadlines, applications that do not reflect (changed) requirements, and applications that are difficult to maintain.

Configuration identification is the process of identifying all the items within a project that are going to be subject to version control.
Configuration control is the maintenance of configuration items over time.
Status accounting is recording and reporting on the status of configuration items; it includes initial approved versions, the status of requested changes and the implementation status of approved changes, and it gives the ability to see the current state of a configuration item.
Configuration auditing is the verification that the configuration management process is being adhered to, and that configuration items reflect their defined physical and functional characteristics.

What are the configurable items?
All types of plans (e.g. project plan, test plan, quality plan), software code, test scripts and test case documents, the defect log, test reports, and user documentation.
Baselines: the current state of the product is frozen, and further changes to the product are accompanied by a change in the version of the product. Defects are logged only against baselined items.

Tools
The most popular tools used for configuration management include:
Visual SourceSafe (VSS)
Concurrent Versions System (CVS)

ClearCase

Risk
1. A potential loss to the customer or the organization is called a risk.
2. Vulnerabilities and hazards are not dangerous taken separately; but if they come together, they become a risk, in other words the probability that a disaster will happen.

Project Risk
Some typical risks to a project:
- Logistics or product quality problems that block tests: these can be mitigated through careful planning, good defect triage and management, and robust test design.
- Test items that won't install in the test environment: these can be mitigated through smoke testing prior to starting test phases, or as part of continuous integration.
- Excessive change to the product that invalidates test results or requires updates to test cases, expected results and environments: these can be mitigated through a good change control process, robust test design and lightweight test documentation.
- An insufficient or unrealistic test environment that yields misleading results: mitigation, and sometimes complete alleviation, can be achieved by outsourcing tests such as performance tests.
Organizational financial problems can be handled by preparing a briefing document for senior management. Alert the customer to potential difficulties when the risk is recruitment problems. On encountering staff illness, reorganize the team. In the case of defective components, replace the potentially defective components with reliable ones. In the case of requirements changes, derive the traceability information. When there is an organizational restructuring, prepare a briefing document for senior management. For database performance risk, investigate the possibility of buying a higher-performance database. For underestimated development time, investigate buying in components or the use of a program generator.

Product Risk
Product risk is the possibility that the system or software might fail to satisfy some reasonable customer, user or stakeholder expectation. These are risks to the quality of the product, such as:
- Error-prone software delivered.
- The potential that the software/hardware could cause harm to an individual or company.
- Poor software characteristics (e.g. functionality, security, reliability, usability and performance).
- Software that does not perform its intended functions.

Risk Mitigation: a good mitigation plan reduces, and ideally eliminates, the likelihood of a risk occurring.
Contingency Planning: a contingency plan is essentially a backup plan.

Risk-Based Testing
Risk-based testing is used to organize testing efforts in a way that reduces the residual level of product risk.

Involves measuring how well we are doing at finding and removing defects in critical areas. Involves risk analysis to identify proactive opportunities to remove or prevent defect through nontesting activities. Conventional Approach to Risk-Based Testing Identify and assess the risk per feature/function Whats most likely to fail? What will trigger that failure? Whats the damage? Test the highest-risk things the most What things are done most frequently? What things are most likely to fail? What things will have the biggest impact if they do fail? Pro-active Approach to Risk-Based Testing Testing is our primary means of reducing risk Design and prioritize tests based on What can go wrong? What must go right? Test earlier in the development cycle
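The prioritization described under the conventional approach can be expressed as a simple scoring exercise: estimate likelihood and impact per feature, multiply them, and spend the most test effort on the highest scores. The sketch below is purely illustrative; the feature names and ratings are invented.

    # Illustrative risk-based prioritization: score = likelihood x impact (1-5 scales).
    features = [
        {"name": "payment processing", "likelihood": 4, "impact": 5},
        {"name": "report export",      "likelihood": 2, "impact": 2},
        {"name": "user login",         "likelihood": 3, "impact": 5},
    ]

    for f in features:
        f["risk_score"] = f["likelihood"] * f["impact"]

    # Test the highest-risk features first and most thoroughly.
    for f in sorted(features, key=lambda f: f["risk_score"], reverse=True):
        print(f"{f['name']}: risk score {f['risk_score']}")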

Testing is a risk-based activity. All test planning considers reducing the risks to an acceptable level.
There are three methods of measuring the magnitude of a risk:
1. Intuition / Judgment: one or more individuals state that they believe the risk is greater than the cost of controls.
2. Consensus: a team or group of people agrees on the severity or magnitude of a risk.
3. Risk Formula: Risk = Probability (or Frequency) * Loss per occurrence
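As a purely illustrative application of the risk formula (all figures here are invented): if a failure is estimated to occur in 2% of transactions (probability 0.02) and each occurrence costs roughly $1,000 in rework and support, the risk exposure is 0.02 * 1,000 = $20 per transaction. Over, say, 10,000 transactions a month, that is $200,000 of monthly exposure - a calculation like this helps justify the cost of additional testing or controls.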

For any risk, product or project, we have four typical options:
- Mitigation: take steps in advance to reduce the likelihood of the risk.
- Contingency: have a plan in place to reduce the impact should the risk become an outcome.
- Transfer: convince some other member of the team or project stakeholder to reduce the likelihood or to accept the impact of the risk.
- Ignore: do nothing about the risk; appropriate only when both likelihood and impact are low.

The Risk Management Process consists of the following steps:
1. Risk identification: identify project, product and business risks.
2. Risk analysis: assess the likelihood and consequences of these risks.
3. Risk planning: draw up plans to avoid or minimize the effects of the risks.
4. Risk monitoring: monitor the risks throughout the project.

What is an incident?
An incident is an unplanned event occurring during testing that has a bearing on the success of the test. It may be something that's outside the control of the testers, like a machine crash, a loss of the network, or a lack of test resources. Incident management is about logging and controlling those events. Incidents are formally logged during system and acceptance testing, when independent teams of testers are involved. An incident could also be something wrong with the test itself: the test script may be incorrect, or the expected results may have been predicted incorrectly.

Incident Logging
The tester should stop and complete an incident log. What goes into an incident log? The tester should describe exactly what is wrong. What did they see? They should record the test script they are following and, potentially, the test step at which the software failed to meet an expected result. If appropriate, they should attach any output - screen dumps, printouts, any information that might be useful to a developer in reproducing the problem. A failed test may have no bearing on the successful completion of any other test - it is completely independent. However, some tests are designed to create test data for later tests.

Incident Report Objectives
- Provide feedback about the problem to enable identification, isolation and correction as necessary.
- Track the quality of the system under test.
- Track the progress of the testing process.
- Provide ideas for test process improvement.

Incident Report (IEEE 829 standard test incident report template):
- Test incident report identifier
- Summary
- Incident description
- Inputs
- Expected result
- Actual result
- Anomalies
- Date and time
- Procedure step
- Environment
- Attempts to repeat
- Testers and observers
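As a rough illustration of how these IEEE 829 fields might be captured by a simple logging utility (the helper function and field values are invented for the example; real incident management tools will differ), consider:

    from datetime import datetime

    def log_incident(summary, description, inputs, expected, actual,
                     procedure_step, environment, anomalies="", attempts_to_repeat=1,
                     testers_and_observers=None):
        """Return a dictionary shaped roughly like an IEEE 829 test incident report."""
        return {
            "identifier": f"INC-{datetime.now():%Y%m%d%H%M%S}",
            "summary": summary,
            "incident description": description,
            "inputs": inputs,
            "expected result": expected,
            "actual result": actual,
            "anomalies": anomalies,
            "date and time": datetime.now().isoformat(timespec="seconds"),
            "procedure step": procedure_step,
            "environment": environment,
            "attempts to repeat": attempts_to_repeat,
            "testers and observers": testers_and_observers or [],
        }

    incident = log_incident(
        summary="Withdrawal dispenses wrong amount",
        description="Cash-point dispensed 40 when 50 was requested",
        inputs="Withdraw 50 from test account 1001",
        expected="50 dispensed, balance reduced by 50",
        actual="40 dispensed, balance reduced by 50",
        procedure_step="TC-ATM-007, step 4",
        environment="System test environment, build 2.3.1",
        testers_and_observers=["tester A"],
    )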

Module 4 Tool Support for Testing

After completing this module you will be able to discuss:
- Types of Test Tools
- Effective use of Tools
- Introducing a tool into an Organization

Types of Test Tools - Test Tool Classification
- Some tools perform a very specific and limited function.
- Tool support is useful for repetitive tasks.
- A tool that measures timing for non-functional (performance) testing needs to interact very closely with the software in order to measure it.
- In order to measure coverage, a tool must first identify all of the structural elements that might be exercised, to see whether a test exercises each of them or not.
- A debugging tool is used to try to find a particular defect.

Tool Support for Management of Testing and Tests
The management of testing applies over the whole of the software development life cycle. A test management tool may also manage the tests, which would begin early in the project. Test management tools are typically used at system or acceptance test level.
Test management tool features:
- Management of tests
- Scheduling of tests to be executed
- Management of testing activities
- Interfaces to other tools
- Traceability of tests, test results and defects to requirements or other sources
- Logging test results
- Preparing progress reports based on metrics (tests run and tests passed, incidents raised, defects fixed, outstanding defects)
These tools help to gather, organize and communicate information about the testing on a project.

Requirement management tools
Tests are based on requirements: the better the quality of the requirements, the easier it will be to write tests from them. It is also important to be able to trace tests to requirements and requirements to tests.
Requirement management tool features:
1. Storing requirement statements
2. Checking consistency of requirements
3. Traceability of requirements to tests and tests to requirements, functions or features

Incident management tools
Incidents may also be perceived problems, anomalies, or enhancement requests. Incident reports go through a number of stages, from initial identification and recording of the details, through analysis, classification, assignment for fixing, fixed, re-tested, and closed.

Incident management tool features:
- Storing information about the attributes of incidents (e.g. severity).
- Storing attachments (e.g. screen shots).
- Prioritizing incidents.
- Assigning actions to people (fix, confirmation test, etc.).
- Status (e.g. open, rejected, closed, etc.).
- Reporting of statistics/metrics about incidents (e.g. average time open, total number raised, open or closed).

Configuration management tools
Good configuration management is critical for controlled testing. Testware needs to be under configuration management; it also exists in different versions and is changed over time.
Configuration management tool features:
- Storing information about versions and builds of the software and testware.

Tool Support for Static Testing
Review process support tool features:
- Acting as a repository for rules, procedures and checklists to be used in reviews.
- Providing a common reference for the review process, or for the processes to use in different situations.

Static analysis tools (D)
Static analysis tools are an extension of compiler technology. Static analysis can also be carried out on things other than software code, and can be used to enforce coding standards.
Static analysis tool features (D):
- Calculating metrics such as cyclomatic complexity.
- Enforcing coding standards.
- Analyzing structure and dependencies.
- Aiding code understanding.
- Identifying anomalies or defects in the code.

Modeling tools (D)
Modeling tools help to validate models of the system or software. They can be used before dynamic tests can be run. Model-based testing tools generate test inputs or test cases from stored information about a particular model.
Modeling tool features (D):
- Helping to identify and prioritize areas of the model for testing.
- Predicting system response and behavior under various situations, such as levels of load.
- Helping to understand system functions and identify test conditions, using a modeling language such as UML.

Tool Support for Test Specification
Test design tools
Test design tools help to construct test cases, or at least test inputs. If an automated oracle is available, then the tool can also construct the expected result. A test design tool may be bundled with a coverage tool.

Test data preparation tools
Test data preparation tools enable data to be selected from an existing database or created for use in tests.
Test data preparation tool features:
- Enabling records to be sorted or arranged in a different order.
- Generating new records populated with pseudo-random data, or data set up according to some guidelines.
- Constructing a large number of similar records from a template, to give a large set of records for volume tests.

Tool Support for Test Execution and Logging
Test execution tools
These tools are also referred to as test running tools. Test execution tools use a scripting language to drive the tool.
Test execution tool features:
- Executing tests from stored scripts, and optionally from data files accessed by the script.
- Dynamic comparison of screens, elements, links, controls, objects and values.
- Ability to initiate post-execution comparison.
- Logging results of test runs.
- Masking or filtering of subsets of actual and expected results.
- Measuring timing for tests.
- Synchronizing inputs with the application under test.
- Sending summary results to a test management tool.

Test harness / unit test framework tools (D)
A test harness provides stubs and drivers, which are small programs that interact with the software under test. Some unit test framework tools provide support for object-oriented software. (A minimal unit-test-style sketch appears at the end of this section.)
Test harness / unit test framework tool features (D):
- Executing a set of tests within the framework or using the test harness.
- Recording the pass/fail results of each test.
- Storing tests.
- Support for debugging.
- Coverage measurement at code level.

Test comparators
Dynamic comparison is comparison done while the test is executing. Post-execution comparison is performed after the test has finished executing and the software under test is no longer running.
Test comparator features:
- Post-execution comparison of stored data.
- Masking or filtering of subsets of actual and expected results.

Coverage measurement tool features (D):
- Calculating the percentage of coverage items exercised by a suite of tests.
- Reporting coverage items that have not yet been exercised.
- Identifying test inputs to exercise as-yet uncovered items.
- Generating stubs and drivers (if part of a unit test framework).

Tool Support for Performance and Monitoring
Dynamic analysis tools (D)
These are analysis rather than testing tools.
Dynamic analysis tool features (D):
- Detecting memory leaks.
- Identifying pointer arithmetic errors, such as null pointers.
- Identifying time dependencies.
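The following is a minimal, illustrative sketch of the unit test framework idea mentioned above, written with Python's built-in unittest module. The function under test and its expected behavior are invented for the example; real unit test frameworks (JUnit, NUnit, pytest, etc.) add richer features such as fixtures, stubs and coverage integration.

    import unittest

    def calculate_discount(amount, rate):
        """Function under test (a stand-in for real production code)."""
        if not 0 <= rate <= 1:
            raise ValueError("rate must be between 0 and 1")
        return round(amount * (1 - rate), 2)

    class CalculateDiscountTests(unittest.TestCase):
        def test_applies_rate(self):
            self.assertEqual(calculate_discount(100.0, 0.2), 80.0)

        def test_zero_rate_returns_full_amount(self):
            self.assertEqual(calculate_discount(50.0, 0.0), 50.0)

        def test_invalid_rate_raises(self):
            with self.assertRaises(ValueError):
                calculate_discount(100.0, 1.5)

    if __name__ == "__main__":
        unittest.main()   # the framework records pass/fail for each test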

Performance-testing, Load-testing and Stress-testing Tools
Performance-testing tools are concerned with testing at system level to see whether or not the system will stand up to a high volume of usage. A load test checks that the system can cope with its expected number of transactions. A volume test checks that the system can cope with a large amount of data. A stress test is one that goes beyond the normal expected usage of the system.
Performance-testing, load-testing and stress-testing tool features:
- Generating a load on the system to be tested.
- Measuring the timing of specific transactions as the load on the system varies.
- Measuring average response times.
- Producing graphs or charts of response over time.

Monitoring tools
Monitoring tools are used to continuously keep track of the system in use.
Monitoring tool features:
- Identifying problems and sending an alert message to the administrator.
- Logging real-time and historical information.
- Finding optimal settings.
- Monitoring the number of users on a network.
- Monitoring network traffic.

Tool Support for Specific Application Areas
- There are web-based performance-testing tools.
- There are performance-testing tools for back-office systems.
- There are static analysis tools for specific development platforms and programming languages.
- There are dynamic analysis tools that focus on security issues.
- There are dynamic analysis tools for embedded systems.

Tool Support using Other Tools
- A word processor or spreadsheet can act as a testing tool.
- Tools are used by developers when debugging, to help localize defects and check their fixes.

Potential Benefits of using Tools
- Repetitive work is reduced.
- Greater consistency and repeatability.
- Objective assessment.
- Ease of access to information about tests or testing.

Risks of using Tools
Test execution tools
In order to know what tests to execute and how to run them, a test execution tool must have some way of knowing what to do - this is the script for the tool. The scripting language may be specific to a particular tool, or it may be a more general language. There are different levels of scripting:
- Linear scripts
- Structured scripts
- Shared scripts
- Data-driven scripts (illustrated in the sketch below)
- Keyword-driven scripts
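As an illustration of the data-driven scripting level listed above, the sketch below separates the test data from the test logic, so the same script can be re-run over many input rows. The data file name, its columns and the function being driven are all invented for the example; commercial test execution tools implement the same idea with their own scripting languages.

    import csv

    def login(username, password):
        """Stand-in for driving the application under test (e.g. via a GUI or an API)."""
        return username == "alice" and password == "secret"

    # logins.csv is a hypothetical data file with columns: username,password,expected
    with open("logins.csv", newline="") as data_file:
        for row in csv.DictReader(data_file):
            expected = row["expected"] == "pass"
            actual = login(row["username"], row["password"])
            result = "PASS" if actual == expected else "FAIL"
            print(f"{row['username']}: {result}")   # log the result of each data-driven iteration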

Special Considerations for some Types of Tools
Performance testing tools
When using a performance testing tool we are looking at:
- The transaction throughput.
- The degree of accuracy of a given computation.
- The computer resources being used for a given level of transactions.
- The time taken for certain transactions.
- The number of users that can use the system at once.
Performance testing tool issues:
- The design of the load to be generated by the tool.
- Timing aspects.
- The length of the test, and what to do if a test stops prematurely.
- Narrowing down the location of a bottleneck.
- Exactly what aspects to measure.
- How to present the information gathered.

Static analysis tools
Static analysis tools help to check that the code is written to coding standards. They can generate a large number of messages; a filter on the output of a static analysis tool can make the more important messages more likely to be noticed and fixed.

Test management tools
It is important to define the test process before a test management tool is introduced. The reports need to be designed and monitored so that they provide benefit. The tool should help to build on the strengths of the organization and address its weaknesses.

The following factors are important in selecting a tool:
- Assessment of the organization's maturity (e.g. readiness for change).
- Identification of the areas within the organization where tool support will help to improve testing processes.
- Evaluation of tools against clear requirements and objective criteria.
- Proof-of-concept, to see whether the product works as desired and meets the requirements and objectives defined for it.
- Evaluation of the vendor or of the open-source network of support.
- Identifying and planning internal implementation, including coaching and mentoring for those new to the use of the tool.

Pilot Project
Objectives should be set for the pilot in order to assess whether or not the concept is proven.
Objectives for a pilot project:
- To learn more about the tool (more details, more depth).
- To decide on standard ways of using the tool that will work for all potential users (e.g. naming conventions, creation of libraries, defining modularity).
- To evaluate the pilot project against its objectives (have the benefits been achieved at reasonable cost?).

Success Factors
- Adapting and improving processes, testware and tool artifacts to get the best fit and balance between them and the use of the tool.
- Defining and communicating guidelines for the use of the tool, based on what was learned in the pilot.
- Monitoring tool use and benefits.

Module 5 Test Automation and Execution

After completing this module you will be able to discuss:
- Test Automation Prerequisites
- Test Execution Activities

Test Automation is the act of converting test cases to machine-executable code using a test tool. It saves time, effort and money, reduces boredom and makes test execution easier. It is the only long-term solution for reduced costs in software testing and better quality products.

Questions to be answered before automating:
- Is it necessary to repeat a sequence of actions many times?
- Do you need to run the same tests on multiple hardware configurations? (See the sketch below.)
- Do you need to test with several concurrent users?
- Do you have to exercise multiple components, options and configurations?
- Does this save time and money?
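One of the questions above - running the same tests on multiple configurations - is exactly where automation pays off. The sketch below is illustrative only: the configuration list and the check performed are invented, and a real automation tool would drive actual environments rather than a stub.

    # Illustrative: repeat one automated check across several (hypothetical) configurations.
    configurations = [
        {"os": "Windows 11", "browser": "Chrome"},
        {"os": "Windows 11", "browser": "Edge"},
        {"os": "Ubuntu 22.04", "browser": "Firefox"},
    ]

    def run_login_test(config):
        """Stand-in for an automated test case executed against one configuration."""
        # A real script would launch the application in this environment and drive it.
        return True   # pretend the test passed

    for config in configurations:
        outcome = "PASS" if run_login_test(config) else "FAIL"
        print(f"{config['os']} / {config['browser']}: {outcome}")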

Pre-requisites for Test Automation:
- Detailed test cases, including predictable expected results, which have been developed from the business functional specification and design documentation.
- A standalone test environment, including a test database that is restorable to a known constant, so that the test cases can be repeated each time modifications are made to the application.
- At least one trained, technical person - in other words, a programmer - to make use of any automated test tool.

Test Execution refers to the process of executing test cases, either manually or through automated scripts. Some of the activities are:
1. Preparation of the Test Bed
2. Execution of Test Cases
3. Collection of Metrics
4. Defect Management

Test Execution Activities
1. Preparation of the Test Bed
It is extremely important to have a separate environment for testing. A test environment is also known as a test bed. This includes all the hardware and software. Test bed preparation also involves loading any test data required for testing. The test environment must accurately replicate the production environment on a smaller scale. One must be able to restore the test environment's database to a known baseline; if this is not done, tests performed against the database cannot be repeated.
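A very simple way to keep a test database restorable to a known baseline is to keep a pristine "golden" copy and restore it before every run. The sketch below assumes a file-based database purely for illustration; the file names are invented, and in practice this is usually done with the database's own backup/restore facilities.

    import shutil
    from pathlib import Path

    GOLDEN_DB = Path("baseline/test_data.db")   # hypothetical known-good baseline copy
    WORKING_DB = Path("env/test_data.db")       # database the tests actually run against

    def restore_test_database():
        """Overwrite the working test database with the baseline before a test run."""
        WORKING_DB.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(GOLDEN_DB, WORKING_DB)

    restore_test_database()   # call this before every execution cycle so tests are repeatable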

If automation scripts are being used, then separate PCs must be allocated for them.

2. Execution of Test Cases
- Done either manually or through automation.
- Test cases have to be executed in a particular order, depending on various factors such as test case priority and test case interdependency.

3. Collection of Metrics
Some typical metrics are:
a) Defect density: Defect Density = Number of Known Defects / Size
b) Residual defect density
c) Code coverage
d) Number of test cases executed per day
e) Number of test cases that passed
f) Number of test cases that failed

Notes
2. Test case execution can be done manually or automated. Test case priority and test case interdependency need to be considered while executing test cases, as illustrated in the ordering sketch below.
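To illustrate ordering by priority while respecting interdependencies, the small sketch below sorts test cases by priority but always schedules a prerequisite test before the tests that depend on it. The test case names, priorities and dependencies are invented.

    # Illustrative scheduling: lower priority number = run earlier, but prerequisites always come first.
    test_cases = {
        "TC-01 create account": {"priority": 2, "depends_on": []},
        "TC-02 login":          {"priority": 1, "depends_on": ["TC-01 create account"]},
        "TC-03 transfer funds": {"priority": 1, "depends_on": ["TC-02 login"]},
    }

    def schedule(cases):
        ordered, done = [], set()
        remaining = dict(cases)
        while remaining:
            # pick the highest-priority test whose prerequisites have all been run
            ready = [n for n, c in remaining.items() if all(d in done for d in c["depends_on"])]
            next_case = min(ready, key=lambda n: remaining[n]["priority"])
            ordered.append(next_case)
            done.add(next_case)
            del remaining[next_case]
        return ordered

    print(schedule(test_cases))
    # -> ['TC-01 create account', 'TC-02 login', 'TC-03 transfer funds']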

3. Metrics are the various measurements used in order to assess the performance of any activity.
a) Defect density is a measure of the total known defects divided by the size of the software entity being measured:
Defect Density = Number of Known Defects / Size
The Number of Known Defects is the count of total defects identified against a particular software entity during a particular time period. Examples include:
- defects to date since the creation of the module
- defects found in a program during an inspection
- defects to date since the shipment of a release to the customer
b) Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white-box testing.

4. Defect Management
Defect management defines the entire process of raising defects and tracking them.

Defect Logging
Sometimes the client provides a defect management tool, or a simple Excel spreadsheet with relevant columns acts as the tool. The tool helps to log the details of each defect. Some of the typical fields used to describe a defect are:
- Defect ID: in the case of a tool, this is auto-generated.
- Defect Severity: the usual severity classifications are Fatal/Critical, Major or Minor. Numbers are assigned to each severity; for example, Fatal is assigned 1, so a Severity 1 defect would mean Fatal, which has to be fixed immediately. The classification may vary from project to project, but the concept remains the same: the higher the impact, the higher the severity.
- Date: to log when the defect was raised and when it was closed.
- Description: a detailed description of the defect, e.g. the scenario that was being tested, the configurations that were used etc.
- Logged by: the name of the person who logged the defect.
- Status: the current status of the defect, e.g.
  1. Open
  2. Fixed
  3. Tested
  4. Closed
  5. Tested and Reopened
  6. Assigned
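To tie the defect density metric from point 3(a) to the defect log described here, the sketch below stores a few invented defect records and computes defect density per thousand lines of code (KLOC is one common, but not the only, choice of "size").

    # Invented defect log entries; severity 1 = Fatal, 2 = Major, 3 = Minor.
    defect_log = [
        {"id": "DEF-001", "severity": 1, "status": "Open",   "logged_by": "tester A"},
        {"id": "DEF-002", "severity": 3, "status": "Closed", "logged_by": "tester B"},
        {"id": "DEF-003", "severity": 2, "status": "Fixed",  "logged_by": "tester A"},
    ]

    module_size_kloc = 12.5   # hypothetical size of the software entity, in thousands of lines of code

    defect_density = len(defect_log) / module_size_kloc
    print(f"Defect density: {defect_density:.2f} defects per KLOC")   # 3 / 12.5 = 0.24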
