
Test Effort Estimation Using System Test Points: A New Metric for Practitioners

K. Mahesh Kumar, Member of IEEE Computer Society, Tata Consultancy Services, India. Email: mahesh_kuruba@mumbai.tcs.co.in
Prof. A. K. Verma, Senior Member of IEEE, Fellow (Life) IETE, Reliability Engineering group, Indian Institute of Technology, Powai, India. Email: akv@ee.iitb.ac.in
Dr. Gargi Keeni, Member of IEEE, Tata Consultancy Services, India. Email: gkeeni@mumbai.tcs.co.in
Prof. A. Sri Vidya, Reliability Engineering group, Indian Institute of Technology, Powai, India. Email: asvidya@ee.iitb.ac.in

Abstract: This paper proposes System Test Point (STP), a new metric for estimating system test effort. The metric encompasses 12 identified attributes that affect system test effort. Expert ratings for the 12 attributes were obtained through a survey, and the correlation between each attribute and system test effort was calculated from these ratings using the RISK 4.0 software package; the resulting correlation coefficients serve as attribute weights. STP was applied to two projects to estimate their test effort, and the results were observed to be positive. System Test Point is a useful metric for test managers that aids in more precise estimation of system test effort. This paper addresses the interests of the metrics group, software managers and test managers of a software organization who need to estimate system test effort. The proposed framework allows an organization to calculate its own weights for use in computing STP.

Keywords: Test metrics, System test points, System testing, System test effort, Test management

1.0. Introduction
Software metrics provide quantitative information that helps managers make day-to-day decisions. Efficient and reliable metrics are the foundation stones of project management, and projects can be managed more smoothly by deploying such metrics. This paper proposes a new metric, together with a framework, for system test effort estimation.

Estimation plays a vital role in all phases of software development. One of the major challenges for project managers is effort estimation, which is key to the success of a project. An accurate and feasible estimate makes the schedule realistic and the software deliverable on time, thus enhancing customer satisfaction. State-of-the-art effort estimation metrics deal with the project as a whole. The proposed system test effort estimation metric, System Test Point (STP), is based on the observation that the accuracy of analysis increases when the system components are dealt with separately. Moreover, since testing plays a significant role in the software engineering community as a software reliability improvement activity and is increasingly treated as a separate branch of software engineering, there is a need to estimate system test effort separately. Many formulae proposed for software size estimation [14] have pre-defined values, and the values specified in these formulae vary from project to project and organization to organization. Hence, this paper proposes a framework that organizations can adopt to estimate system test effort. The proposed metric is a generalized metric for system testing, which can be tailored to the operational or architectural processes of the organization or project.

As shown in Figure 1, system testing is the activity of validating software against the stated requirements. The system test environment should be similar to the operational environment, so that system defects pertaining to interfaces with other systems and environments are uncovered before the software is deployed. System testing is therefore an important phase in the software development process, and it is essential for test managers to estimate the effort it requires.

Stark et al. [2] evaluated the applicability of five identified software metrics to aid in monitoring the test process. The five metrics are:
- Software reliability
- Software size
- Test session efficiency
- Test focus
- Software maturity

Menzies and Cukic [3] argue that elaborate and expensive testing regimes will not yield much more than inexpensive manual or simple automatic testing schemes. Ravichandran and Shareef [6] suggested a test effort metric defined as (testing effort * 100) / total project effort; this metric helps, after project implementation, to determine the actual system test effort. Pol and van Veenendaal [13] have developed and successfully applied Test Points to estimate test effort for black-box testing.

This paper proposes a framework for estimating system test effort using a new metric called System Test Points. An organization deploying this metric first has to conduct a survey to calculate the weights of the identified attributes, which are then used in estimating system test points. The estimated STP should be compared against the actual STP to improve estimation accuracy for subsequent projects. The proposed metric is suitable for projects that have a system testing team headed by a test manager or project manager. In scenarios where the system testing team is an independent organization, it is assumed that the necessary information and data for estimating STP are provided to the test manager.

This paper starts with a brief introduction of the attributes identified for the proposed metric, which affect system test effort. The survey conducted to identify the relationship between the 12 identified attributes and system test effort is then explained. Simulation has been performed using the RISK software to calculate the correlation coefficients of the 12 identified attributes. The proposed methodology for estimating STP is explained, together with the methodology of evaluating STP by applying it to two projects. The two projects considered for estimating and evaluating STP have independent testing teams headed by a test manager and were operating at the same level of process maturity.

2.0. Attributes Affecting System Test Effort


The ideal system test process considered for conceptualizing the System Test Point (STP) metric is shown in Figure 2, which depicts the various activities to be performed for system testing. The activities of system testers start from the requirements phase. It is a best practice to review the requirements document, which helps in addressing non-testable requirements early in the software development cycle. The preparation and review of test artifacts can be performed once the requirements are baselined. The preparation of test cases and their peer review can be performed during the design phase; some test teams review the design document as well. The test cases prepared are executed during the system testing phase. The phases, activities and artifacts shown in Figure 2 might vary from project to project and organization to organization; the process group or software manager needs to identify the relevant activities to be performed in their project, depending upon the project commitments.

Many factors affect test effort, and the accuracy of the estimate increases when all of them are considered. Broadly, system test effort depends upon the process of testing, the type of testing, and the test tools used in the organization. Cognitive factors and skills are not considered for calculating STP. The 12 attributes identified for the proposed metric that affect system test effort are as follows:

1. Observed requirements stability
2. Observed software stability
3. Test process tools used
4. Testing tools used
5. Type of testing (e.g., black box, white box) to be performed
6. Test artifacts required
7. System knowledge of the tester
8. Complexity of the system
9. Availability of reusable test cases and test artifacts
10. Environment (UNIX, Windows, Web based, Client-Server)
11. Application type (Real time systems, Database applications)
12. Required software reliability

The descriptions of the identified attributes are as follows:

2.1. Observed requirements stability
Requirements stability indicates the number of changes to the requirements and the amount of information needed to complete the requirements definition. Ravichandran and Shareef [6] define the requirements stability index as the number of requirements changed, added or deleted relative to the total number of requirements. Tools are available to measure the volatility of requirements: the Software Assurance Technology Center (SATC) developed the Automated Requirements Measurement (ARM) tool [1] for NASA managers to assess and measure the traceability and volatility of requirements. The stability of the requirements reflects the confidence level in the requirements. Unstable requirements increase the risk of not completing the software as per schedule, and any change in the requirements has a ripple effect on the design, code and test phases of software development. The stability of the requirements therefore affects the test plan and the test effort. The risk of testing against wrong requirements cannot be ruled out when requirements are volatile, so it is essential to have stable requirements. The requirements analyst and the customer/client are the appropriate personnel to judge the stability of requirements. Testing highly stable requirements requires less effort than testing less stable requirements; thus highly stable requirements have to be rated low (i.e. 1) and the least stable requirements have to be rated high (i.e. 10). The number, type and severity of defects in the requirements document are indicators of the stability of the requirements. Since system testing is a validation technique that aims at validating the software against the requirements, any discrepancy between software and requirements is a defect irrespective of its root cause, and fixing a defect either in the software or in the requirements is a change to the configured item (software or artifacts).

2.2. Observed software stability
Stable software requires less test effort than unstable software: as the number of defects in the software increases, the effort required to test it increases. Some of the factors that affect the injection of defects into software are:
- Maturity of the process
- Skills and skill level of the designer/developer
If the software is being developed for the first time, its stability can be assessed based upon the above factors. If the software is in operation and is undergoing changes, then the stability of the existing system has to be considered. Since these factors affect system test effort, this attribute has to be rated high for highly defective software and low for less defective software.

2.3. Test process tools used
Test process tools include test management tools. Adoption of a test management tool decreases the communication required and enhances the visibility of test progress, simplifying the job of the software manager. Usage of test process tools affects the productivity of the test team: effective usage of appropriate test process tools decreases effort and thus improves productivity. Hence, the testing effort depends upon the usage of test process tools. The software manager identifies a test process tool that meets the project requirements. Every test process tool follows a specific process, which needs to be mapped to the test process of the organization [12]. This attribute has to be rated high if the complexity of the tool is high and/or the organizational process needs to be adjusted to the process prescribed by the tool, and rated low if the complexity of the tool is low and fewer process adjustments are required.

2.4. Testing tools used
Testing tools include automated testing tools. Use of testing tools for automated testing decreases the effort of the test team. The percentage of testing that can be performed by the automated test tool should be considered when estimating test effort. If the team is not using any test tool, the rating for this attribute should be high; if all the testing has been automated, it should be rated low. There may be cases where automated testing increases effort for a particular version of the software, in which case the appropriate rating has to be given. For instance, an automated test tool used for regression testing may not be worthwhile for software that is released only once. In such cases, the software manager has to rate the attribute high, even though the automated testing tool reduces effort for future releases, because it does not do so for that specific release.
2.5. Type of testing (e.g., black box, white box) to be performed
Test effort depends upon the type of testing, such as white box or black box testing, and on whether positive or negative testing is performed. In practice, a combination of black box and white box testing is performed, depending upon the scenario, to improve the quality of the software delivered. Menzies and Cukic [3] give the probability of finding a fault with a black-box testing approach as

Y = 1 - (1 - x)^N

where x is the probability that a random input will cause the failure and N is the number of random black-box probes. As the objective of system testing is to find defects, test cases for negative testing consume more effort than those for positive testing. Black box testing needs more data generation and more test executions, but the analysis required to prepare the test cases is relatively small. There is typically one path that satisfies a requirement and several paths that can make the software fail; exercising the latter is called negative testing. Negative testing needs more test data and more test executions than positive testing. If the test team performs negative testing, the effort will be higher, so the attribute has to be rated high; if the test team performs only positive testing, the attribute has to be rated low.
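As a numerical illustration of this formula (a sketch of the arithmetic only; the per-probe failure probability x = 0.01 and the 99% detection target are illustrative values, not taken from [3]):

```python
import math

def detection_probability(x: float, n: int) -> float:
    """Probability that at least one of n random black-box probes
    triggers a failure that occurs with probability x per probe."""
    return 1.0 - (1.0 - x) ** n

def probes_needed(x: float, target: float) -> int:
    """Smallest number of random probes giving detection probability >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - x))

# Illustrative values: a failure triggered by 1% of random inputs
print(detection_probability(x=0.01, n=100))   # ~0.63 after 100 probes
print(probes_needed(x=0.01, target=0.99))     # 459 probes for 99% confidence
```

The point the sketch makes is the same as in the text: driving the detection probability close to 1 requires many more executions, which is why negative and black-box testing consume more effort.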

2.6. Test artifacts required
Test artifacts are the products of the testing process. They are essential for reusing test cases, tracing requirements to test cases and analyzing the test process. As producing test artifacts consumes effort, the manager has to identify the required artifacts beforehand. If the software under test is a one-time development effort, the need for preparing and maintaining the various test artifacts should be evaluated; likewise, if the requirements are very volatile, the baselining of test artifacts needs to be considered. The artifacts that are typically output of the testing process are:
- Test strategy
- Testability assessment
- Test automation assessment
- Test plan
- Test cases and results
The greater the number of artifacts, the greater the effort and the higher the rating; the smaller the number of artifacts, the lower the effort and the lower the rating.

2.7. System knowledge of the tester
The system knowledge of the tester strongly affects the preparation of test cases. If the tester has good knowledge of the system as well as good testing skills, this attribute should be rated low; if the tester has little knowledge of the system, it should be rated high. Skill levels are classified into six categories:
- Level 0: does not know anything
- Level 1: knows something
- Level 2: can do something
- Level 3: can do it well
- Level 4: can teach others
- Level 5: can do research in that area
Knowledge levels are classified in a similar manner. In practice, the test team is a mixture of experience levels, so the following formula can be used to calculate the system knowledge factor of the testing team (a sketch of which is given below):

System knowledge factor = Σi (SLi * NSi + KLi * NKi) / n

where SLi is the skill level of tester i, NSi is the number of skills tester i possesses, KLi is the knowledge level of tester i, NKi is the number of knowledge areas tester i possesses, and n is the number of testers. The higher the system knowledge factor of the test team, the lower the rating for this attribute, as skilled testers consume less effort to prepare test cases; the lower the system knowledge factor, the higher the rating, as unskilled testers consume more effort.
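The following minimal sketch illustrates the system knowledge factor, assuming the sum is taken over the testers and divided by the team size, as the variable definitions above suggest. The Tester fields and the example team are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Tester:
    skill_level: int          # SL, on the 0-5 scale described above
    num_skills: int           # NS, number of skills the tester possesses
    knowledge_level: int      # KL, assumed to use an analogous 0-5 scale
    num_knowledge_areas: int  # NK, number of knowledge areas the tester possesses

def system_knowledge_factor(team: list[Tester]) -> float:
    """Average of (SL*NS + KL*NK) over the testers in the team."""
    total = sum(t.skill_level * t.num_skills +
                t.knowledge_level * t.num_knowledge_areas for t in team)
    return total / len(team)

# Hypothetical team of three testers
team = [Tester(3, 2, 2, 3), Tester(4, 3, 3, 2), Tester(1, 1, 1, 1)]
print(system_knowledge_factor(team))  # higher factor -> lower rating for this attribute
```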


2.8. Complexity of the system
The complexity of system testing does not depend upon the complexity of the solution; rather, it depends upon the complexity of the problem. Cardoso, Crespo and Kokol [11] distinguish between problem complexity and solution complexity. In order to estimate the test effort, managers need to consider the complexity of the problem, i.e. the complexity of the requirements. This attribute has to be rated high for complex requirements and low for uncomplicated requirements. The complexity of the system can be judged from the requirements document.

2.9. Availability of reusable test cases and test artifacts
Reusability has received much attention with the widespread use of object-oriented technology. The term software reuse refers to the use of an earlier developed software product in a new software product development. This reuse can occur for various products, which could be [10]:
- Any physical component of a computer program (code)
- Any other tangible product of a software development process, e.g., tools, documentation, requirements, specifications
- Any knowledge gained during a previous software development project
Barnard [4] proposed a reusability metric for object-oriented code, and Gokhale [9] identified quite a few available reuse metrics, which consider code reusability. From a testing perspective, however, software managers are concerned about the reusability of test artifacts and test cases. Paradiso and Scaraggi [5] show with their pilot project that the testing process improves when reusable test cases are used. Since automated testing is primarily meant for regression testing, most automated test cases are reused. The test cases prepared for one release are used for future releases, so the effort required for future releases decreases; the review effort also decreases for reused artifacts and test cases. Reusability is the percentage of test cases that can be reused: if all the test artifacts can be reused, the attribute has to be rated low; if all the test artifacts need to be prepared from scratch, it has to be rated high.

2.10. Environment (UNIX, Windows, Web based, Client-server)
The environment in which the software system is being tested affects the effort of the testers. Testing a software system in the UNIX environment consumes more effort than testing it in a user-friendly environment such as Windows. If the testing is performed in a web-based or client-server environment, performance issues also come into the picture. If the tester is testing in a user-friendly environment with good response times, this attribute has to be rated low; if the testing is carried out in a non-user-friendly environment with poor response times, it should be rated high.

2.11. Application type (Real time systems, Database applications)
If the application is mission-critical, the effort required for testing is much higher than for non-mission-critical applications. Mission-critical applications therefore have to be rated high and non-mission-critical applications have to be rated low.

2.12. Required software reliability
Software reliability is the probability of failure-free operation of a computer program for a specified time in a specified environment [3]. The reliability requirement depends upon the purpose of the software: the reliability requirement for an MIS application is not stringent, whereas the reliability requirement for a software product is high, and for mission-critical applications (flight control, missile control, etc.) reliability requirements are very high. This attribute is therefore closely related to the application type. The rating has to be given depending upon the software reliability requirement of the system: a system with high required reliability has to be rated high and a system with low required reliability has to be rated low.

3.0. Survey
The weights of the identified attributes can be calculated by either of the following approaches:
a. Top-down approach
b. Bottom-up approach

a. Top-down approach: In the top-down approach, the metrics/process group of the organization conducts a survey to obtain expert or practitioner ratings on the 12 identified attributes. The collected expert data is then simulated to calculate the correlation of the attributes with system test effort. The weights obtained are project-specific or organization-specific, and are updated if a variation between estimated and actual system test effort is consistently observed.

b. Bottom-up approach: In the bottom-up approach, the weights are calculated from historical project data. The metrics group of the organization collects data such as system test effort, total project effort, software size and the ratings for the 12 identified attributes, and calculates the correlation of the 12 identified attributes with system test effort.

In this paper, the top-down approach was followed for calculating the weights used in system test point estimation. A survey was conducted to determine the effect of the 12 identified attributes on system testing effort. Eleven respondents were identified based upon their experience in software management, specifically in test management and testing. The experience of the respondents varied from 1 to 15 years; the mean and mode of the respondents' experience was 5 years. Respondents were provided with a questionnaire containing the 12 attributes and a brief description of each attribute to enhance the respondents' understanding. The questionnaire is given in the Appendix.

A weight represents the influence of an attribute on system test effort. The weights were calculated from the respondents' ratings: the data collected from the respondents was simulated using the RISK 4.0 software, Monte Carlo simulation was performed to calculate the correlation coefficients, and the correlation coefficients of the simulated data were taken as the weights of the attributes. The simulation results are shown in Figure 3 and the correlation coefficients in Table 1. The results show that the identified attributes have a positive correlation with system test effort, with the magnitudes given in Table 1. Requirements stability and type of testing have the smallest effect on system test effort, while artifacts and reliability have the largest; other factors with a relatively high effect include reusability of test cases, application type and usage of test process tools. The weights in Table 1 might vary from organization to organization due to varied project settings, so an organization can either use its own values or calculate values from data collected across organizations. Will [8] suggests test performance benchmarking, where the test effort is measured, compared and adjusted.

I2   Requirements stability            0.16
I3   Software stability                0.267
I4   Testing process tools             0.308
I5   Testing tools                     0.265
I6   Type of testing                   0.181
I7   Artifacts                         0.364
I8   System knowledge of the tester    0.231
I9   Complexity                        0.205
I10  Reusable test cases               0.323
I11  Environment                       0.247
I12  Application                       0.317
I13  Reliability                       0.343

Table 1. Correlation coefficients
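The paper derives these weights with a Monte Carlo sensitivity analysis in RISK 4.0, whose exact model is not described. The sketch below is therefore only a plausible approximation of the procedure: each attribute is sampled from a distribution fitted to hypothetical respondent ratings, a simple additive effort model is assumed as the simulation output, and the correlation of each input with that output plays the role of the attribute weight in Table 1. The ratings, distribution choice and effort model are all assumptions, not the paper's survey data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Survey ratings (1-10) per attribute; the values below are hypothetical
# placeholders, not the eleven respondents' actual answers.
ratings = {
    "Requirements stability": [3, 4, 2, 5, 3, 4, 2, 3, 4, 3, 5],
    "Artifacts":              [8, 7, 9, 8, 6, 9, 8, 7, 8, 9, 7],
    # ... the other ten attributes ...
}

N = 10_000  # Monte Carlo iterations
samples = {name: rng.normal(np.mean(r), np.std(r) + 1e-6, N)
           for name, r in ratings.items()}      # stand-in for the fitted input distributions

effort = sum(samples.values())                  # assumed additive effort model (output)

# Correlation of each input with the output, analogous to the coefficients in Table 1
weights = {name: np.corrcoef(s, effort)[0, 1] for name, s in samples.items()}
print(weights)
```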

The obtained weights were used in estimating system test points, which is explained in the next section of the paper.

4.0. System Test Point (STP) Estimation



The STP estimation table is shown in Table 2. The factors affecting system testing are listed in Table 2; the manager provides the corresponding rating (A) as per the instructions for rating the attributes, and the weight (B) is obtained from Table 1. The STP method was applied to projects M and N, and the values obtained from estimation were observed to be approximately the same as the actual results. The test managers provided the ratings for projects M and N shown in Table 2, and the system test points and percentage of system test effort were then estimated. The total size of the project in function points is used for calculating the percentage of system test effort. System test points are estimated using eq.1 and the percentage of system test effort using eq.2:

Estimated System Test Points (ESTP) = (Σ(A * B) / (ΣB * 120)) * NR    (1)

Percentage of Estimated System Test Effort (PESTE) = (ESTP / FP) * 100    (2)

where the sums run over the 12 attributes, NR is the number of business requirements and FP is the size of the project in function points.
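To make eq.1 and eq.2 concrete, the sketch below recomputes the estimate for project M from the weights in Table 1 and the ratings, NR and FP values shown in Table 2; the constant 120 is used as given in eq.1.

```python
# Weights (B) from Table 1 and project M's ratings (A) from Table 2,
# in the order the 12 attributes are listed.
weights   = [0.16, 0.267, 0.308, 0.265, 0.181, 0.364,
             0.231, 0.205, 0.323, 0.247, 0.317, 0.343]
ratings_m = [8, 6, 3, 6, 8, 8, 4, 8, 7, 4, 4, 4]

def estimate_stp(ratings, weights, nr, fp):
    """eq.1 and eq.2: estimated system test points and % of system test effort."""
    weighted_sum = sum(a * b for a, b in zip(ratings, weights))
    estp = (weighted_sum / (sum(weights) * 120)) * nr   # eq.1
    peste = (estp / fp) * 100                           # eq.2
    return estp, peste

estp, peste = estimate_stp(ratings_m, weights, nr=460, fp=80)
print(round(estp, 2), round(peste, 2))   # ~21.74 STP and ~27.17 %, matching Table 2
```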

The results of the estimation for project M and N are also shown in Table 2. System test effort can be estimated based upon the percentage of system test effort and the project effort. Rating (A) Weight (B) Project M Project N 0.16 8 0.267 6 0.308 3 0.265 6 0.181 8 0.364 8 0.231 4 0.205 8 0.323 7 0.247 4 0.317 4 0.343 4 3.211 A*B Project M Project N 2 1.28 0.32 1 1.602 0.267 8 0.924 2.464 8 1.59 2.12 4 1.448 0.724 6 2.912 2.184 0.924 0.924 4 9 1.64 1.845 6 2.261 1.938 2 0.988 0.494 1.268 1.268 4 4 1.372 1.372 21.738 460 80 27.17 14.05 340 52 27.02

Attribute Requirements stability Software stability Test process tools Testing tools Type of testing Artifacts System knowledge of the tester Complexity Reusable test cases Environment Application Reliability B Estimated System Test Points(ESTP) Number of business requirements (NR) Function points (FP) Percentage of Estimated System Test Effort (PESTE)

Table 2. STP applied for project M and N

5.0. System Test Point (STP) Evaluation


STP has been evaluated for two projects, project M and project N. The actual system test points were calculated based on the actual system test effort, which is calculated using eq.3.

Actual System Test Effort (in person-months) = Requirements review effort of test team + Design review effort of test team + Test artifact development effort + Test artifact review effort + Manual testing effort + Automated testing effort + Performance testing effort    (3)

The actual percentage of system test effort is calculated using eq.4, based on the testing metric suggested by Ravichandran and Shareef [6]:

Percentage of Actual System Test Effort (PASTE) = (Actual system test effort / Total effort of the project) * 100    (4)

The actual system test points are calculated using eq.5:

Actual System Test Points (ASTP) = (Actual system test effort / Total effort of the project) * Function points    (5)
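A minimal sketch of eq.3 to eq.5 follows. The breakdown of effort components is hypothetical (the paper reports only the totals), and the total project effort of 31 person-months for project M is implied by its reported 7 person-months of system test effort and 22.58% PASTE rather than stated directly.

```python
def actual_system_test_effort(components: dict[str, float]) -> float:
    """eq.3: sum of the test team's effort components, in person-months."""
    return sum(components.values())

def paste(actual_effort: float, total_project_effort: float) -> float:
    """eq.4: percentage of actual system test effort."""
    return actual_effort / total_project_effort * 100

def astp(actual_effort: float, total_project_effort: float, fp: float) -> float:
    """eq.5: actual system test points."""
    return actual_effort / total_project_effort * fp

# Hypothetical breakdown summing to project M's reported 7 person-months
effort_m = {"requirements review": 0.5, "design review": 0.5,
            "test artifact development": 2.0, "test artifact review": 0.5,
            "manual testing": 2.5, "automated testing": 0.5,
            "performance testing": 0.5}

actual = actual_system_test_effort(effort_m)                              # 7.0
print(paste(actual, total_project_effort=31), astp(actual, 31, fp=80))    # ~22.6 %, ~18.1 STP
```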

The actual system test effort and system test points for projects M and N are listed in Table 3.

                                                   Project M   Project N
Actual system test effort (person-months)          7           24
Percentage of actual system test effort (PASTE)    22.58       32
Actual system test points (ASTP)                   18.06       16.64

Table 3. Actual values of projects M and N

The variation between the estimated and actual values was then calculated. The percentage of effort spent on system testing is approximately the same as the estimated percentage of system test effort for both projects. The difference between the percentage of actual and estimated system test effort (PASTE - PESTE) and the difference between the actual and estimated system test points (ASTP - ESTP) were calculated for projects M and N; the results are shown in Table 4.

            PASTE    PESTE    PASTE - PESTE    ASTP     ESTP      ASTP - ESTP
Project M   22.58    27.17    -4.58            18.06    21.738    -3.67
Project N   32       27.02    4.98             16.64    14.05     2.59

Table 4. Variation of actual and estimated values
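The differences in Table 4 follow directly from the estimated and actual figures; the short sketch below reproduces them and flags each project as over- or under-estimated in the sense used for Figure 4 (a negative difference means the estimate exceeded the actual value).

```python
projects = {  # PASTE, PESTE, ASTP, ESTP taken from Tables 2 and 3
    "M": {"paste": 22.58, "peste": 27.17, "astp": 18.06, "estp": 21.738},
    "N": {"paste": 32.0,  "peste": 27.02, "astp": 16.64, "estp": 14.05},
}

for name, p in projects.items():
    d_effort = round(p["paste"] - p["peste"], 2)   # difference in % of test effort
    d_points = round(p["astp"] - p["estp"], 2)     # difference in system test points
    verdict = "overestimated" if d_effort < 0 else "underestimated"
    print(name, d_effort, d_points, verdict)
# M -4.59 -3.68 overestimated    (Table 4 reports -4.58 and -3.67 after rounding)
# N  4.98  2.59 underestimated
```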

The variation in the actual and estimated STP and percentage of test effort is plotted in the estimation accuracy chart shown in Figure 4. Data points below 0 represent overestimation and data points above 0 represent underestimation. From Figure 4 it can be concluded that project M was overestimated and project N was underestimated. The estimation accuracy chart gives the manager information about the variation in estimation. For example, the percentages of actual and estimated test effort vary only slightly for projects M and N, which suggests that only a small adjustment of the weights is needed to fine-tune the estimation; the managers need to update the weights only when the actual system test points consistently vary from the estimated system test points.

6.0. Conclusion
The framework proposed for calculating system test effort using system test points provides guidance to managers in estimating the effort required for system testing. The system test point metric is calculated based on the attributes of system testing and their weights, and its application to the two projects demonstrated positive results.


[Figure 1 shows the V-model of software development: Requirements corresponds to System Testing, Design to Integration Testing, and Construction/Implementation to Unit Testing.]

Figure 1. V-Model of software development

[Figure 2 depicts the system testing activities alongside the Requirements, Design, Construction and System Testing phases. The artifacts shown include the Requirements Doc, Design Doc, Test Strategy, Test Plan, Test Automation Assessment, Test Assessment, Test Cases and Test Results, each subject to review, with test execution taking place during the System Testing phase.]

Figure 2. System testing activities


[Figure 3 plots the calculated correlation coefficients of the 12 attributes (I2 to I13) on an axis from -1 to 1, sorted from the largest (I7, 0.364) to the smallest (I2, 0.16); the values correspond to those listed in Table 1.]

Figure 3. Calculated correlation coefficients of the 12 attributes

[Figure 4 plots the variation in STP and in percentage of test effort for each project on a scale from -10 to 6. Project M lies below zero (overestimation) and project N lies above zero (underestimation).]

Figure 4. Estimation accuracy chart


References:
1. Linda Rosenberg, Lawrence Hyatt, Theodore Hammer, Lenore Huffman and William Wilson, "Testing Metrics for Requirement Quality", 2nd Quality Week Europe '98 Conference, Belgium, Nov 1998.
2. George E. Stark, Robert C. Durst and Tammy M. Pelnik, "An Evaluation of Software Testing Metrics for NASA's Mission Control Center", Software Quality Journal, vol. 1, Jun 1992, pp. 115-132.
3. Tim Menzies and Bojan Cukic, "When to Test Less", IEEE Software, pp. 107-112, Sep 2000.
4. Judith Barnard, "A New Reusability Metric for Object-Oriented Software", Software Quality Journal, vol. 7, no. 1, 1998, pp. 35-50.
5. M. Paradiso and L. Scaraggi, "Test Process Improvement", ESSI Number 21385, April 1997.
6. S. Ravichandran and P. Mohammed Shareef, "Software Process Assessment Through Metric Models", European Software Control and Metrics Conference, April 2001.
7. Joachim Wegener, Matthias Grochtmann and Bryan Jones, "Testing Temporal Correctness of Real-Time Systems by Means of Genetic Algorithms", Quality Week, 1997.
8. Will, "Test Performance Benchmarking", technical white paper, Paroxys.
9. Yashodhan B. Gokhale, "Measuring Software Reuse", white paper, Dept. of Computer Science, Texas A&M University.
10. R. A. Paul, "Metrics-Guided Reuse", Proceedings of the Seventh International Conference on Tools with Artificial Intelligence, 5-8 November 1995, pp. 120-127.
11. Ana Isabel Cardoso, Rui Gustavo Crespo and Peter Kokol, "Two Different Views About Complexity", European Software Control and Metrics Conference, April 2000, pp. 433-438.
12. Wanda J. Orlikowski, "CASE Tools as Organizational Change: Investigating Incremental and Radical Changes in Systems Development", Management Information Systems Quarterly, vol. 17, no. 3, Sep 1993.
13. Erik van Veenendaal and Julie McMullan, Achieving Software Product Quality, UTN Publishers, Den Bosch, The Netherlands, 1997.
14. Barry Boehm, Software Engineering Economics, Prentice-Hall, New Jersey, 1981.

Acknowledgements: We would like to thank Palisade Corporation for providing us with a trial version of RISK 4.0.


Appendix: Questionnaire

Name:
Software Experience:
Software Testing Experience:
Role (Test Manager / Project Manager / Tester / Metrics / SEPG):

S. No   Attribute                                                                                                Rating
1       What is the effect of Requirements Stability on system test effort?
2       What is the effect of Software Stability on system test effort?
3       What is the effect of Testing Process Tools on system test effort?
4       What is the effect of Testing Tools on system test effort?
5       What is the effect of Type of Testing on system test effort?
6       What is the effect of Number of Artifacts on system test effort?
7       What is the effect of System Knowledge of the Tester on system test effort?
8       What is the effect of Complexity of the System on system test effort?
9       What is the effect of Reusable Test Cases on system test effort?
10      What is the effect of Environment on system test effort?
11      What is the effect of Application on system test effort?
12      What is the effect of the Reliability Requirement of the Software on system test effort (number of defects the customer is expecting)?

Rating scale: Very Little: 1-2; Little: 3-4; Moderate: 5-6; High: 7-8; Very High: 9-10


Mahesh Kumar Kuruba received his Master of Engineering degree in Manufacturing Systems from Birla Institute of Technology and Science, Pilani in 1999. He is currently working as an assistant systems engineer in Tata Consultancy Services, Mumbai. He has been performing various roles in software development and was a test lead for the American Exchange Surveillance Automation program. He has published papers in international conferences. His research interests include software quality, software productivity, process improvement, total quality management and software test management.

Prof. A. K. Verma received his B.Tech (Hons) in Electrical Engineering and Ph.D (Engineering) from the Indian Institute of Technology, Kharagpur. He is currently a Professor in Reliability Engineering in the department of Electrical Engineering, Indian Institute of Technology, Mumbai. He has published about 60 research papers in journals and conferences. He is also on the editorial board of international journals and has been a guest editor of a special issue of the IETE Technical Review on Quality Management. His research interests include software reliability, reliable computing, reliability-centered maintenance and reliability in engineering design. He has been a principal investigator of research funded projects, and was the conference chairman of the International Conference on Quality, Reliability and Control, 2001 (ICQRC-2001) and the International Conference on Multimedia and Design, 2002 (ICMD-2002). He is a senior member of IEEE and Fellow (Life) of IETE.

Dr. Gargi Keeni has over 20 years of experience in software project management, software tools development and software process improvement. Under her leadership as the Corporate Quality Head, 15 development centers of Tata Consultancy Services (TCS) were assessed at Software CMM Level 5, and TCS was the first organization to be assessed at Level 4 of the People CMM v2.0. A Vice President at TCS, she currently heads the Quality Consulting practice and is involved in assisting organizations in achieving their software process improvement goals. An SEI-authorized Software CMM Lead Assessor, People CMM Lead Assessor, Candidate Lead Appraiser for SCAMPI, examiner for the Tata Business Excellence Model (based on the MBNQA quality criteria of the USA) and a Certified Quality Analyst, Dr. Keeni has published several papers in international conferences and journals on software process improvement. Her current research interests include software process improvement and quality management systems. She is a member of the Computer Society of India and IEEE. A doctorate in Nuclear Physics from Tohoku University, Japan, she was a research fellow at the Saha Institute of Nuclear Physics, Calcutta and a systems engineer at Fujitsu, Japan.

Prof. A. Srividya received her M.Tech in Reliability Engineering and Ph.D from IIT Bombay. She is currently an Associate Professor in Reliability Engineering. She has published many papers in the areas of quality and reliability, and her research interests include the application of quality tools and techniques in software and the service sector. She has been a guest co-editor of the special issue of the IETE Technical Review on quality management and was the co-chairperson of the International Conference on Quality, Reliability and Control, 2001 (ICQRC-2001).

