Fundamental metrics
Rule: size is measured either as lines of code (LOC), as the number of function points, or as change complexity.
Rules:
For measuring LOC:
• Count statements that are executed or interpreted.
• Count each statement only once.
• Do not count blank lines and comments.
For measuring function points:
• Use the IFPUG guide.
For estimating and measuring the complexity of change in maintenance requests:
• Define program complexity and change complexity (for example, complexity can be rated as simple = 1, medium = 3, or complex = 5).
Notes:
1. For development projects, size is estimated at the initiation, analysis and design stages. Size is measured after the code is constructed.
2. For conversion projects, the size of the source code is measured and the size of the converted code is estimated in the initiation and design phases. After the code is converted to the target platform, its size is measured.
3. TCS-QMS-103, Software Estimation Guidelines, provides details on size estimation.
4. Size estimation and measurement are done at functionality level and aggregated to get the application-level figure.
5. For individual maintenance requests, use complexity as the measure of size.
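The LOC counting rules above can be sketched in code. This is a minimal illustration, assuming '#'-style line comments; block comments and statements spanning multiple lines are out of scope.

```python
def count_loc(source: str) -> int:
    """Count executable statements: skip blank lines and comment-only lines."""
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:                # rule: do not count blank lines
            continue
        if stripped.startswith("#"):    # rule: do not count comments
            continue
        loc += 1                        # rule: count each statement once
    return loc

sample = """# utility module

def add(a, b):
    return a + b
"""
print(count_loc(sample))  # 2
```

A real counter would follow the full counting conventions of the language being measured; this sketch only encodes the three rules stated above.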
Estimation:
1. Effort is estimated at the initiation of the project and recalculated at every phase.
2. For development projects, effort estimation is done at functionality level and aggregated up to the application level.
3. For conversion projects, effort estimation is done at module level and aggregated up to the application level.
4. The effort for SDLC phases is calculated by apportioning the total effort. An example apportioning (refer to the estimation guidelines for details):
analysis = 25%; design = 20%; construction = 38%; system testing = 17%
5. For maintenance projects, effort estimation is done at maintenance request level.
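The apportioning in point 4 can be sketched as a small helper, using the example percentages above (the actual shares come from the estimation guidelines).

```python
# Example phase shares from the text; real projects take these from the
# estimation guidelines.
APPORTIONMENT = {"analysis": 0.25, "design": 0.20,
                 "construction": 0.38, "system testing": 0.17}

def apportion_effort(total_person_days: float) -> dict:
    """Split a total effort estimate across SDLC phases."""
    return {phase: total_person_days * share
            for phase, share in APPORTIONMENT.items()}

print(apportion_effort(100))
# {'analysis': 25.0, 'design': 20.0, 'construction': 38.0, 'system testing': 17.0}
```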
Notes:
1. Productivity for a given technology is derived from projects done earlier. Adjust the productivity figures if the phases or life cycle differ from the original project.
2. If productivity figures are not available for estimation, estimate effort using the COCOMO model.
3. Measure the actual effort spent in each phase by functionality or module, as was done in the estimation. Effort spent on testing, reviews and rework is also recorded.
1. In the above example, if the change in size estimation at the design stage was due to a change request from the customer, the size deviation is calculated as (100 - 105) * 100 / 105 = -4.8%.
2. Size deviation is calculated for:
a. Development projects
b. Conversion projects where the conversion involves rewriting of code
3. Size deviation is calculated at each phase end. For spiral/iterative development projects it is calculated at each delivery. The estimated size is taken from the requirements phase, or from when the last change in requirements was baselined.
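The size deviation formula used in the worked example can be written as a one-line function:

```python
def size_deviation_pct(estimated: float, actual: float) -> float:
    """Size deviation % = (estimated - actual) * 100 / actual."""
    return (estimated - actual) * 100 / actual

# The worked example above: estimated 100, actual 105
print(round(size_deviation_pct(100, 105), 2))  # -4.76
```

A negative value means the actual size exceeded the estimate.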
In a development project, the life-cycle effort for a delivery module was estimated as 100 person-days. The effort spent in the analysis phase was 40 days, in the design stage 22 days, in construction 48 days, and in testing 15 days. On the basis of the standard apportioning of effort to life-cycle phases, calculate the effort slip and interpret the data:
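A sketch of this exercise, assuming effort slippage per phase is (actual - apportioned) * 100 / apportioned, with the 100 person-day estimate split per the standard apportioning given earlier (25/20/38/17):

```python
# Apportioned (planned) effort for a 100 person-day estimate, in days
planned = {"analysis": 25.0, "design": 20.0,
           "construction": 38.0, "system testing": 15.0 + 2.0}  # 17.0
actual = {"analysis": 40.0, "design": 22.0,
          "construction": 48.0, "system testing": 15.0}

for phase, p in planned.items():
    slip = (actual[phase] - p) * 100 / p
    print(f"{phase}: {slip:+.1f}%")
# analysis: +60.0%, design: +10.0%, construction: +26.3%, system testing: -11.8%
```

One reading of the data: the large analysis overshoot suggests the requirements were more complex than estimated, which then propagated into design and construction overruns.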
In the previous example on schedule slippage, calculate the % end timeliness for the A&D document and the tested code.
For the A&D document:
1. The % end timeliness is computed for each deliverable identified in the plan, for all types of projects. Even if a deliverable is not produced at every phase, calculating the end timeliness at the end of each phase is recommended.
2. This metric is part of service quality. Missed deliveries can be derived as deliveries where end timeliness > 0%, if the delivery is committed as per the plan. Size deviation, effort slippage and schedule slippage are the lead process metrics for % end timeliness.
1. During the defect fixing of a complex maintenance request, a review of the code fix and a subsequent regression test were conducted. Two defects were logged in the review and one defect in the regression testing. Calculate the defect density.
For defect fixing in maintenance projects, the defect density is calculated over the overall cycle. The size of a complex request is 5, so:
defect density = 3/5 = 0.6 defects per complexity point
2. In a development project, 3 defects were reported during the design review. The design was developed for a module called "warranty", whose estimated size is 10 KLOC. Calculate the defect density.
For development projects, defect density is calculated in each phase at the single-review level.
defect density at design phase = 3/10 = 0.3 defects/KLOC
1. Cumulative defect density can be calculated by adding the individual values across the phases at delivery/module level.
2. Defect density is calculated for all types of projects.
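Both worked examples reduce to the same ratio; a minimal sketch:

```python
def defect_density(defects: int, size: float) -> float:
    """Defect density = defects / size.

    Size is in KLOC for development work and in complexity points
    for maintenance requests.
    """
    return defects / size

# Maintenance example: (2 review + 1 regression) defects over size 5
print(defect_density(3, 5))   # 0.6
# Development example: 3 design-review defects over 10 KLOC
print(defect_density(3, 10))  # 0.3
```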
1. In a maintenance project, the following reviews were conducted for a complex request:
a. Impact analysis document review: 1 defect reported
b. Code fix review: 2 defects reported
The following tests were conducted:
a. Unit testing of the code fix: 1 defect reported
b. Regression testing of the module: no defects reported
Calculate the review effectiveness.
Total number of defects found in reviews = 3; total number of defects found in testing = 1
Review effectiveness = (3/4) * 100 = 75%
1. Higher review effectiveness implies that more defects are removed in reviews. The cost of fixing a defect found during review is much lower than the cost of fixing a defect found in testing.
2. Review effectiveness is calculated for deliverables. For maintenance projects it is calculated at request level.
3. Review effectiveness is also applied in narrower contexts, such as the effectiveness of code review:
total number of errors found in code review
------------------------------------------------------------------------ * 100
total number of errors found in code review + unit testing
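The ratio above can be sketched as a function, using the figures from the maintenance example (3 defects in reviews, 1 in testing):

```python
def review_effectiveness(review_defects: int, testing_defects: int) -> float:
    """Review effectiveness % = review defects / (review + testing defects) * 100."""
    return review_defects * 100 / (review_defects + testing_defects)

print(review_effectiveness(3, 1))  # 75.0
```

The same function covers the narrower code-review case by passing code-review and unit-testing defect counts.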
The defects found in the acceptance phase for the previous example on phase containment effectiveness were 2. Consider that only a single delivery is made after system testing. Calculate the total defect containment effectiveness.
1. In a maintenance project, 40 requests were raised in the last six months. All the requests were serviced after making corrective changes in the code. In that period, 2 bad fixes were reported. Calculate the % bad fixes for this period.
2. Number of requests = 40
Number of bad fixes = 2
% bad fixes = (2/40) * 100 = 5%
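As a sketch, the % bad fixes calculation is:

```python
def pct_bad_fixes(bad_fixes: int, requests: int) -> float:
    """% bad fixes = bad fixes / requests serviced * 100."""
    return bad_fixes * 100 / requests

# The example above: 2 bad fixes out of 40 serviced requests
print(pct_bad_fixes(2, 40))  # 5.0
```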
A maintenance project had an SLA of 8 hours to complete requests for service at level 2 support. In the last month the project had 12 level 2 support calls and took 100 hours in total to complete all these requests. Calculate the RTI.
Estimated/agreed mean time of closure = 8 hours
Actual mean time of closure = 100/12 = 8.33 hours
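A sketch of the calculation, assuming the RTI is the ratio of the actual mean closure time to the agreed (SLA) mean closure time, so that values above 1 indicate the SLA is being missed on average:

```python
def rti(actual_total_hours: float, num_requests: int, sla_hours: float) -> float:
    """RTI = actual mean time of closure / agreed mean time of closure (assumed)."""
    actual_mean = actual_total_hours / num_requests
    return actual_mean / sla_hours

# The example above: 100 hours over 12 calls against an 8-hour SLA
print(round(rti(100, 12, 8), 2))  # 1.04
```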
Calculate the BMI for the project for the month (no SLA defined).
BMI = 18 / (12 + 10) = 0.82
1. The BMI and its trend indicate how effectively the queue of requests is managed and whether resources are adequate.
2. If the customer maintains the queue, the BMI is not calculated by TCS.
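A sketch of the calculation, assuming the backlog management index (BMI) is the number of requests closed divided by the number received in the period; the figures (18 closed, 12 + 10 received) come from the example above:

```python
def bmi(closed: int, received: int) -> float:
    """BMI = requests closed / requests received in the period (assumed)."""
    return closed / received

print(f"{bmi(18, 12 + 10):.2f}")  # 0.82
```

A BMI below 1 means the backlog grew during the period; a sustained value below 1 suggests the queue is not being kept under control.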
1. SLA compliance is calculated only when the customer has agreed to the SLA definitions for the various severities.
2. SLA compliance is also calculated for internal services such as IDM and QA services.
1. The preventive cost includes the cost of training, developing methods/procedures, and defect prevention activities.
2. The appraisal cost includes the cost of inspections, reviews, testing and associated software.
3. The failure cost includes the cost of rework due to failures and defects.
Metrics: each of the above metrics is expressed in units of %.