
ABSTRACT


Table of Contents

ABSTRACT
1. INTRODUCTION
2. MEASURES, METRICS, AND INDICATORS
3. Types of Metrics
   3.1 Product Metrics
       3.1.1 Object Oriented Metrics
       3.1.2 Communication Metrics
       3.1.3 Design Metrics
   3.2 Process Metrics
   3.3 Requirements Metrics
       3.3.1 Use Case/Size Metrics
       3.3.2 Requirements Traceability Metrics
       3.3.3 Requirements Completeness Metric
       3.3.4 Requirements Specification Metrics
       3.3.5 Requirements Volatility Metric
Bibliography


1. INTRODUCTION
The issue of quality and quality assurance has become important throughout
the industrialized world. It has become particularly important for software
because of the increasing reliance of business on software and because of the
notorious reputation that software enjoys. As a minimum, software is of good
quality if it (1):

Meets its users' needs
Is reliable (i.e., it does not fail)
Is easy to maintain (i.e., to correct or change)

There are, of course, other factors involved in quality. Quality assurance
means the measures that are taken to ensure that software is of good quality (2).
A vital aspect of quality is measurement. In software engineering, measures
are termed metrics. A metric is a measure, such as cost or size, which can be
applied to software. Metrics can be obtained and applied throughout the
software development process. Metrics address issues like (1):

How big will the software be?
How complex is this component?
How reliable is this software?
How much will this software cost?
How much effort will be needed to alter this software to meet changed needs?

2. MEASURES, METRICS, AND INDICATORS


Although the terms measure, measurement, and metrics are often used
interchangeably, it is important to note the subtle differences between them.
A measure provides a quantitative indication of the extent, amount, dimension,
capacity, or size of some attribute of a product or process. Measurement is the
act of determining a measure. The IEEE Standard Glossary of Software
Engineering Terms defines a metric as "a quantitative measure of the degree to
which a system, component, or process possesses a given attribute" (2).
When a single data point has been collected (e.g., the number of errors
uncovered in the review of a single module), a measure has been established.
Measurement occurs as the result of the collection of one or more data points
(e.g., a number of module reviews are investigated to collect measures of the
number of errors found in each). A software metric relates the individual
measures in some way (e.g., the average number of errors found per review, or
the average number of errors found per person-hour expended on reviews) (2).
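To make the distinction concrete, the sketch below (with hypothetical review data) builds a measurement from individual measures and derives two metrics from it:

```python
# Hypothetical data: each tuple is one module review,
# recording (errors uncovered, person-hours expended).
reviews = [(4, 2.0), (7, 3.5), (2, 1.0)]

# Collecting the data points is the measurement...
total_errors = sum(errors for errors, _ in reviews)
total_hours = sum(hours for _, hours in reviews)

# ...and relating the individual measures yields metrics.
avg_errors_per_review = total_errors / len(reviews)
errors_per_person_hour = total_errors / total_hours

print(f"Average errors per review: {avg_errors_per_review:.2f}")   # 4.33
print(f"Errors per person-hour:    {errors_per_person_hour:.2f}")  # 2.00
```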
A software engineer collects measures and develops metrics so that
indicators can be obtained. An indicator is a metric or a combination of metrics
that provides insight into the software process, a software project, or the
product itself [RAG95]. An indicator provides insight that enables the project
manager or software engineers to adjust the process, the project, or the
product to make things better (2).
For example, suppose four software teams are working on a large software
project. Each team must conduct design reviews but is allowed to select the
type of review that it will use. Upon examination of the metric errors found per
person-hour expended, the project manager notices that the two teams using more
formal review methods exhibit an errors-found-per-person-hour figure that is
40 percent higher than the other teams. Assuming all other parameters are equal,
this provides the project manager with an indicator that formal review methods
may provide a higher return on time investment than other, less formal review
approaches. She may decide to suggest that all teams use the more formal
approach. The metric provides the manager with insight, and insight leads to
informed decision making (2).

3. Types of Metrics
Different types of metrics are used to measure software. Metrics are categorized
into product, process, and resource metrics (3).

3.1 Product Metrics


Product metrics measure the software product at any stage of software
development; they can be applied from the requirements phase through
installation. These metrics measure the size of the program, the number of
pages of documentation, and the complexity of the software design. A primitive
metric, or direct measurement, of an attribute involves no other entity and is
independent of other entities; examples are LOC, the duration of tests
(measured in hours or months), and the number of defects discovered. The size
of software can be measured by its length, functionality, and complexity. LOC
is a common criterion used to measure length: the physical size of the program
code. There are different notations for writing code; source may contain
executable statements, blank lines, comments, and headers. The industry
standard for measuring LOC is to count non-commented code: of the declarations,
header statements, and executable statements a program contains, only
executable statements are counted as LOC, and commented lines are ignored. The
Software Engineering Institute (SEI) suggests measuring code with automatic
code generators in mind, and also suggests differentiating physical lines of
code from logical lines of code, and reused or copied-and-modified code (3).
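As a rough sketch of how such a counter might work, the function below classifies physical lines as blank, comment, or non-commented code (NCLOC). It assumes Python-style "#" comments purely for illustration; a real counter would handle each language's comment syntax and, per the SEI guidance, distinguish generated, reused, and modified code:

```python
def count_loc(source: str) -> dict:
    """Classify physical lines as blank, comment, or code (NCLOC)."""
    counts = {"physical": 0, "blank": 0, "comment": 0, "ncloc": 0}
    for line in source.splitlines():
        counts["physical"] += 1
        stripped = line.strip()
        if not stripped:
            counts["blank"] += 1
        elif stripped.startswith("#"):      # full-line comment (assumed syntax)
            counts["comment"] += 1
        else:
            counts["ncloc"] += 1            # counted as a line of code
    return counts

example = "# header comment\nx = 1\n\nprint(x)\n"
print(count_loc(example))  # {'physical': 4, 'blank': 1, 'comment': 1, 'ncloc': 2}
```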

3.1.1 Object Oriented Metrics


Object-oriented metrics are helpful for the allocation of resources in software
development. These metrics are particularly used to identify fault-prone classes
and to predict maintenance effort, error proneness, and error rate (3).
Weighted Methods per Class (WMC) measures the complexity of a class; it is
computed as the sum of the complexities of the class's methods. Depth of
Inheritance Tree (DIT) measures the number of ancestors of a class. Response
For a Class (RFC) measures the number of methods that can be executed on
objects of the class (3).
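A minimal sketch of WMC, assuming the per-method complexities have already been obtained from a cyclomatic-complexity tool (the class and figures below are hypothetical; with every weight set to 1, WMC reduces to a simple method count):

```python
# Hypothetical per-method cyclomatic complexities for one class.
method_complexity = {"open": 1, "read": 3, "parse_header": 5, "close": 1}

# WMC is the sum of the complexities of the class's methods.
wmc = sum(method_complexity.values())
print(f"WMC = {wmc}")  # WMC = 10
```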
Bellini [7] categorizes different groups of object measures. The first group
measures structure: methods that measure the complexity of objects, the number
of classes (which gives an idea of overall complexity), the number of messages,
and global variables. Object-oriented metrics are mainly limited to changes
during the design and implementation phases of software development (3).
3.1.2 Communication Metrics
Communication within a team is known as intra-team communication, while
communication among members of different teams is known as inter-team
communication. Communication artifacts, for example electronic mail, weekly
meetings, liaisons, and informal interactions, are available across the entire
project and capture information such as code, politics, and process.
Communication metrics therefore give a good picture of the software development
process. These metrics measure the incidents of a specified type within a
process; they are available in the initial phases of software development and
are easier to collect. As communication takes place regularly during software
development, miscommunication can result in costly defects later (3).

3.1.3 Design Metrics


Design metrics, in contrast to traditional metrics, do not rely on syntax or
data structure. These metrics are computed from requirements or design
documents before the system has been implemented, and they are calculated on
the basis of the code's functionality in the problem domain: they estimate the
psychological complexity of a module instead of the number of paths through a
piece of code. Because these metrics are independent of code structure, they
can be calculated from design specifications, which allows an earlier sense of
the quality of the software (3).
Semantic metrics include Logical Relatedness of Methods (LORM), the number of
semantic relations in a class divided by the number of pairs of methods in the
class; Class Domain Complexity (CDC), the sum of the concepts associated with a
class together with their associated conceptual relations; and Class Overlap
(CO), the sum of the concepts in common between two classes. Stein discusses
relationships of outputs in design documents: the coincidental relationship,
where two outputs are independent of each other; the conditional relationship,
where two outputs depend on the same input; and the functional relationship,
where a module has only one output. Design metrics are a useful means of
improving the quality of software (3).
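A small sketch of the LORM calculation under these definitions (the methods and semantic relations below are hypothetical; identifying the relations themselves requires semantic analysis of the design documents):

```python
from itertools import combinations

methods = ["deposit", "withdraw", "get_balance", "print_statement"]
relations = [("deposit", "withdraw"), ("withdraw", "get_balance")]

# LORM = number of semantic relations / number of method pairs.
pairs = len(list(combinations(methods, 2)))  # C(4, 2) = 6
lorm = len(relations) / pairs
print(f"LORM = {lorm:.2f}")  # LORM = 0.33
```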

3.2 Process Metrics


These metrics focus on software development and maintenance. They are used to
assess people's productivity (known as private metrics), the productivity of
the entire organization (known as public metrics), and software process
improvement. Process assessment is achieved by measuring specific attributes of
the process, developing a set of metrics based on the identified attributes,
and, finally, using the metrics to provide indicators that lead to the
development of process improvement strategies. Private metrics are designed to
help individual team members in self-assessment, allowing an individual to
track work tasks and evaluate personal productivity. Public metrics, on the
other hand, help evaluate the organization (or a team) as a whole, allowing
teams to track their work and evaluate the performance and productivity of the
process. A good example is a team's effectiveness in eliminating defects during
development, detecting defects through testing, and improving response time for
fixes (4). See Table 3-1 (3):

Category                      Formula

Programmer productivity       LOC produced / person-months of effort
Module defect density         no. of defects / module size
Defect detection efficiency   no. of defects detected / total no. of defects
Requirement stability         no. of initial requirements / total no. of requirements
Test effectiveness ratio      no. of items covered / total no. of items
System spoilage               effort spent fixing faults / total project effort

Table 3-1: Examples of common measures used in software engineering
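The formulas in Table 3-1 translate directly into code. A hedged sketch of three of them, with hypothetical project figures:

```python
def defect_density(defects: int, module_size_kloc: float) -> float:
    return defects / module_size_kloc          # defects per KLOC

def defect_detection_efficiency(detected: int, total_defects: int) -> float:
    return detected / total_defects

def system_spoilage(fix_effort: float, total_effort: float) -> float:
    return fix_effort / total_effort           # both in person-hours

print(defect_density(42, 12.5))                # 3.36 defects per KLOC
print(defect_detection_efficiency(38, 42))     # ~0.905
print(system_spoilage(120.0, 1500.0))          # 0.08
```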

3.3 Requirements Metrics


Requirements engineering deals with the elicitation, analysis, communication,
and validation of requirements. The earlier errors are identified, the easier
they are to fix: the cost of fixing errors in the initial stages is much lower
than fixing them in later stages of software development. Many of these errors
are caused by changes in the requirements, so in order to eliminate errors
there must be some measurement. Metrics that can be gathered from requirements
are the size of the requirements, requirements traceability, requirements
completeness, and requirements volatility (3).

3.3.1 Use Case/Size Metrics


Size is an important and general metric used to measure requirements. LOC is a
common metric used to measure the size of software; comment lines and blank
lines are not counted as code, while a non-commented line, which includes
executable commands, executable statements, and data declarations, is counted
as code. Use cases can also serve as a size measurement when they are used to
describe requirements, for example the number of use cases, the number of
functions covered, and so on (3).
The use case model is used to ensure that the right system is built. Use cases
are at a high level of abstraction; however, they can capture only the
functional requirements of a system, and non-functional requirements are not
captured in use cases. Bellahsene [1] discusses how metrics for use cases can
be derived by counting the number of actors according to their weights, the
number of use cases according to the number of transactions they perform, and
the number of analysis classes used to implement each use case according to
their complexity (3).
A cumulative sum, Unadjusted Use Case Points (UUCP), is derived from use case
points (UCP); these are weights analogous to function points, and effort is
calculated as the number of hours per point multiplied by the point total. An
atomic action is an action that cannot be decomposed further to a deeper level,
and it is used as the unit of measurement for use cases. Bellahsene [1]
categorized use case metrics into three categories. Use case size metrics
include the number of atomic actions in the main flow, the number of atomic
actions in the alternative flows, the longest path between the first and last
atomic actions of the use case, and the number of alternative flows; here,
alternative flows are measured from the start to the end of the use case. The
size of a use case can also be determined by counting the number of actors
weighted by their complexity, the number of use cases weighted by the number of
transactions they contain, and the number of analysis classes used to implement
each use case. Use case environment metrics are the factors that influence the
complexity level of the use cases; they are independent of the size of the use
cases and include the number of stakeholders, the number of actors, and the
total number of goals. Use case composite metrics are derived from the size
metrics and the environment metrics; they include the total number of atomic
actions in the alternative flows, the total number of atomic actions in all
flows, the number of atomic actions per actor, the number of atomic actions per
goal, and the number of goals per stakeholder (3).
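A sketch of the UUCP computation. The 1/2/3 actor weights, the 5/10/15 use case weights, and the 20 hours-per-point productivity figure are the common textbook values and are assumptions here, not values given in the source:

```python
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

# Hypothetical system: three actors and four use cases, classified by complexity.
actors = ["simple", "simple", "complex"]
use_cases = ["simple", "average", "average", "complex"]

uucp = (sum(ACTOR_WEIGHTS[a] for a in actors)
        + sum(USE_CASE_WEIGHTS[u] for u in use_cases))

hours_per_point = 20  # assumed productivity figure
print(f"UUCP = {uucp}, estimated effort = {uucp * hours_per_point} person-hours")
# UUCP = 45, estimated effort = 900 person-hours
```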
3.3.2 Requirements Traceability Metrics
Traceability is the ability to trace requirements in a specification to their
origin, from higher-level to lower-level requirements, through a set of
documented links. Traceability provides information that helps in determining
whether all relationships and dependencies are addressed. It also makes
requirements leakage, the existence of lower-level requirements with no valid
origin at a higher level, visible, and it helps prevent wrong interpretations
of other metrics. There are five types of traceability metrics. The next-level
coverage metric traces the number of requirements to the next level up and the
next level down, in both directions. The full depth and height coverage metric
is similar, but traces requirements to the highest and lowest levels of
specification instead of only the immediate next level in both directions; it
is best suited to specifications with three or fewer levels. Inconsistent
traceability metrics count the number of requirements in a specification that
have at least one inconsistent link in the upward or downward direction.
Undefined traceability metrics count the number of requirements in a
specification that have no traceability links upward or downward; note,
however, that the highest level of specification has no upward links, and the
lowest level has no downward links. The linkage statistics metric measures the
complexity of the traceability data by counting the number of higher- or
lower-level requirements to which each requirement in a specification is
traced (3).
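A minimal sketch of the next-level coverage idea: the share of requirements that trace to at least one requirement one level up (the identifiers and links below are hypothetical):

```python
# Hypothetical upward traceability links for one specification level.
links_up = {
    "SRS-1": ["SYS-1"],
    "SRS-2": ["SYS-1", "SYS-2"],
    "SRS-3": [],              # requirements leakage: no valid origin
}

traced = sum(1 for parents in links_up.values() if parents)
coverage = traced / len(links_up)
print(f"Upward next-level coverage: {coverage:.0%}")  # 67%
```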

3.3.3 Requirements Completeness Metric


Requirements completeness metrics are used to assess whether a requirement is
at the wrong level of the hierarchy or is too complex. Requirements are
considered complete if all pages are numbered, all figures and tables have
captions and numbers, all references are present, all sections and subsections
are numbered, and no 'to be determined' (TBD) items remain. These metrics
quantify the decomposition level of higher-level requirements allocated to a
specification by counting the items marked with a 'to be completed' tag in the
lower-level specification. Requirements completeness metrics include the
following. Requirements decomposition metrics are used to determine whether the
specified requirements have been decomposed sufficiently to be used in the next
phase; a complex function will have many levels of specification, while a
simple function will have few. Requirements specification development metrics
provide information about the work completed and the work remaining for a given
specification; this metric answers the question of whether all higher-level
requirements have sufficient lower-level specifications (3).
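One of these counts, the number of unresolved 'to be determined' items, is easy to automate. A small sketch (tag spellings vary between organizations, so the patterns below are assumptions):

```python
import re

spec_text = """
3.1 The system shall respond within TBD seconds.
3.2 The export format is to be determined.
3.3 The system shall log all transactions.
"""

pattern = re.compile(r"\bTBD\b|\bTBS\b|to be determined|to be supplied", re.I)
print(f"{len(pattern.findall(spec_text))} incomplete requirement(s)")  # 2
```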
3.3.4 Requirements Specification Metrics
These metrics provide information about the number of items for which issue
resolution is needed, as well as about unresolved issues affected by
higher-level requirements. They count the 'to be determined' and 'to be
supplied' tags that must be resolved to complete the requirements document, and
they also trace the number of high-level requirements allocated to the
specification that remain incomplete (3).

3.3.5 Requirements Volatility Metric


The degree to which requirements change over a period of time is called
requirements volatility. This metric is used to assess the reasons for changes
of requirements over a period of time; the degree of change is also checked by
this metric, and both factors are examined to determine whether the changes are
consistent with current development activities. It captures changes such as
additions, deletions, and modifications, and it helps in tracing future
requirements, design, and code volatility. Volatility can be high in the
initial phase of software development; it should decrease as the project
progresses so that further development is not affected. This metric is a good
indicator for other metrics: for example, if a specific portion of the software
is tested without knowing the volatility of the requirements, that testing may
not be worthwhile (3).
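A sketch of a simple volatility figure for one reporting period (the counts are hypothetical, and the source does not prescribe a single formula; this is one common formulation):

```python
def volatility(added: int, deleted: int, modified: int, baseline: int) -> float:
    """Requirement changes in the period relative to the baseline count."""
    return (added + deleted + modified) / baseline

print(f"{volatility(added=6, deleted=2, modified=10, baseline=120):.0%}")  # 15%
```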
NASA uses quality indicators such as imperatives (for example, 'shall' and 'is
required to'), continuances (for example, 'as follows' and 'listed'), weak
phrases (such as 'as a minimum' and 'easy'), and option words (such as 'can'
and 'may'). These indicators are used in the four metrics discussed earlier,
namely size, traceability, completeness, and volatility (3).
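In that spirit, a hedged sketch that counts the four kinds of indicator terms in a requirements text (the word lists are abbreviated samples, not the full NASA categories):

```python
import re

INDICATORS = {
    "imperatives": ["shall", "is required to", "must"],
    "continuances": ["as follows", "listed"],
    "weak phrases": ["as a minimum", "easy"],
    "option words": ["can", "may"],
}

text = "The system shall archive logs. Reports may be exported as follows: ..."

for category, terms in INDICATORS.items():
    hits = sum(len(re.findall(rf"\b{re.escape(t)}\b", text, re.I)) for t in terms)
    print(f"{category}: {hits}")
# imperatives: 1, continuances: 1, weak phrases: 0, option words: 1
```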


Bibliography
1. Bell, Douglas. Software Engineering for Students. Addison-Wesley, 2004.
ISBN 0-321-26127-5.
2. Pressman, Roger S. Software Engineering: A Practitioner's Approach.
McGraw-Hill, 2001. ISBN 0-07-365578-3.
3. Ali, Mohammed Javeed. Metrics for Requirements Engineering. Master's thesis,
Department of Computing Science, Umeå University, Sweden, June 15, 2006. 61 pp.
4. Haddad, Hisham M., Ross, Nancy C., and Meredith, Donald E. A Framework for
Instituting Software Metrics in Small Software Organizations. Computer Science
Department, Kennesaw State University, Kennesaw, GA (USA), 2003. 30 pp.
