

Introduction to Software Engineering

Software Engineering Process paradigms - Project management -Process and Project Metrics – software estimation
- Empirical estimation models - Planning- Risk analysis - Software project scheduling.


1. Software Engineering Process paradigms

"Paradigm", a Greek word meaning "example", is commonly used to refer to a category of entities that share a
common characteristic. Software engineering paradigms are also known as Software engineering models or
Software Development Models.

In order to reduce the potential chaos (confusion) of developing software applications and systems, we use
software process models and paradigms that describe the tasks that are required for the building of high-quality
software systems. The specific process model or paradigms used to develop a given system depends heavily on
the nature of the target system. Using software paradigms in the development of software processes has many
benefits, including support for a systematic approach and the use of standard methods and methodologies.

The software engineering paradigm, also referred to as a software process model or Software Development Life
Cycle (SDLC) model, is the development strategy that encompasses the process, methods and tools. The SDLC
describes the period of time that starts when the software system is conceived and ends when the system is
discarded after use.

The following are the major software paradigms

1. The waterfall model

2. Prototype model
3. Increment model
4. Evolutionary process models

a) Waterfall model

The waterfall model is also called the linear sequential model or the classic life cycle model. It is the oldest
software paradigm. This model suggests a systematic, sequential approach to software development.

In the waterfall model each phase must be completed before the next phase can begin; there is no overlapping
of phases.

Software development starts with the requirements gathering phase and then proceeds through analysis, design,
coding, testing and maintenance. The diagram below illustrates the waterfall model.

1 © Prepared By Dodda.Venkata Reddy M.Tech






Fig: The linear sequential model / waterfall model

Requirement Specification and Analysis

The aim of the requirement analysis and specification phase is to understand the exact requirements of the
customer and document them properly. This phase consists of two different activities.

1. Requirement gathering and analysis

2. Requirement specification


Design

The aim of the design phase is to transform the requirements specified in the SRS document into a structure that
is suitable for implementation in some programming language.


Coding and Unit Testing

In the coding phase the software design is translated into source code using a suitable programming language;
thus each designed module is coded. The aim of the unit testing phase is to check whether each module is
working properly.



Testing

Testing begins when coding is done. While performing testing, the major focus is on the logical internals of the
software, ensuring that all paths and the fundamental behavior are exercised. The purpose of testing is to uncover
errors, fix the bugs, and meet the customer's requirements.


Maintenance

Maintenance is the most important phase of the software life cycle. The effort spent on maintenance is around
60% of the total effort spent to develop the full software. There are basically three types of maintenance:

1) Corrective Maintenance: This type of maintenance is carried out to correct errors that were not
discovered during the product development phase.
2) Perfective Maintenance: This type of maintenance is carried out to enhance the functionalities of the
system based on the customer’s request.
3) Adaptive Maintenance: Adaptive maintenance is usually required for porting the software to work in a
new environment such as work on a new computer platform or with a new operating system.

When to use the waterfall model

1) Requirements are very well known, clear and fixed

2) Product definition is stable.
3) Technology is understood
4) There are no ambiguous requirements
5) The project is short.


Advantages

1) Easy to understand even by non-technical persons

2) Each phase has well defined inputs and outputs
3) Each phase of development proceeds sequentially
4) Helps the project manager in proper planning of project
5) Helps in controlling schedules, budgets and documentation.


Disadvantages

1) Requirements need to be specified before development proceeds.

2) Users have little interaction with the project
3) Once the development process starts, changes cannot be accommodated easily
4) It is a time-consuming and costly model.


b) Prototype model

The Prototyping Model is a systems development method (SDM) in which a prototype (an early approximation
of a final system or product) is built, tested, and then reworked as necessary until an acceptable prototype is
finally achieved from which the complete system or product can now be developed. This model works best in
scenarios where not all of the project requirements are known in detail ahead of time. It is an iterative, trial-and-
error process that takes place between the developers and the users.

In the prototyping model, requirements gathering is done first. The developer and customer define the overall
objectives and identify the areas where further requirements gathering is needed.

Then a quick design is prepared. This quick design represents the aspects of the software that will be visible to
the user (the input and output formats).

From the quick design a prototype is built. The customer or user evaluates the prototype in order to refine the
requirements, and the prototype is iteratively tuned to satisfy the customer's requirements.

Once the working prototype has served its purpose, the developer typically discards it and rebuilds the system to
a high quality standard, making use of existing program fragments or application generators.

When to choose it?

 For software applications that are relatively easy to prototype and almost always involve human-machine
interaction, the prototyping model is suggested.
 When the general objectives of the software are defined but the detailed input, processing or output
requirements are not, the prototyping model is useful.
 When the developer is unsure of the efficiency of an algorithm or the adaptability of an operating system,
a prototype serves as a better choice.


Drawbacks of prototyping

 On seeing the first version, the customer often wants a "few fixes" rather than a rebuild of the system,
whereas only rebuilding maintains a high level of quality.
 The first version may contain some compromises.
 Developers may make implementation compromises to get the prototype working quickly. Later, they may
become comfortable with these compromises and forget why they are inappropriate.

C) Incremental Models

The incremental process model is also known as the successive version model.

First, a simple working system implementing only a few basic features is built and then that is delivered to the
customer. Then thereafter many successive iterations/ versions are implemented and delivered to the customer
until the desired system is realized.

A, B, C are modules of Software Product that are incrementally developed and delivered.

Each iteration passes through the requirements, design, coding and testing phases. And each subsequent release of
the system adds function to the previous release until all designed functionality has been implemented.


The system is put into production when the first increment is delivered. The first increment is often a core
product in which the basic requirements are addressed; supplementary features are added in subsequent
increments. Once the core product has been evaluated by the client, a plan is developed for the next increment.

When to use this model

 When there are funding schedule constraints, risks, program complexity, or a need for early realization of
benefits.
 When requirements are known up front.
 When projects have lengthy development schedules.
 For projects involving new technology.


Advantages

 Error reduction (core modules are used by the customer from the beginning and are then tested
thoroughly)
 Uses divide and conquer for breakdown of tasks.
 Lowers initial delivery cost.
 Incremental Resource Deployment.


Disadvantages

 Requires good planning and design.

 Total cost is not lower.
 Well defined module interfaces are required.

Rapid application development model (RAD)

The Rapid Application Development (RAD) model was first proposed by IBM in the 1980s. The critical feature
of this model is the use of powerful development tools and techniques.

A software project can be implemented using this model if the project can be broken down into small modules
wherein each module can be assigned independently to separate teams. These modules can finally be combined
to form the final product.

Development of each module involves the various basic steps as in waterfall model i.e. analyzing, designing,
coding and then testing, etc. as shown in the figure.

Another striking feature of this model is a short time span i.e. the time frame for delivery (time-box) is generally
60-90 days.


Following are the various phases of the RAD Model

Business Modeling

In business modeling, the information flow among the business functions is modeled by answering the following
questions:

1. What information drives the business process?

2. What information is generated?
3. Who generates it?
4. Where does the information go?
5. Who processes it?

Data Modeling

In this phase the information obtained in the business model is refined into a set of data objects. The
characteristics (attributes) of each data object are identified, and the relationships between the data objects are
defined.

Process Modeling

In this phase the data objects are transformed into processes. These processes extract the information from the
data objects and are responsible for implementing the business functions of the software.

Application Generation


For creating the software, various automation tools can be used. RAD also makes use of reusable components of
existing software.

Testing and Turnover

As RAD uses reusable components, the testing effort is reduced. But if new components are added during
software development, they need to be tested. It is equally important to test all interfaces.

D) Spiral Model

The spiral model is a combination of the waterfall model and the iterative model. Each phase in the spiral model
begins with a design goal and ends with the client reviewing the progress. The spiral model was first described by
Barry Boehm in his 1986 paper.

The development team in the spiral model starts with a small set of requirements and goes through each
development phase for that set. The team adds functionality for additional requirements in ever-increasing
spirals until the application is ready for the production phase.


Spiral Model Phases

Planning: Includes estimating the cost, schedule and resources for the iteration. It also involves understanding
the system requirements through continuous communication between the system analyst and the customer.

Risk Analysis: Identification of potential risks is done, while a risk mitigation strategy is planned and
implemented.

Engineering: Includes testing, coding and deploying the software at the customer site.

Evaluation: Evaluation of the software by the customer. Also includes identifying and monitoring risks such as
schedule slippage and cost overrun.

When to use Spiral Methodology?

 When project is large

 When releases are required to be frequent

 When creation of a prototype is applicable

 When risk and costs evaluation is important

 For medium to high-risk projects

 When requirements are unclear and complex

 When changes may require at any time

 When long term project commitment is not feasible due to changes in economic priorities


Advantages and Disadvantages of Spiral Model

Advantages

 Additional functionality or changes can be made at a later stage
 Cost estimation becomes easy as prototype building is done in small fragments
 Continuous or repeated development helps in risk management
 Development is fast and features are added in a systematic way
 There is always space for customer feedback

Disadvantages

 Risk of not meeting the schedule or budget
 It works best for large projects only and also demands risk-assessment expertise
 For its smooth operation, the spiral model protocol needs to be followed strictly
 Documentation is extensive because of the intermediate phases
 It is not advisable for smaller projects, where it might cost a lot

2. Project Scheduling
Project scheduling is an important step in the software development process. Software project managers often
use scheduling to perform preliminary time and resource estimates and to provide general guidance and analysis
of project progress.

One of the major challenges in software project management is that it is difficult to adhere to schedules because
of the uncertainties related to requirements, personnel, tools, architectures, budgets, etc.

In order to schedule project activities, a software manager should follow these basic principles.

Compartmentalization: The project must be compartmentalized into a number of manageable activities and
tasks. To accomplish compartmentalization, both the product and the process are refined.

Interdependency: The interdependency of each compartmentalized activity or task must be determined. Some
tasks must occur in sequence, while others can occur in parallel. Some activities cannot commence until the work
product produced by another is available.

Time allocation: Each task to be scheduled must be allocated some number of work units (e.g., person-days of
effort). In addition, each task must be assigned a start date and a completion date that are a function of the
interdependencies and whether work will be conducted on a full-time or part-time basis.


Effort validation: Every project has a defined number of people on the software team. As time allocation occurs,
you must ensure that no more than the allocated number of people has been scheduled at any given time.

Defined responsibilities: Every task that is scheduled should be assigned to a specific team member.

Defined outcomes: Every task that is scheduled should have a defined outcome.

Defined milestones: Every task or group of tasks should be associated with a project milestone. A milestone is
accomplished when one or more work products has been reviewed for quality and has been approved.
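The compartmentalization, interdependency, and time-allocation principles above can be sketched as a small earliest-start computation. The task names and durations below are illustrative, not from the text:

```python
# Sketch of the compartmentalization / interdependency / time-allocation
# principles: each task gets a duration (work units) and a list of
# prerequisite tasks; earliest start times then follow from the
# dependencies. Task names and durations are illustrative only.

def earliest_start(tasks, deps):
    """tasks: {name: duration}; deps: {name: [prerequisites]}.
    Returns {name: earliest start}, assuming unlimited staffing."""
    start = {}

    def resolve(name):
        if name not in start:
            # A task can begin only after every prerequisite finishes.
            start[name] = max(
                (resolve(d) + tasks[d] for d in deps.get(name, [])),
                default=0,
            )
        return start[name]

    for name in tasks:
        resolve(name)
    return start

tasks = {"analysis": 3, "design": 4, "coding": 6, "testing": 3}
deps = {"design": ["analysis"], "coding": ["design"], "testing": ["coding"]}
print(earliest_start(tasks, deps))
# -> {'analysis': 0, 'design': 3, 'coding': 7, 'testing': 13}
```

Tasks with no dependency between them (parallel work) would simply receive the same earliest start; a real scheduler would additionally enforce effort validation (staffing limits).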

The Relationship between People and Effort

In a small software development project, a single person can analyze requirements, perform design, generate
code, and conduct tests. As the size of a project increases, more people must become involved. Adding people
late in a project often has a disruptive effect, causing the schedule to slip even further.

An Empirical equation

The relationship between the chronological time to complete a project and the human effort applied to the
project is highly nonlinear. The number of delivered lines of code is given by the software equation:

L = P x E^(1/3) x t^(4/3)

where L is the number of delivered lines of code, E is the development effort in person-months, P is a
productivity parameter that reflects the factors that lead to high-quality software engineering work, and t is the
project duration in calendar months.
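This relationship is Putnam's software equation, L = P x E^(1/3) x t^(4/3); rearranged, it estimates effort from size, productivity, and schedule as E = L^3 / (P^3 t^4). A sketch with illustrative numbers (the 33,200 LOC size and P = 12,000 are example values only; with t in years, E comes out in person-years):

```python
# Putnam's software equation L = P * E**(1/3) * t**(4/3), rearranged to
# solve for effort: E = L**3 / (P**3 * t**4). All figures below are
# illustrative assumptions, not values from the text.
def putnam_effort(loc, productivity, t):
    return loc ** 3 / (productivity ** 3 * t ** 4)

effort = putnam_effort(loc=33_200, productivity=12_000, t=2.0)
print(round(effort, 2))  # roughly 1.32 person-years
```

Note the strong nonlinearity: because t appears to the fourth power in the denominator, modestly compressing the schedule inflates the required effort dramatically.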

Effort distribution

Each of the software project estimation techniques leads to an estimate of the work units (e.g., person-months)
required to complete software development. A recommended distribution of effort across the definition and
development phases is often referred to as the 40-20-40 rule: roughly 40% of the effort for front-end analysis
and design, 20% for coding, and 40% for back-end testing.
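A minimal sketch of the 40-20-40 rule (roughly 40% of effort on front-end analysis and design, 20% on coding, 40% on back-end testing; the 50 person-month total is a made-up example):

```python
# The 40-20-40 rule: 40% of effort for front-end analysis and design,
# 20% for coding, 40% for back-end testing. The project total is
# illustrative only.
def distribute_effort(total_person_months):
    return {
        "analysis_and_design": 0.40 * total_person_months,
        "coding": 0.20 * total_person_months,
        "testing": 0.40 * total_person_months,
    }

print(distribute_effort(50))  # a hypothetical 50 person-month project
```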

3. Software project planning / Project planning Activities

Software project management begins with a set of activities that are collectively called project planning. The
objective of software project planning is to provide a framework that enables the project manager to make
reasonable estimates of resources, cost, and schedule. There are three steps in software project planning.

1. Software Scope

Software scope describes the functions and features that are to be delivered to end users; the data that are input
and output; the "content" that is presented to users as a consequence of using the software; and the
performance, constraints, interfaces, and reliability that bound the system. Scope is defined using one of two
techniques:
1. A narrative description of software scope is developed after communication with all stakeholders.

2. A set of use cases is developed by end users.


2. Feasibility

Once the scope of the software has been identified, you need to study whether the project is feasible within the
determined scope.

3. Resources Estimation

Once the scope and feasibility study are complete, the next step in software project planning is to estimate the
resources required to develop the new project. The required resources are described below.

Each resource is specified with four characteristics:

 Description of the resource,

 A statement of availability,
 Time when the resource will be required, and
 Duration of time that the resource will be applied.

a) Human Resources

The planner begins by evaluating software scope and selecting the skills required to complete development. Both
organizational position (e.g., manager, senior software engineer) and specialty (e.g., telecommunications,
database, client-server) are specified. For relatively small projects (a few person-months), a single individual may
perform all software engineering tasks, consulting with specialists as required. For larger projects, the software
team may be geographically dispersed across a number of different locations. Hence, the location of each human
resource is specified.


The number of people required for a software project can be determined only after an estimate of development
effort (e.g., person-months) is made.

b) Reusable software resources

Reusability of software resources is very important in software project planning. Software building blocks must
be catalogued for easy reference, standardized for easy application, and validated for easy integration. Four
software resource categories should be considered.

Off-the-shelf components: Existing software that can be acquired from a third party or from a past project. COTS
(commercial off-the-shelf) components are purchased from a third party, are ready for use on the current project,
and have been fully validated.

Full-experience components: Existing specifications, designs, code, or test data developed for past projects that
are similar to the software to be built for the current project. Members of the current software team have had
full experience in the application area represented by these components. Therefore, modifications required for
full-experience components will be relatively low risk.

Partial-experience components: Existing specifications, designs, code, or test data developed for past projects that
are related to the software to be built for the current project but will require substantial modification. Members
of the current software team have only limited experience in the application area represented by these
components. Therefore, modifications required for partial-experience components have a fair degree of risk.

New components: Software components must be built by the software team specifically for the needs of the
current project.

c) Environmental Resources

The environment that supports a software project, often called the software engineering environment (SEE),
incorporates hardware and software. The hardware provides the platform that supports the software.

4. Software Estimation / software project estimation / Software Effort Estimation

Estimation is the process of finding an estimate, or approximation, which is a value that can be used for some
purpose even if input data may be incomplete.

An estimate is a prediction, or rough idea, of how much effort it would take to complete a defined task, where
effort is measured in time or cost.

In the early days of computing, software cost constituted a small percentage of the cost of the overall
computer-based system. Today, software is the most expensive element of most computer-based systems.
Software cost and effort estimates can vary widely because too many variables (human, technical,
environmental) can affect the ultimate cost of software.

To achieve reliable cost and effort estimates, a number of options arise:

1. Delay estimation until late in the project.

2. Base estimates on similar projects that have already been completed.


3. Use simple decomposition techniques to generate project cost and effort estimation.

4. Use one or more empirical models for software cost and effort estimation.

The first option is attractive but not practical: cost estimates must be provided "up front".

The second option can work reasonably well, but past experience is not always a good indicator of future results.

The remaining options are viable approaches for software estimation. Decomposition techniques take “divide
and conquer” approach. Empirical models can be used to complement decomposition techniques.

Decomposition techniques in software estimation

Software project estimation is a form of problem solving, and in most cases, the problem to be solved is too
complex to be considered in one piece. For this reason, we decompose the problem, re-characterizing it as a set
of smaller problems.

1. Software Sizing: Before an estimate can be made, the project planner must understand the scope of the
software to be built and generate an estimate of its "size".

In the context of project planning, size refers to a quantifiable outcome of the software project. If a direct
approach is taken, size can be measured in LOC. If an indirect approach is chosen, size is represented as FP. There
are four different approaches to the sizing problem:

 "Fuzzy logic" sizing: the planner identifies the type of application and estimates its size on a qualitative
scale, later refining the estimate within the original range.
 Function point sizing: the planner develops estimates of the information domain characteristics from
which function points are computed.
 Standard component sizing: the software is composed of standard components (e.g., subsystems,
modules, screens, reports) that are generic to a particular application area; the planner estimates the
number of occurrences of each component.
 Change sizing: used when existing software is to be modified; the planner estimates the number and
type of modifications (reuse, adding, changing or deleting code) that must be accomplished.

2. Problem based Estimation:

Lines of code and function points were described as measures from which productivity metrics can be computed.
LOC and FP data are used in two ways during software project estimation: (1) as an estimation variable to "size"
each element of the software and (2) as baseline metrics collected from past projects and used in conjunction
with estimation variables to develop cost and effort projections LOC and FP estimation are distinct estimation

Yet both have a number of characteristics in common. The project planner begins with a bounded statement of
software scope and from this statement attempts to decompose software into problem functions that can each be
estimated individually. LOC or FP (the estimation variable) is then estimated for each function. Alternatively, the
planner may choose another component for sizing such as classes or objects.
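For each decomposed function, planners commonly form a three-point (expected value) estimate of the estimation variable, EV = (s_opt + 4 s_m + s_pess) / 6, and then apply a historical productivity figure. The function names, LOC ranges, and the 620 LOC/person-month productivity below are illustrative assumptions:

```python
# Three-point (expected value) sizing for problem-based estimation:
# EV = (optimistic + 4 * most_likely + pessimistic) / 6.
# Function names, LOC ranges, and the productivity figure are made up.
def expected_size(optimistic, most_likely, pessimistic):
    return (optimistic + 4 * most_likely + pessimistic) / 6

functions = {
    "user interface":      (1800, 2400, 2650),
    "database management": (4600, 6900, 8600),
    "report generation":   (1200, 1600, 2000),
}
total_loc = sum(expected_size(*est) for est in functions.values())
effort_pm = total_loc / 620  # assumed historical productivity, LOC/pm
print(round(total_loc), round(effort_pm, 1))
```

The same decomposition works with FP as the estimation variable; only the per-function estimates and the productivity baseline (FP/person-month) change.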

3. Process based Estimation:

The most common technique for estimating a project is to base the estimate on the process that will be used.
That is, the process is decomposed into a relatively small set of tasks and the effort required to accomplish each
task is estimated.


Like the problem‐based techniques, process‐based estimation begins with a delineation of software functions
obtained from the project scope.

A series of software process activities must be performed for each function. Functions and related software
process activities may be represented as part of a table similar to the one presented. Once problem functions and
process activities are melded, the planner estimates the effort (e.g., person-months) that will be required to
accomplish each software process activity for each software function.

Senior staff, who are heavily involved in early activities, are generally more expensive than junior staff involved
in later design tasks and code generation.
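The function-by-activity table described above can be sketched as a small computation; the functions, activities, and person-month figures below are illustrative, not from the text:

```python
# Process-based estimation sketch: effort (person-months) is estimated
# for each software function under each process activity, then summed
# per activity and overall. All figures are made up for illustration.
activities = ["analysis", "design", "code", "test"]
estimates = {
    "user interface": [0.50, 2.50, 0.40, 5.00],
    "database":       [0.75, 4.00, 0.60, 2.00],
    "reports":        [0.50, 4.00, 1.00, 3.00],
}
per_activity = {
    act: sum(row[i] for row in estimates.values())
    for i, act in enumerate(activities)
}
total_effort = sum(per_activity.values())
print(per_activity, total_effort)
```

Applying average labor rates per activity to such a table yields the cost estimate, which is where the senior-versus-junior staffing cost difference enters.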

5. Empirical models
An estimation model for computer software uses empirically derived formulas to predict effort as a function of
LOC and FP. The empirical data that support most estimation models are derived from a limited sample of
projects, so no estimation model is appropriate for all classes of software and in all development environments.

The structure of estimation models: A typical estimation model is derived using regression analysis on data
collected from past projects. The structure of such models takes the form

E = A + B x (ev)^C

where A, B, and C are empirically derived constants, E is the effort in person-months, and ev is the estimation
variable (LOC or FP).

LOC oriented models

E = 5.2 * (KLOC)^0.91          Walston-Felix model

E = 5.5 + 0.73 * (KLOC)^1.16   Bailey-Basili model

E = 3.2 * (KLOC)^1.05          Boehm simple model

E = 5.288 * (KLOC)^1.047       Doty model

FP oriented models

E = -13.39 + 0.0545 * FP             Albrecht and Gaffney model

E = 60.62 * 7.728 * 10^-8 * FP^3     Kemerer model

E = 585.7 + 15.12 * FP               Matson, Barnett and Mellichamp model
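These regression models can be coded directly. Applying the LOC-oriented models to the same hypothetical 33.2 KLOC project shows how widely their estimates diverge, which is why any such model must be calibrated against local historical data before use:

```python
# The regression models above (E in person-months, size in KLOC or FP).
# The 33.2 KLOC project size is a hypothetical example.
def walston_felix(kloc):   return 5.2 * kloc ** 0.91
def bailey_basili(kloc):   return 5.5 + 0.73 * kloc ** 1.16
def boehm_simple(kloc):    return 3.2 * kloc ** 1.05
def doty(kloc):            return 5.288 * kloc ** 1.047

def albrecht_gaffney(fp):  return -13.39 + 0.0545 * fp
def kemerer(fp):           return 60.62 * 7.728e-8 * fp ** 3
def matson_barnett(fp):    return 585.7 + 15.12 * fp

for model in (walston_felix, bailey_basili, boehm_simple, doty):
    print(f"{model.__name__}: {model(33.2):.1f} person-months")
```

For this input the four LOC models disagree by more than a factor of four, underscoring that the constants only reflect the project samples they were fitted on.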


The COCOMO model: The Constructive Cost Model (COCOMO) is a procedural software cost estimation
model developed by Barry W. Boehm. The model parameters are derived from fitting a regression formula
using data from historical projects.

The hierarchy of COCOMO models takes the following form:

Model 1. The Basic COCOMO model is a static, single-valued model that computes software development effort
(and cost) as a function of program size expressed in estimated lines of code (LOC).

Model 2. The Intermediate COCOMO model computes software development effort as a function of program
size and a set of "cost drivers" that include subjective assessments of product, hardware, personnel and project
attributes.

Model 3. The Advanced COCOMO model incorporates all characteristics of the intermediate version with an
assessment of the cost driver's impact on each step (analysis, design, etc.) of the software engineering process.

The COCOMO models are defined for three classes of software projects. Using Boehm's terminology these are:

(1) Organic mode–relatively small, simple software projects.

(2) Semi-detached mode –an intermediate (in size and complexity) software projects.

(3) Embedded mode –a software project that must be developed within a set of tight hardware, software and
operational constraints.

The Basic COCOMO equations take the form:

E = a_b (KLOC)^b_b

D = c_b (E)^d_b

where E is the effort applied in person-months, D is the development time in chronological months, and KLOC
is the estimated number of delivered lines of code for the project (expressed in thousands). The coefficients a_b
and c_b and the exponents b_b and d_b are constants whose values depend on the class of project (organic,
semi-detached or embedded).

The intermediate COCOMO model takes the form:

E = a_i (KLOC)^b_i x EAF

where E is the effort applied in person-months and KLOC is the estimated number of delivered lines of code
for the project. The coefficient a_i and the exponent b_i again depend on the class of project, and EAF is the
Effort Adjustment Factor computed from the ratings of the cost drivers.
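A minimal sketch of the COCOMO computation, using Boehm's published Basic-model coefficient values for the three project classes. Leaving the EAF parameter at 1.0 gives the basic model; supplying an EAF computed from the cost-driver ratings approximates the intermediate model (whose published coefficients differ slightly):

```python
# Basic COCOMO with Boehm's (a, b, c, d) coefficients per project class.
# eaf defaults to 1.0 (basic model); a cost-driver-derived EAF
# approximates the intermediate model.
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def cocomo(kloc, mode="organic", eaf=1.0):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b * eaf   # E, person-months
    duration = c * effort ** d     # D, chronological months
    return effort, duration

e, d = cocomo(32, "organic")
print(f"E = {e:.1f} pm, D = {d:.1f} months")
```

For a 32 KLOC organic project this yields roughly 91 person-months over about 14 months, so the implied average staffing is E/D, around six and a half people.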


6. Project Management / Software project Management Role / Software Management
Software project management is an art and science of planning and leading software projects. It is a sub-discipline
of project management in which software projects are planned, implemented, monitored and controlled.

Need of software project management

Software is an intangible product. Software development is a relatively new stream in world business, and there
is very little accumulated experience in building software products. Most software products are tailor-made to fit
a client's requirements.

Most importantly, the underlying technology changes and advances so frequently and rapidly that experience
with one product may not apply to another. All such business and environmental constraints bring risk to
software development; hence it is essential to manage software projects efficiently.

A software project manager is a person who undertakes the responsibility of executing the software project and
is thoroughly aware of all the phases of the SDLC that the software will go through.

The role and responsibility of a software project manager

Software project managers may have to do any of the following tasks:

1. Planning: This means putting together the blueprint for the entire project from ideation to fruition. It will
define the scope, allocate necessary resources, propose the timeline, delineate the plan for execution, lay
out a communication strategy, and indicate the steps necessary for testing and maintenance.

2. Leading: A software project manager will need to assemble and lead the project team, which likely will
consist of developers, analysts, testers, graphic designers, and technical writers. This requires excellent
communication, people and leadership skills.

3. Execution: The project manager will participate in and supervise the successful execution of each stage of
the project. This includes monitoring progress, frequent team check-ins and creating status reports.

4. Time management: Staying on schedule is crucial to the successful completion of any project, but it’s
particularly challenging when it comes to managing software projects because changes to the original plan


are almost certain to occur as the project evolves. Software project managers must be experts in risk
management and contingency planning to ensure forward progress when roadblocks or changes occur.

5. Budget: Like traditional project managers, software project managers are tasked with creating a budget
for a project and then sticking to it as closely as possible, moderating spend and re-allocating funds when
necessary.

6. Maintenance: Software project management typically encourages constant product testing in order to
discover and fix bugs early, adjust the end product to the customer’s needs, and keep the project on
target. The software project manager is responsible for ensuring proper and consistent testing, evaluation
and fixes are being made.

How to manage a software project successfully?

A recent article in Forbes suggests that there are eight ways to improve and streamline the software project
management process; these eight suggestions include:

 Take non-development work off your team's plate to let them focus on developing

 Motivate your team by sharing others' success stories, like those of tech giants, to inspire and excite
them

 Avoid altering a task once it's assigned

 Try to stick to the plan (until it needs to be changed)

 Encourage organization by being organized

 Streamline productivity through effective delegation

 Get to know your team and build a rapport

 Break down the plan and give team members specific daily tasks

Software Management Activities

Software project management comprises a number of activities, including project planning, deciding the scope
of the software product, estimating cost in various terms, scheduling tasks and events, and resource
management. Project management activities may include:

 Project Planning

 Scope Management

 Project Estimation


7. Process and Project Metrics
Software process and project metrics are quantitative measures that enable you to gain insight into the efficacy of
the software process and the projects that are conducted using the process as a framework. Basic quality and
productivity data are collected. These data are then analyzed, compared against past averages, and assessed to
determine whether quality and productivity improvements have occurred. Metrics are also used to pinpoint
problem areas so that remedies can be developed and the software process can be improved.

Metrics in the process and project domains

Metrics should be collected so that process and product indicators can be ascertained

1. Process metrics provide indicators that lead to long-term process improvement
2. Project metrics enable project manager to
 Assess status of ongoing project
 Track potential risks
 Uncover problem areas before they become critical
 Adjust work flow or tasks
 Evaluate the project team’s ability to control quality of software work products

1. Process Metrics

Process metrics are the set of process indicators that are used to improve the software processes. Process metrics are
collected over the complete software life cycle. The software process can be improved with the help of process metrics.

The process sits at the center of a triangle connecting three factors that have a profound influence on software
quality and organizational performance. The skill and motivation of the people doing the work has been shown to
be the single most influential factor in quality and performance. The complexity of the product can have a
substantial impact on quality and team performance. The technology (i.e., the software engineering methods and
tools) that populates the process also has an impact.


We can measure the effectiveness of a process by deriving a set of metrics based on outcomes of the process, such as:

 Errors uncovered before release of the software

 Defects delivered to and reported by end users,
 Work products delivered (productivity),
 Human effort expended,
 Calendar time expended,
 Schedule conformance

Process Metrics Guidelines

 Use common sense and organizational sensitivity when interpreting metrics data.
 Provide regular feedback to the individuals and teams who collect measures and metrics.
 Don’t use metrics to appraise individuals.
 Work with practitioners and teams to set clear goals and metrics that will be used to achieve them.
 Never use metrics to threaten individuals or teams.
 Metrics data that indicate a problem area should not be considered “negative.” These data are merely
an indicator for process improvement.
 Don’t obsess on a single metric to the exclusion of other important metrics.

2. Project Metrics

 A software team can use software project metrics to adapt project workflow and technical activities.
 Project metrics are used to avoid development schedule delays, to mitigate potential risks, and to assess
product quality on an on-going basis.
 Every project should measure its inputs (resources), outputs (deliverables), and results (effectiveness of
the deliverables).
The intent of project metrics is twofold:

1. To minimize the development schedule by making the adjustments necessary to avoid delays and mitigate
potential problems and risks.

2. To assess product quality on an ongoing basis and, when necessary, modify the technical approach to
improve quality.

 As quality improves, defects are minimized, and as the defect count goes down, the amount of rework
required during the project is also reduced. This leads to a reduction in overall project cost.

3. Software Quality Measurement

Measurements in the physical world can be categorized in two ways: direct measures and indirect measures.

 Direct measures of the product include lines of code (LOC), execution speed, memory size, and defects
reported over some time period; direct measures of the process include cost and effort applied.

 Indirect measures of the product examine the quality of the software itself (e.g. functionality, complexity,
efficiency, reliability, maintainability).


A) Size-Oriented Metrics:

 These are derived by normalizing (dividing) any direct measure (e.g. defects or human effort) associated
with the product or project by LOC.
 Size-oriented metrics are widely used, but their validity and applicability are widely debated.

 A software organization can maintain simple records in tabular form, as shown in the figure.

 The table lists each software development project that has been completed over the past few years and
corresponding measures for that project.
 In order to develop metrics that can be compared with similar metrics from other projects, we choose
lines of code as our normalization value.
 Errors per KLOC (thousand lines of code)
 Defects per KLOC
 $ per LOC
 Pages of documentation per KLOC
 LOC measures are programming language dependent.
 Their use in estimation requires a level of detail that may be difficult to achieve
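As a hedged sketch, the normalized size-oriented metrics listed above can be computed from a simple project record. The record below is hypothetical (field names and figures are illustrative, not taken from the text):

```python
# Sketch of size-oriented metrics: direct measures normalized by size.
# KLOC = thousands of lines of code. All project figures are hypothetical.

def size_oriented_metrics(loc, errors, defects, cost, doc_pages):
    """Normalize direct measures of a project by its size in LOC/KLOC."""
    kloc = loc / 1000.0
    return {
        "errors_per_kloc": errors / kloc,
        "defects_per_kloc": defects / kloc,
        "cost_per_loc": cost / loc,
        "doc_pages_per_kloc": doc_pages / kloc,
    }

# Hypothetical project record: 12,100 LOC, 134 errors, 29 defects,
# $168,000 total cost, 365 pages of documentation.
m = size_oriented_metrics(12100, 134, 29, 168000, 365)
print(round(m["errors_per_kloc"], 2))   # 11.07
```

Keeping such records for every completed project allows the averages to be compared across projects, which is the point of normalizing by KLOC in the first place.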


Advantages of LOC:

 LOC is an artifact of software development that is easily counted

 Many existing methods use LOC as a key input
 A large body of literature and data based on LOC already exists.


Disadvantages of LOC:

 This measure is dependent upon the programming language.
 A well-designed but shorter program may be penalized.
 It does not accommodate non-procedural languages.
 In the early stages of development it is difficult to estimate LOC.


B) Function-Oriented Metrics:

 Function points are computed from direct measures of the information domain of a business software
application and assessment of its complexity.
 Once computed, function points are used like LOC to normalize measures for software productivity,
quality, and other attributes.
 The relationship of LOC and function points depends on the language used to implement the software.
 It uses a measure of functionality delivered by the application as a normalization value.
 Since ‘functionality’ cannot be measured directly, it must be derived indirectly using other direct measures
 Function Point (FP) is widely used as function oriented metrics.
 FP is based on characteristic of Software information domain.
 FP is programming language independent.

FP - Five information domain characteristics

Measurement parameter            Weighting factor
                                 Simple   Average   Complex
Number of user inputs            x 3      x 4       x 6
Number of user outputs           x 4      x 5       x 7
Number of user inquiries         x 3      x 4       x 6
Number of files                  x 7      x 10      x 15
Number of external interfaces    x 5      x 7       x 10

Each information domain count is multiplied by the appropriate weighting factor and the weighted values are
summed to obtain the count total.

 Number of user inputs - Each user input that provides distinct data to the software is counted
 Number of user outputs - Each user output that provides information to the user is counted. Output
refers to reports, screens, error messages, etc
 Number of user inquiries - An inquiry is defined as an on-line input that results in the generation of
some immediate software response in the form of an on-line output. Each distinct inquiry is counted.
 Number of files -Each logical master file (i.e. large database or separate file) is counted.
 Number of external interfaces - All machine-readable interfaces (e.g., data files on storage media) that
are used to transmit information to another system are counted.
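To make the computation concrete, here is a small sketch of how a function-point count is typically derived. The weights are the "average" column of the weighting table; the adjustment formula FP = count-total × (0.65 + 0.01 × ΣFi), with fourteen value-adjustment factors Fi each rated 0 to 5, is the standard one from function-point analysis and is not stated in the text above. All counts and Fi ratings below are hypothetical.

```python
# Sketch of a function-point (FP) computation. Weights are the "average"
# column of the weighting table; counts and Fi ratings are hypothetical.

AVERAGE_WEIGHTS = {
    "inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7,
}

def function_points(counts, value_adjustment_factors):
    # Count total: each information-domain count times its weighting factor.
    count_total = sum(counts[k] * w for k, w in AVERAGE_WEIGHTS.items())
    # Standard FP adjustment: 14 factors Fi, each rated 0 (no influence)
    # to 5 (essential), folded into the multiplier 0.65 + 0.01 * sum(Fi).
    return count_total * (0.65 + 0.01 * sum(value_adjustment_factors))

counts = {"inputs": 24, "outputs": 16, "inquiries": 22, "files": 4, "interfaces": 2}
fi = [4] * 12 + [2, 2]                      # 14 hypothetical ratings, sum = 52
print(round(function_points(counts, fi)))   # 372
```

Because the count is derived from the information domain rather than from code, the same computation can be performed from a requirements specification, which is why FP is usable earlier in a project than LOC.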



Advantages of FP:

 This method is independent of programming languages.
 It is based on data that can be obtained in an early stage of a project.

Disadvantages of FP:

 This method is most suitable for business systems and was developed for that domain.
 Many aspects of this method have not been validated.
 A function point has no direct physical meaning; it is just a numerical value.

C) Metric for software quality

 Measuring Quality
 It consists of four parameters:
1. Correctness
2. Maintainability
3. Integrity
4. Usability


Correctness:

 A program must operate correctly or it provides little value to its users.

 Correctness is the degree to which the software performs its required function.
 The most common measure for correctness is defects per KLOC, where a defect is defined as a verified
lack of conformance to requirements.
 When considering the overall quality of a software product, defects are those problems reported by a
user of the program


Maintainability:

Maintenance requires more effort than any other software engineering activity. Maintainability is the ease with
which a program can be corrected if an error is encountered, adapted if its environment changes, or enhanced if
the customer desires a change in requirements. There is no way to measure maintainability directly; therefore, we
must use indirect measures.


Integrity:

Software integrity has become increasingly important in the age of hackers and firewalls. This attribute measures a
system's ability to withstand attacks (both accidental and intentional) on its security. Attacks can be made on all
three components of software:

1. Programs
2. Data
3. Documents


To measure integrity, two additional attributes must be defined:

1. Threat
2. Security

Threat is the probability (which can be estimated or derived from practical evidence) that an attack of a
specific type will occur within a given time.

Security is the probability (which can be estimated or derived from practical evidence) that an attack of a
specific type will be repelled.

Integrity of a system can then be defined as

Integrity = Σ [1 – (threat × (1 – security))]

where threat and security are summed over each type of attack. For example, if the threat of a particular attack
type is 0.25 and the security against it is 0.95, the integrity of the system is approximately 0.99.
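As a sketch, the per-attack-type form 1 − threat × (1 − security) is the one consistent with Pressman's worked example (threat 0.25, security 0.95 gives roughly 0.99): it is the probability that an attack of that type either does not occur or is repelled. The attack data below are hypothetical.

```python
# Sketch of the integrity metric. For each attack type,
# contribution = 1 - threat * (1 - security), i.e. the probability that the
# attack either does not occur or is repelled. Attack data are hypothetical.

def integrity(attacks):
    """attacks: list of (threat, security) pairs, one per attack type.
    Returns the mean integrity over all attack types."""
    contributions = [1 - threat * (1 - security) for threat, security in attacks]
    return sum(contributions) / len(contributions)

# One attack type with threat 0.25 and security 0.95:
print(f"{integrity([(0.25, 0.95)]):.4f}")   # 0.9875, i.e. roughly 0.99
```

Averaging over attack types is one plausible way to combine the contributions into a single figure; the text itself only says they are "summed over each type of attack".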

Usability:

The phrase "user-friendliness" has become ubiquitous in discussions of software products. If a program is not user-
friendly, it is often doomed to failure, even if the functions it performs are valuable. Usability is an attempt
to quantify user-friendliness and can be measured in terms of four characteristics:

1. the physical and/or intellectual skill required to learn the system,

2. the time required to become moderately efficient in the use of the system

3. productivity measured when the system is used by someone who is moderately efficient

4. a subjective assessment (sometimes through a questionnaire) of users' attitudes toward the system.

Defect Removal Efficiency

A quality metric that provides benefit at both the project and process level is defect removal efficiency (DRE).
DRE is a measure of the filtering ability of quality assurance and control activities as they are applied throughout
all process framework activities. To compute DRE:

DRE = E / (E + D)

where E = the number of errors found before release of the software and D = the number of defects found after
release to end users.

The ideal value for DRE is 1. That is, no defects are found in the software. Realistically, D will be greater than 0,
but the value of DRE can still approach 1. As E increases (for a given value of D), the overall value of DRE begins
to approach 1. In fact, as E increases, it is likely that the final value of D will decrease (errors are filtered out
before they become defects). DRE encourages a software project team to institute techniques for finding as many
errors as possible before delivery. DRE can also be used within the project to assess a team’s ability to find errors
before they are passed to the next framework activity or software engineering task.

For example, the requirements analysis task produces an analysis model that can be reviewed to find and correct
errors. Those errors that are not found during the review of the analysis model are passed on to the design task.


When used in this context, we redefine DRE as

DREi = Ei/(Ei + Ei+1)

where Ei is the number of errors found during software engineering activity i, and Ei+1 is the number of errors
found during activity i+1 that are traceable to errors not discovered in activity i.

A quality objective for a software team is to achieve DREi that approaches 1. That is, errors should be filtered out
before they are passed on to the next activity.
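The two DRE formulas above can be sketched as follows; the error and defect counts are hypothetical:

```python
# Sketch of defect removal efficiency (DRE), both overall and per activity.
# All error/defect counts are hypothetical.

def dre(errors_before_release, defects_after_release):
    """Overall DRE = E / (E + D)."""
    e, d = errors_before_release, defects_after_release
    return e / (e + d)

def dre_per_activity(errors_by_activity):
    """DRE_i = E_i / (E_i + E_{i+1}) for consecutive framework activities."""
    return [e_i / (e_i + e_next)
            for e_i, e_next in zip(errors_by_activity, errors_by_activity[1:])]

# 121 errors found before release, 4 defects reported by end users:
print(round(dre(121, 4), 3))   # 0.968
# Errors found per activity: analysis 50, design 25, coding 12:
print([round(x, 2) for x in dre_per_activity([50, 25, 12])])   # [0.67, 0.68]
```

A DRE close to 1 before release (as in the first call) indicates that quality assurance activities are filtering out most errors before they reach end users.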

8. Risk Analysis
Risk is an expectation of loss, a potential problem that may or may not occur in the future. It is generally caused
due to lack of information, control or time. A possibility of suffering from loss in software development process is
called a software risk. Loss can be anything, increase in production cost, development of poor quality software,
not being able to complete the project on time.

Software risk exists because the future is uncertain and there are many known and unknown things that cannot
be incorporated in the project plan.

Types of Software Risks

Software risk encompasses the probability of occurrence for uncertain events and their potential for loss within
an organization. The following are some of the risks related to project, product, and business risks.

Project Risks: Project risk is an uncertain event or condition that, if it occurs, has an effect on at least one project
objective. Risk management focuses on identifying and assessing the risks to the project and managing those risks
to minimize the impact on the project.

Technical Risks: The probability of loss incurred through the execution of a technical process in which the
outcome is uncertain. Untested engineering, technological, or manufacturing procedures entail some level of
technical risk that can result in the loss of time and resources, and possibly harm to individuals and facilities.

Business Risks: Business risk is the possibility that a company will have lower than anticipated profits, or will
experience a loss rather than a profit. A company with a higher business risk should choose a capital structure
that has a lower debt ratio to ensure it can meet its financial obligations at all times.

Known Risks: Known risks are those that can be uncovered after careful evaluation of the project, the plan, the
business and technical environment in which the project is being developed.

Predictable Risks: Predictable risks are extrapolated from past project experience.

Unpredictable Risks: These are the risks, which are extremely difficult to identify in advance.


Risk Management Process

Two types of risk management are available:

Reactive risk management: Reactive risk management attempts to reduce the tendency of the same or similar
accidents that happened in the past being repeated in the future.

Proactive risk management: Proactive risk management attempts to reduce the likelihood of any accident
happening in the future by identifying the boundaries of activities, where a breach of a boundary can lead to an
accident.

Risk management is the identification, projection, refinement and management of risks.

1. Risk identification

Risk identification is a systematic attempt to specify threats to the project plan (estimates, schedule, resource
loading, etc.). By identifying known and predictable risks, the project manager takes a first step toward avoiding
them when possible and controlling them when necessary.

The checklist can be used for risk identification and focuses on some subset of known and predictable risks in the
following generic subcategories:

 Product Size
 Business Impact
 Customer Characteristics
 Process Definition
 Development Environment
 Technology to be built
 Staff size and experience


2. Risk Projection: Risk projection, also called risk estimation, attempts to rate each risk. The project planner,
along with other managers and technical staff, performs four risk projection activities:

 Establish a scale that reflects the perceived likelihood of the risk

 Describe the consequences of the risk
 Estimate the impact of the risk on the project and on the product
 Assess the overall accuracy of the risk projection

3. Risk Refinement: Risk refinement is the Process of restating the risks as a set of more detailed risks that will be
easier to mitigate, monitor, and manage. CTC (condition-transition-consequence) format may be a good
representation for the detailed risks.

RMMM Plan (Risk Mitigation, Monitoring and Management)

Risk analysis activities are used to assist the project team in developing a strategy for dealing with risk. An
effective strategy must consider three issues:

 Risk Avoidance – Leads to mitigation

 Risk Monitoring – The project manager monitors factors that may indicate whether the risk is becoming
more or less likely.

 Risk Management – It assumes that mitigation efforts have failed and the risk has become a reality.
