



Software Engineering

As per revised syllabus of

MCA SEMESTER 1

(Mumbai University)

Year 2012



Prepared By



(For private circulation only)




INDEX

Sr.No.  Topic

1.  Software Engineering
2.  Approaches to system development
3.  Software Analysis and Design
4.  Software Project Planning
5.  Software Scheduling and Tracking
6.  Design phase activities
7.  Software Quality
8.  Software Reliability and Maintenance

Prepared By : Mission MCA



UNIT 1
Software Engineering

1.1 THE EVOLVING ROLE OF SOFTWARE
Today, software takes on a dual role. It is a product and, at the same time, the vehicle for delivering a
product. As a product, it delivers the computing potential embodied by computer hardware or, more
broadly, a network of computers that are accessible by local hardware. Whether it resides within a
cellular phone or operates inside a mainframe computer, software is an information transformer
producing, managing, acquiring, modifying, displaying, or transmitting information that can be as simple
as a single bit or as complex as a multimedia presentation. As the vehicle used to deliver the product,
software acts as the basis for the control of the computer (operating systems), the communication of
information (networks), and the creation and control of other programs (software tools and
environments).

Software delivers the most important product of our time: information. Software transforms personal
data (e.g., an individual's financial transactions) so that the data can be more useful in a local context; it
manages business information to enhance competitiveness; it provides a gateway to worldwide
information networks (e.g., Internet) and provides the means for acquiring information in all of its
forms.

The role of computer software has undergone significant change over a time span of little more than 50
years. Dramatic improvements in hardware performance, profound changes in computing
architectures, vast increases in memory and storage capacity, and a wide variety of exotic input and
output options have all precipitated more sophisticated and complex computer-based systems.
Sophistication and complexity can produce dazzling results when a system succeeds, but they can also
pose huge problems for those who must build complex systems.

Popular books published during the 1970s and 1980s provide useful historical insight into the changing
perception of computers and software and their impact on our culture. Osborne characterized a "new
industrial revolution." Toffler called the advent of microelectronics part of "the third wave of change" in
human history, and Naisbitt predicted a transformation from an industrial society to an "information
society." Feigenbaum and McCorduck suggested that information and knowledge (controlled by
computers) would be the focal point for power in the twenty-first century, and Stoll argued that the
"electronic community" created by networks and software was the key to knowledge interchange
throughout the world.

As the 1990s began, Toffler described a "power shift" in which old power structures (governmental,
educational, industrial, economic, and military) disintegrate as computers and software lead to a
"democratization of knowledge." Yourdon worried that U.S. companies might loose their competitive
edge in software related businesses and predicted the decline and fall of the American programmer.
Hammer and Champ argued that information technologies were to play a pivotal role in the
reengineering of the corporation. During the mid-1990s, the pervasiveness of computers and software
spawned a rash of books by neo-Luddites (e.g., Resisting the Virtual Life, edited by James Brook and



Iain Boal and The Future Does Not Compute by Stephen Talbot). These authors demonized the
computer, emphasizing legitimate concerns but ignoring the profound benefits that have already been
realized. During the later 1990s, Yourdon re-evaluated the prospects for the software professional and
suggested the rise and resurrection of the American programmer. As the Internet grew in
importance, his change of heart proved to be correct. As the twentieth century closed, the focus shifted
once more, this time to the impact of the Y2K time bomb.

Although the predictions of the Y2K doomsayers were incorrect, their popular writings drove home the
pervasiveness of software in our lives. Today, ubiquitous computing [NOR98] has spawned a
generation of information appliances that have broadband connectivity to the Web to provide a
blanket of connectedness over our homes, offices and motorways. Software's role continues to
expand.

The lone programmer of an earlier era has been replaced by a team of software specialists, each
focusing on one part of the technology required to deliver a complex application. And yet, the same
questions asked of the lone programmer are being asked when modern computer-based systems are
built:
- Why does it take so long to get software finished?
- Why are development costs so high?
- Why can't we find all the errors before we give the software to customers?
- Why do we continue to have difficulty in measuring progress as software is being developed?

These, and many other questions, are a manifestation of the concern about software and the manner
in which it is developed, a concern that has led to the adoption of software engineering practice.

1.2 SOFTWARE
In 1970, less than 1 percent of the public could have intelligently described what "computer software"
meant. Today, most professionals and many members of the public at large feel that they understand
software. But do they? A textbook description of software might take the following form: Software is
(1) instructions (computer programs) that when executed provide desired function and performance,
(2) data structures that enable the programs to adequately manipulate information, and
(3) documents that describe the operation and use of the programs.
There is no question that other, more complete definitions could be offered. But we need more than a
formal definition.

1.2.1 Software Characteristics

To gain an understanding of software (and ultimately an understanding of software engineering), it is
important to examine the characteristics of software that make it different from other things that
human beings build. When hardware is built, the human creative process (analysis, design, construction,
testing) is ultimately translated into a physical form. If we build a new computer, our initial sketches,
formal design drawings, and breadboarded prototype evolve into a physical product (chips, circuit
boards, power supplies, etc.).

Software is a logical rather than a physical system element. Therefore, software has characteristics that
are considerably different than those of hardware:




1. Software is developed or engineered; it is not manufactured in the classical sense. Although some
similarities exist between software development and hardware manufacture, the two activities are
fundamentally different. In both activities, high quality is achieved through good design, but the
manufacturing phase for hardware can introduce quality problems that are nonexistent (or easily
corrected) for software.

Both activities are dependent on people, but the relationship between people applied and work
accomplished is entirely different. Both activities require the construction of a "product" but the
approaches are different. Software costs are concentrated in engineering. This means that software
projects cannot be managed as if they were manufacturing projects.

2. Software doesn't "wear out."

Figure 1.1 depicts failure rate as a function of time for hardware. The relationship, often called the
"bathtub curve," indicates that hardware exhibits relatively high failure rates early in its life (these
failures are often attributable to design or manufacturing defects); defects are corrected and the failure
rate drops to a steady-state level (ideally, quite low) for some period of time.

As time passes, however, the failure rate rises again as hardware components suffer from the
cumulative effects of dust, vibration, abuse, temperature extremes, and many other environmental
maladies. Stated simply, the hardware begins to wear out. Software is not susceptible to the
environmental maladies that cause hardware to wear out. In theory, therefore, the failure rate curve for
software should take the form of the idealized curve shown in Figure 1.2.

Undiscovered defects will cause high failure rates early in the life of a program. However, these are
corrected (ideally, without introducing other errors) and the curve flattens as shown. The idealized curve
is a gross oversimplification of actual failure models (see Chapter 8 for more information) for software.
However, the implication is clear: software doesn't wear out. But it does deteriorate! This seeming
contradiction can best be explained by considering the actual curve shown in Figure 1.2. During its life,
software will undergo change (maintenance).




As changes are made, it is likely that some new defects will be introduced, causing the failure rate curve
to spike as shown in Figure 1.2. Before the curve can return to the original steady-state failure rate,
another change is requested, causing the curve to spike again. Slowly, the minimum failure rate level
begins to rise; the software is deteriorating due to change.
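
To make the two curves concrete, the sketch below models them numerically. This is a hypothetical
illustration only: the function shapes and all constants (initial rate, steady-state rate, spike size, drift)
are invented assumptions, not empirical failure data.

# Hypothetical model of the failure-rate curves described above.
def idealized_failure_rate(t, early=0.5, steady=0.05, decay=0.8):
    # Idealized software curve: high early failure rate decaying to a steady state.
    return steady + early * (decay ** t)

def actual_failure_rate(t, changes=(10, 20, 30), spike=0.3, drift=0.005):
    # Actual curve: each maintenance change adds a decaying spike, and the
    # minimum failure rate slowly rises -- deterioration due to change.
    rate = idealized_failure_rate(t) + drift * t
    for c in changes:
        if t >= c:
            rate += spike * (0.5 ** (t - c))
    return rate

for t in range(0, 40, 5):
    print(f"t={t:2d}  idealized={idealized_failure_rate(t):.3f}  actual={actual_failure_rate(t):.3f}")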


Another aspect of wear illustrates the difference between hardware and software. When a hardware
component wears out, it is replaced by a spare part. There are no software spare parts. Every software
failure indicates an error in design or in the process through which design was translated into machine
executable code. Therefore, software maintenance involves considerably more complexity than
hardware maintenance.

3. Although the industry is moving toward component-based assembly, most software continues to be
custom built. Consider the manner in which the control hardware for a computer-based product is
designed and built. The design engineer draws a simple schematic of the digital circuitry, does some
fundamental analysis to assure that proper function will be achieved, and then goes to the shelf where
catalogs of digital components exist.

Each integrated circuit (called an IC or a chip) has a part number, a defined and validated function, a
well-defined interface, and a standard set of integration guidelines. After each component is selected, it
can be ordered off the shelf. As an engineering discipline evolves, a collection of standard design
components is created. Standard screws and off-the-shelf integrated circuits are only two of thousands
of standard components that are used by mechanical and electrical engineers as they design new
systems.

The reusable components have been created so that the engineer can concentrate on the truly
innovative elements of a design, that is, the parts of the design that represent something new. In the
hardware world, component reuse is a natural part of the engineering process. In the software world, it
is something that has only begun to be achieved on a broad scale. A software component should be
designed and implemented so that it can be reused in many different programs.




In the 1960s, we built scientific subroutine libraries that were reusable in a broad array of engineering
and scientific applications. These subroutine libraries reused well-defined algorithms in an effective
manner but had a limited domain of application.

Today, we have extended our view of reuse to encompass not only algorithms but also data structures.
Modern reusable components encapsulate both data and the processing applied to the data, enabling
the software engineer to create new applications from reusable parts. For example, today's graphical
user interfaces are built using reusable components that enable the creation of graphics windows,
pull-down menus, and a wide variety of interaction mechanisms. The data structure and processing
detail required to build the interface are contained within a library of reusable components for
interface construction.

1.2.2 Software Applications

Software may be applied in any situation for which a prespecified set of procedural steps (i.e., an
algorithm) has been defined (notable exceptions to this rule are expert system software and neural
network software). Information content and determinacy are important factors in determining the
nature of a software application. Content refers to the meaning and form of incoming and outgoing
information. For example, many business applications use highly structured input data (a database) and
produce formatted reports.

Software that controls an automated machine (e.g., a numerical control) accepts discrete data items
with limited structure and produces individual machine commands in rapid succession. Information
determinacy refers to the predictability of the order and timing of information. An engineering analysis
program accepts data that have a predefined order, executes the analysis algorithm(s) without
interruption, and produces resultant data in report or graphical format. Such applications are
determinate. A multiuser operating system, on the other hand, accepts inputs that have varied content
and arbitrary timing, executes algorithms that can be interrupted by external conditions, and produces
output that varies as a function of environment and time. Applications with these characteristics are
indeterminate.

It is somewhat difficult to develop meaningful generic categories for software applications. As software
complexity grows, neat compartmentalization disappears. The following software areas indicate the
breadth of potential applications:

System software. System software is a collection of programs written to service other programs. Some
system software (e.g., compilers, editors, and file management utilities) process complex, but
determinate, information structures. Other systems applications (e.g., operating system components,
drivers, telecommunications processors) process largely indeterminate data. In either case, the system
software area is characterized by heavy interaction with computer hardware; heavy usage by multiple
users; concurrent operation that requires scheduling, resource sharing, and sophisticated process
management; complex data structures; and multiple external interfaces.

Real-time software. Software that monitors/analyzes/controls real-world events as they occur is called
real time. Elements of real-time software include a data gathering component that collects and formats
information from an external environment, an analysis component that transforms information as
required by the application, a control/output component that responds to the external environment,



and a monitoring component that coordinates all other components so that real-time response
(typically ranging from 1 millisecond to 1 second) can be maintained.

Business software. Business information processing is the largest single software application area.
Discrete "systems" (e.g., payroll, accounts receivable/payable, inventory) have evolved into
management information system (MIS) software that accesses one or more large databases containing
business information. Applications in this area restructure existing data in a way that facilitates business
operations or management decision making. In addition to conventional data processing applications,
business software applications also encompass interactive computing (e.g., point-of-sale transaction
processing).

Engineering and scientific software. Engineering and scientific software have been characterized by
"number crunching" algorithms. Applications range from astronomy to volcanology, from automotive
stress analysis to space shuttle orbital dynamics, and from molecular biology to automated
manufacturing. However, modern applications within the engineering/scientific area are moving away
from conventional numerical algorithms. Computer-aided design, system simulation, and other
interactive applications have begun to take on real-time and even system software characteristics.

Embedded software. Intelligent products have become commonplace in nearly every consumer and
industrial market. Embedded software resides in read-only memory and is used to control products and
systems for the consumer and industrial markets. Embedded software can perform very limited and
esoteric functions (e.g., keypad control for a microwave oven) or provide significant function and control
capability (e.g., digital functions in an automobile such as fuel control, dashboard displays, and braking
systems).

Personal computer software. The personal computer software market has burgeoned over the past two
decades. Word processing, spreadsheets, computer graphics, multimedia, entertainment, database
management, personal and business financial applications, external network, and database access are
only a few of hundreds of applications.

Web-based software. The Web pages retrieved by a browser are software that incorporates executable
instructions (e.g., CGI, HTML, Perl, or Java), and data (e.g., hypertext and a variety of visual and audio
formats). In essence, the network becomes a massive computer providing an almost unlimited software
resource that can be accessed by anyone with a modem.

Artificial intelligence software. Artificial intelligence (AI) software makes use of nonnumerical
algorithms to solve complex problems that are not amenable to computation or straightforward
analysis. Expert systems, also called knowledge-based systems, pattern recognition (image and voice),
artificial neural networks, theorem proving, and game playing are representative of applications within
this category.

1.3 Changing nature of software
Many industry observers (including this author) have characterized the problems associated with
software development as a "crisis." More than a few books (e.g., [GLA97], [FLO97], [YOU98a]) have
recounted the impact of some of the more spectacular software failures that have occurred over the
past decade. Yet, the great successes achieved by the software industry have led many to question



whether the term software crisis is still appropriate. Robert Glass, the author of a number of books on
software failures, is representative of those who have had a change of heart.
He states [GLA98]: "I look at my failure stories and see exception reporting, spectacular failures in the
midst of many successes, a cup that is [now] nearly full." It is true that software people succeed more
often than they fail. It is also true that the software crisis predicted 30 years ago never seemed to
materialize. What we really have may be something rather different. The word crisis is defined in
Webster's Dictionary as "a turning point in the course of anything; decisive or crucial time, stage or
event." Yet, in terms of overall software quality and the speed with which computer-based systems and
products are developed, there has been no "turning point," no "decisive time," only slow, evolutionary
change, punctuated by explosive technological changes in disciplines associated with software.

The word crisis has another definition: "the turning point in the course of a disease, when it becomes
clear whether the patient will live or die." This definition may give us a clue about the real nature of the
problems that have plagued software development. What we really have might be better characterized
as a chronic affliction. The word affliction is defined as "anything causing pain or distress." But the
definition of the adjective chronic is the key to our argument: "lasting a long time or recurring often;
continuing indefinitely." It is far more accurate to describe the problems we have endured in the
software business as a chronic affliction than a crisis.

Regardless of what we call it, the set of problems that are encountered in the development of computer
software is not limited to software that "doesn't function properly." Rather, the affliction encompasses
problems associated with how we develop software, how we support a growing volume of existing
software, and how we can expect to keep pace with a growing demand for more software. We live with
this affliction to this day; in fact, the industry prospers in spite of it. And yet, things would be much
better if we could find and broadly apply a cure.

1.4 Software Myths
Many causes of a software affliction can be traced to a mythology that arose during the early history of
software development. Unlike ancient myths that often provide human lessons well worth heeding,
software myths propagated misinformation and confusion. Software myths had a number of attributes
that made them insidious; for instance, they appeared to be reasonable statements of fact (sometimes
containing elements of truth), they had an intuitive feel, and they were often promulgated by
experienced practitioners who "knew the score."

Today, most knowledgeable professionals recognize myths for what they are: misleading attitudes that
have caused serious problems for managers and technical people alike. However, old attitudes and
habits are difficult to modify, and remnants of software myths are still believed.

Management myths. Managers with software responsibility, like managers in most disciplines, are
often under pressure to maintain budgets, keep schedules from slipping, and improve quality. Like a
drowning person who grasps at a straw, a software manager often grasps at belief in a software myth, if
that belief will lessen the pressure (even temporarily).

Myth: We already have a book that's full of standards and procedures for building software, won't that
provide my people with everything they need to know?




Reality: The book of standards may very well exist, but is it used? Are software practitioners aware of its
existence? Does it reflect modern software engineering practice? Is it complete? Is it streamlined to
improve time to delivery while still maintaining a focus on quality? In many cases, the answer to all of
these questions is "no."

Myth: My people have state-of-the-art software development tools, after all, we buy them the newest
computers.
Reality: It takes much more than the latest model mainframe, workstation, or PC to do high-quality
software development. Computer-aided software engineering (CASE) tools are more important than
hardware for achieving good quality and productivity, yet the majority of software developers still do
not use them effectively.

Myth: If we get behind schedule, we can add more programmers and catch up (sometimes called the
Mongolian horde concept).

Reality: Software development is not a mechanistic process like manufacturing. In the words of Brooks
[BRO75]: "adding people to a late software project makes it later." At first, this statement may seem
counterintuitive. However, as new people are added, people who were working must spend time
educating the newcomers, thereby reducing the amount of time spent on productive development
effort. People can be added but only in a planned and well-coordinated manner.

Myth: If I decide to outsource the software project to a third party, I can just relax and let that firm
build it.
Reality: If an organization does not understand how to manage and control software projects internally,
it will invariably struggle when it outsources software projects.

Customer myths. A customer who requests computer software may be a person at the next desk, a
technical group down the hall, the marketing/sales department, or an outside company that has
requested software under contract. In many cases, the customer believes myths about software
because software managers and practitioners do little to correct misinformation. Myths lead to false
expectations (by the customer) and ultimately, dissatisfaction with the developer.

Myth: A general statement of objectives is sufficient to begin writing programs; we can fill in the
details later.
Reality: A poor up-front definition is the major cause of failed software efforts. A formal and detailed
description of the information domain, function, behavior, performance, interfaces, design constraints,
and validation criteria is essential. These characteristics can be determined only after thorough
communication between customer and developer.

Myth: Project requirements continually change, but change can be easily accommodated because
software is flexible.
Reality: It is true that software requirements change, but the impact of change varies with the time at
which it is introduced. Figure 1.3 illustrates the impact of change. If serious attention is given to up-front
definition, early requests for change can be accommodated easily. The customer can review
requirements and recommend modifications with relatively little impact on cost. When changes are
requested during software design, the cost impact grows rapidly. Resources have been committed and a
design framework has been established. Change can cause upheaval that requires additional resources
and major design modification, that is, additional cost. Changes in function, performance, interface, or



other characteristics during implementation (code and test) have a severe impact on cost. Change, when
requested after software is in production, can be over an order of magnitude more expensive than the
same change requested earlier.

Practitioner's myths. Myths that are still believed by software practitioners have been fostered by 50
years of programming culture. During the early days of software, programming was viewed as an art
form. Old ways and attitudes die hard.

Myth: Once we write the program and get it to work, our job is done.
Reality: Someone once said that "the sooner you begin 'writing code', the longer it'll take you to get
done." Industry data indicate that between 60 and 80 percent of all effort expended on software will be
expended after it is delivered to the customer for the first time.

Myth: Until I get the program "running" I have no way of assessing its quality.
Reality: One of the most effective software quality assurance mechanisms can be applied from the
inception of a project: the formal technical review. Software reviews are a "quality filter" that has been
found to be more effective than testing for finding certain classes of software defects.

Myth: The only deliverable work product for a successful project is the working program.
Reality: A working program is only one part of a software configuration that includes many elements.
Documentation provides a foundation for successful engineering and, more important, guidance for
software support.

Myth: Software engineering will make us create voluminous and unnecessary documentation and will
invariably slow us down.
Reality: Software engineering is not about creating documents. It is about creating quality. Better
quality leads to reduced rework. And reduced rework results in faster delivery times. Many software
professionals recognize the fallacy of the myths just described. Regrettably, habitual attitudes and
methods foster poor management and technical practices, even when reality dictates a better approach.
Recognition of software realities is the first step toward formulation of practical solutions for software
engineering.




UNIT 2
Approaches to system development


System Development Life Cycle (SDLC)

The System Development Life Cycle (SDLC) is a method of System Development that consists of 5
phases: Planning, Analysis, Design, Implementation, and Support. The first four phases of Planning,
Analysis, Design and Implementation are undertaken during development of the project, while the last
phase of Support is undertaken post-completion of the project. Each phase has some activities
associated with it, and each activity may have some tasks associated with it.
1. Planning Phase
Following are the activities of the Planning Phase:
i] Define the Problem
- Meeting the Users
- Determine scope of Problem
- Define System capabilities
ii] Confirm Project Feasibility :
- Identify intangible costs & benefits
- Estimate tangible, developmental, & operational costs
- Calculate NPV, ROI, Payback (see the sketch after this list)
- Consider technical, cultural, schedule feasibility of the Project
iii] Plan Project Schedule (Chart out a complete project schedule, including the activities
and tasks of each phase.)
iv] Staff the Project (Provide required staff, such as the Analysts, the Programmers, the End-Users, etc.)
v] Launch the Project (Begin actual work on the Project)
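The feasibility activity above calls for NPV, ROI and payback calculations; these are simple arithmetic
over estimated costs and benefits. The sketch below is a minimal, hypothetical Python illustration: the
cash flows and the 10% discount rate are invented figures, not part of the syllabus material.

# Hypothetical feasibility arithmetic for the Planning Phase; all figures are invented.
def npv(rate, cash_flows):
    # Net Present Value: discount each year's net cash flow back to year 0.
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def roi(total_benefits, total_costs):
    # Simple Return on Investment as a fraction of cost.
    return (total_benefits - total_costs) / total_costs

def payback_period(cash_flows):
    # First year in which cumulative net cash flow turns non-negative.
    cumulative = 0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None  # never pays back within the horizon

# Year 0: development cost; years 1-4: net operational benefits.
flows = [-100000, 30000, 40000, 40000, 40000]
print("NPV @ 10%:", round(npv(0.10, flows), 2))          # positive NPV favours the project
print("ROI:", round(roi(sum(flows[1:]), -flows[0]), 2))  # 0.5 here
print("Payback year:", payback_period(flows))            # year 3 here
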
2. Analysis Phase :
Following are the activities of the Analysis Phase:



i] Gather information
- Meet the User to understand all aspects of the Problem
- Obtain information by observing business procedures, asking questions to the user, studying existing
documents, reviewing existing systems, etc.
ii] Define System Requirements (Review & analyze obtained information and structure it to
understand requirements of new system, using graphical tools.)
iii] Build Prototype for Discovery of Requirements (Build pieces of System for Users to review)
iv] Prioritize Requirements (Arrange requirements in order of importance)
v] Generate & Evaluate alternatives (Research alternative solutions while building system
requirements.)
vi] Review recommendations with Management (Discuss all possible alternatives with Management and
finalize best alternative)

3. Design Phase :

Following are the activities of the Design Phase:
i] Design & Integrate Network (Understand Network Specifications of Organization, such as Computer
equipment, Operating Systems, Platforms, etc.)
ii] Design Application Architecture
- Design model diagrams according to the problem
- Create the required computer program modules
iii] Design User Interfaces (Design the required forms, reports, user screens, and decide on the sequence
of interaction.)
iv] Design System Interface (Understand how the new system will interact with the existing systems of
the organization)
v] Design & Integrate Database (Prepare a database scheme and implement it into the system) .
vi] Build Prototype for Design Details (Check workability of the proposed design using a
prototype.)



vii] Design & Integrate System Controls (Incorporate facilities such as login and password
protection to protect the integrity of the database and the application program.)
4. Implementation Phase :
Following are the activities of the Implementation Phase:
i] Construct Software Components (Write code for the design, using programming languages such as
Java, VB, etc.)
ii] Verify & Test Software (Check the functionality of the software components.)
iii] Build Prototype for Tuning (Make the software components more efficient using a prototype, to
make the system capable of handling large volumes of transactions.)
iv] Convert Data (Incorporate data from existing system into new system and make sure it is updated
and compatible with the new system.)
v] Train & Document (Train users to use the new system, and prepare the documentation.)
vi] Install Software (Install the software and make sure all components are running properly and check
for database access.)
5. Support Phase :
Following are the activities of the Support Phase:
i] Provide support to End-Users (Provide a helpdesk facility and training programs, to provide support to
end users.)
ii] Maintain & Enhance new System (Keep the system running error-free, and provide upgrades to
keep the system contemporary.)
Classic Lifecycle Model :
This model is also known as the waterfall or linear sequential model. This model demands a systematic
and sequential approach to software development that begins at the system level and progresses
through analysis, design, coding, testing and maintenance. Figure 1.1 shows a diagrammatic
representation of this model.





The life-cycle paradigm incorporates the following activities:
System engineering and analysis : Work on software development begins by establishing the
requirements for all elements of the system. System engineering and analysis involves gathering of
requirements at the system level, as well as basic top-level design and analysis. The requirement
gathering focuses especially on the software. The analyst must understand the information domain of
the software as well as the required function, performance and interfacing. Requirements are
documented and reviewed with the client.
Design: Software design is a multi-step process that focuses on data structures, software architecture,
procedural detail, and interface characterization. The design process translates requirements into a
representation of the software that can be assessed for quality before coding begins. The design phase
is also documented and becomes a part of the software configuration.
Coding: The design must be translated into a machine-readable form. Coding performs this task. If the
design phase is dealt with in detail, the coding can be done mechanically.
Testing : Once code is generated, it has to be tested. Testing focuses on the logic as well as the function
of the program to ensure that the code is error free and that o/p matches the requirement
specifications.
Maintenance : Software undergoes change with time. Changes may occur on account of errors
encountered, to adapt to changes in the external environment or to enhance the functionality and / or
performance. Software maintenance reapplies each of the preceding life cycle phases to the existing program.




The classic life cycle is one of the oldest models in use. However, there are a few associated problems.
Some of the disadvantages are given below.
1. Real projects rarely follow the sequential flow that the model proposes. Iteration always occurs and
creates problems in the application of the model.
2. It is difficult for the client to state all requirements explicitly. The classic life cycle requires this and it
is thus difficult to accommodate the natural uncertainty that occurs at the beginning of any new
project.
3. A working version of the program is not available until late in the project time span. A major
blunder may remain undetected until the working program is reviewed, which is potentially
disastrous.
In spite of these problems the life-cycle method has an important place in software engineering work.
Some of the reasons are given below.
1. The model provides a template into which methods for analysis, design, coding, testing and
maintenance can be placed.
2. The steps of this model are very similar to the generic steps that are applicable to all software
engineering models.
3. It is significantly preferable to a haphazard approach to software development.

Prototype Model :

Often a customer has defined a set of objectives for software, but not identified the detailed input,
processing or output requirements. In other cases, the developer may be unsure of the efficiency of an
algorithm, the adaptability of the operating system or the form that the human-machine interaction
should take. In these situations, a prototyping approach may be the best choice. Prototyping is a
process that enables the developer to create a model of the software that must be built. The sequence
of events for the prototyping model is illustrated in figure 1.2. Prototyping begins with requirements
gathering.
The developer and the client meet and define the overall objectives for the software, identify the
requirements, and outline areas where further definition is required. In the next phase a quick design is
created. This focuses on those aspects of the software that are visible to the user (e.g. i/p approaches
and o/p formats). The quick design leads to the construction of the prototype. This prototype is
evaluated by the client / user and is used to refine requirements for the software to be developed. A



process of iteration occurs as the prototype is tuned to satisfy the needs of the client, while at the
same time enabling the developer to more clearly understand what needs to be done.



The prototyping model has a few associated problems.
Disadvantages:
1. The client sees what is apparently a working version of the software, unaware that in the rush to
develop a working model, software quality and long-term maintainability have not been considered.
When informed that the system must be rebuilt, most clients demand that the existing application be
fixed and made a working product. Often software developers are forced to relent.
2. The developer often makes implementation compromises to develop a working model quickly. An
inappropriate operating system or language may be selected simply because of availability. An
inefficient algorithm may be used to demonstrate capability. Eventually the developer may become
comfortable with these choices and incorporate them as an integral part of the system.



Although problems may occur, prototyping may be an effective model for software engineering. Some of
the advantages of this model are enumerated below.
Advantages:
1. It is especially useful in situations where requirements are not clearly defined at the beginning and
are not fully understood by either the client or the developer.
2. Prototyping is also helpful in situations where an application is built for the first time with no
precedents to be followed. In such circumstances, unforeseen eventualities may occur which cannot
be predicted and can only be dealt with when encountered.

Spiral Model :
The spiral model in software engineering has been designed to incorporate the best features of both the
classic life cycle and the prototype models, while at the same time adding an element of risk
analysis that is missing in these models. The model, represented in figure 1.3, defines four major
activities defined by the four quadrants of the figure :
Planning : Determination of objectives, alternatives and constraints.
Risk analysis : Analysis of alternatives and identification and resolution of risks.
Engineering : Development of the next level product.
Customer evaluation : Assessment of the results of engineering.

An interesting aspect of the spiral model is the radial dimension as depicted in the figure. With each
successive iteration around the spiral, progressively more complete versions of the software are built.
During the first circuit around the spiral, objectives, alternatives and constraints are defined and risks
are identified and analyzed. If risk analysis indicates that there is an uncertainty in the requirements,
prototyping may be used in the engineering quadrant to assist both the developer and the client. The
client now evaluates the engineering work and makes suggestions for improvement.
At each loop around the spiral, the risk analysis results in a go / no-go decision. If risks are too great the
project can be terminated.
In most cases however, the spiral flow continues outward toward a more complete model of the system,
and ultimately to the operational system itself. Every circuit around the spiral requires engineering that
can be accomplished using the life cycle or the prototype models. It should be noted that the number of
development activities increases as activities move away from the center of the spiral.
Like all other models, the spiral model too has a few associated problems, which are discussed below.
Disadvantages :
It may be difficult to convince clients that the evolutionary approach is controllable.



It demands considerable risk assessment expertise and relies on this for success.
If a major risk is not uncovered, problems will undoubtedly occur.
The model is relatively new and has not been as widely used as the life cycle or the prototype
models. It will take a few more years to determine the efficacy of this process with certainty.

This model however is one of the most realistic approaches available for software engineering. It also
has a few advantages, which are discussed below.

Advantages :
The evolutionary approach enables developers and clients to understand and react to risks at each
evolutionary level.
It uses prototyping as a risk reduction mechanism and allows the developer to use this approach at
any stage of the development.
It uses the systematic approach suggested by the classic life cycle method but incorporates it into an
iterative framework that is more realistic.
This model demands an evaluation of risks at all stages and should reduce risks before they become
problematic, if properly applied.




Component Assembly Model :

Object oriented technologies provide the technical framework for a component based process model for
software engineering. This model emphasizes the creation of classes that encapsulate both data and the
algorithms used to manipulate the data. The component-based development (CBD) model incorporates
many characteristics of the spiral model. It is evolutionary in nature, thus demanding an iterative
approach to software creation. However, the model composes applications from pre-packaged software
components called classes. The engineering begins with the identification of candidate classes. This is
done by examining the data to be manipulated, and the algorithms that will be used to accomplish this
manipulation. Corresponding data and algorithms are packaged into a class. Classes created in past
applications are stored in a class library. Once candidate classes are identified the class library is
searched to see if a match exists. If it does, these classes are extracted from the library and reused. If it
does not exist, it is engineered using object-oriented techniques. The first iteration of the application is
then composed. Process flow moves to the spiral and will ultimately re-enter the CBD during subsequent
passes through the engineering activity.
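
As a minimal, hypothetical Python sketch of these ideas (the Invoice class, its attributes, and the library
lookup are invented for demonstration; this is not a prescribed CBD implementation):

# A candidate class packages data with the algorithms that manipulate it.
class Invoice:
    def __init__(self):
        self.line_items = []                      # the data to be manipulated

    def add_item(self, description, amount):
        self.line_items.append((description, amount))

    def total(self):                              # the algorithm applied to the data
        return sum(amount for _, amount in self.line_items)

# Classes created in past applications are stored in a class library.
class_library = {"Invoice": Invoice}

def get_or_engineer(name, engineer):
    # Reuse a library class if a match exists; otherwise engineer a new one
    # with object-oriented techniques and store it for future passes.
    if name in class_library:
        return class_library[name]
    class_library[name] = engineer()
    return class_library[name]

InvoiceClass = get_or_engineer("Invoice", engineer=lambda: Invoice)
invoice = InvoiceClass()
invoice.add_item("Consulting", 1500.0)
print(invoice.total())                            # 1500.0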

Advantages :
The CBD model leads to software reuse, and reusability provides software engineers with a number
of measurable benefits.

This model leads to a 70% reduction in development cycle time and an 84% reduction in project
cost.

Disadvantages :
The results mentioned above are inherently dependent on the robustness of the component library.





There is little question in general that the CBD model provides a significant advantage for software
engineers.

Rapid Application Development (RAD) Model :

Rapid Application Development is an incremental software development process model that emphasizes
an extremely short development cycle. The RAD model is a high-speed adaptation of the linear
sequential model in which rapid development is achieved by using component-based construction.



If requirements are well understood and project scope is constrained, the RAD model enables a
development team to create a fully functional system within 60-90 days. Used primarily for information
system applications, the RAD approach encompasses the following phases :
Business modeling : The information flow among business functions is modeled so as to understand
the following:
i) The information that drives the business process.
ii) The information generated.
iii) The source and destination of the information generated.
iv) The processes that affect this information.





Data modeling : The information flow defined as part of the business-modeling phase is refined
into a set of data objects that are needed to support the business. The attributes of each object are
identified and the relationships between these objects are defined (see the sketch after this list).

Process modeling: The data objects defined in the previous phase are transformed to achieve the
information flow necessary to implement a business function. Processing descriptions are created
for data manipulation.

Application generation : RAD assumes the use of fourth generation techniques. Rather than using
third generation languages, the RAD process works to reuse existing programming components
whenever possible or create reusable components. In all cases, automated tools are used to
facilitate construction.

Testing and turnover: Since RAD emphasizes reuse, most of the components have already been
tested. This reduces overall testing time. However, new components must be tested and all
interfaces must be fully exercised.
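
The data modeling sketch referenced above, as a minimal, hypothetical Python illustration (the Customer
and Order objects, their attributes, and the one-to-many relationship are invented for demonstration):

from dataclasses import dataclass, field

# Data objects with explicit attributes; the relationship between them
# (a Customer places many Orders) is modeled as a reference.
@dataclass
class Order:
    order_id: int
    amount: float                     # attribute of the Order object

@dataclass
class Customer:
    customer_id: int
    name: str                         # attribute of the Customer object
    orders: list = field(default_factory=list)   # one-to-many relationship

alice = Customer(customer_id=1, name="Alice")
alice.orders.append(Order(order_id=101, amount=250.0))
print(len(alice.orders))              # 1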

In general, if a business function can be modularized in a way that enables each function to be
completed in less than three months, it is a candidate for RAD. Each major function can be addressed by
a separate RAD team and then integrated to form a whole.

Advantages :
Modularized approach to development
Creation and use of reusable components
Drastic reduction in development time

Disadvantages :
For large projects, sufficient human resources are needed to create the right number of RAD
teams.
Not all types of applications are appropriate for RAD. If a system cannot be modularized, building
the necessary components for RAD will be difficult.
Not appropriate when the technical risks are high. For example, when an application makes heavy
use of new technology or when the software requires a high degree of interoperability with
existing programs.





Incremental Model :

This model combines elements of the linear sequential model with the iterative philosophy of
prototyping. The incremental model applies linear sequences in a staggered fashion as time progresses.
Each linear sequence produces a deliverable increment of the software. For example, word processing
software may deliver basic file management, editing and document production functions in the first
increment; more sophisticated editing and document production in the second increment; spelling and
grammar checking in the third increment; advanced page layout in the fourth increment; and so on.
The process flow for any increment can incorporate the prototyping model. When an incremental model
is used, the first increment is often a core product. Hence, basic requirements are met, but
supplementary features remain undelivered. The client uses the core product. As a result of his
evaluation, a plan is developed for the next increment. The plan addresses improvement of the core
features and addition of supplementary features. This process is repeated following delivery of each
increment, until the complete product is produced. As opposed to prototyping, incremental models
focus on the delivery of an operational product after every iteration.


Figure 1.6 The Incremental Model.




Advantages Of Incremental Model :
1. Particularly useful when staffing is inadequate for a complete implementation by the business
deadline.
2. Early increments can be implemented with fewer people. If the core product is well received,
additional staff can be added to implement the next increment.
3. Increments can be planned to manage technical risks. For example, the system may require
availability of some hardware that is under development. It may be possible to plan early
increments without the use of this hardware, thus enabling partial functionality and avoiding
unnecessary delay.

Extreme Programming (XP) :

The most widely used agile process, originally proposed by Kent Beck.
XP Planning :
Begins with the creation of user stories.
Agile team assesses each story and assigns a cost.
Stories are grouped to form a deliverable increment
A commitment is made on a delivery date
After the first increment, project velocity is used to help define subsequent delivery dates for
other increments (see the sketch after this list).
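
The velocity sketch referenced above, as a minimal, hypothetical Python illustration (the story-point
figures and iteration history are invented for demonstration):

# Hypothetical sketch of XP project velocity used for release planning.
completed_points = [18, 22, 20]                 # story points finished in past iterations
velocity = sum(completed_points) / len(completed_points)

remaining_points = 120                          # total cost assigned to remaining stories
iterations_needed = -(-remaining_points // int(velocity))   # ceiling division

print(f"velocity = {velocity:.1f} points/iteration")
print(f"estimated iterations for remaining stories: {iterations_needed}")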







XP Design :
Follows the KIS (keep it simple) principle.
For difficult design problems, suggests the creation of spike solutions, a design prototype.
Encourages refactoring, an iterative refinement of the internal program design.

XP Coding :
Recommends the construction of a unit test for a story before coding commences (see the sketch below)
Encourages pair programming.
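
A minimal, hypothetical sketch of the test-first practice using Python's standard unittest module (the
add function and both tests are invented for demonstration):

import unittest

# The unit test is written first, before the implementation exists, as XP recommends.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negatives(self):
        self.assertEqual(add(-1, -1), -2)

# The simplest implementation that makes the tests pass is then written.
def add(a, b):
    return a + b

if __name__ == "__main__":
    unittest.main()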

XP Testing :
All unit tests are executed daily.
Acceptance tests are defined by the customer and executed to assess customer visible
functionality.





Formal Methods Model :

1. The formal methods model encompasses a set of activities that leads to formal mathematical
specification of computer software.
2. Formal methods enable a software engineer to specify, develop, and verify a computer-based
system by applying a rigorous mathematical notation.

3. When formal methods are used during development, they provide a mechanism for eliminating
many of the problems that are difficult to overcome using other software engineering paradigms.
Ambiguity, incompleteness, and inconsistency can be discovered and corrected more easily, not
through ad hoc review but through the application of mathematical analysis.

4. When formal methods are used during design, they serve as a basis for program verification and
therefore enable the software engineer to discover and correct errors that might otherwise go undetected.

5. The formal methods model offers the promise of defect-free software.
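
As an illustration of what a rigorous mathematical specification looks like, the following pre/postcondition
pair for an integer square root function is a standard textbook-style example (it is not taken from this
text):

\[
\textit{isqrt}(n): \qquad \textbf{pre: } n \geq 0, \qquad \textbf{post: } r^2 \leq n \;\wedge\; (r+1)^2 > n
\]

Here r denotes the returned value. Verification then means proving that the implementation satisfies the
postcondition for every input permitted by the precondition, rather than testing a sample of inputs.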

Drawbacks Of Formal Methods Model :
1. The development of formal models is quite time consuming and expensive.
2. Because few software developers have the necessary background to apply formal methods,
extensive training is required.
3. It is difficult to use the models as a communication mechanism for technically unsophisticated
customers.


EVOLUTIONARY SOFTWARE PROCESS MODELS
There is growing recognition that software, like all complex systems, evolves over a period of time
[GIL88]. Business and product requirements often change as development proceeds, making a straight
path to an end product unrealistic; tight market deadlines make completion of a comprehensive
software product impossible, but a limited version must be introduced to meet competitive or business
pressure; a set of core product or system requirements is well understood, but the details of product or
system extensions have yet to be defined.

In these and similar situations, software engineers need a process model that has been explicitly
designed to accommodate a product that evolves over time. The linear sequential model (Section 2.4) is
designed for straight-line development. In essence, this waterfall approach assumes that a complete
system will be delivered after the linear sequence is completed. The prototyping model (Section 2.5) is
designed to assist the customer (or developer) in understanding requirements. In general, it is not
designed to deliver a production system.




The evolutionary nature of software is not considered in either of these classic software engineering
paradigms. Evolutionary models are iterative. They are characterized in a manner that enables software
engineers to develop increasingly more complete versions of the software.

2.7.1 The Incremental Model

The incremental model combines elements of the linear sequential model (applied repetitively) with the
iterative philosophy of prototyping. Referring to Figure 2.7, the incremental model applies linear
sequences in a staggered fashion as calendar time progresses. Each linear sequence produces a
deliverable increment of the software [MDE93].

For example, word-processing software developed using the incremental paradigm might deliver basic
file management, editing, and document production functions in the first increment; more sophisticated
editing and document production capabilities in the second increment; spelling and grammar checking
in the third increment; and advanced page layout capability in the fourth increment. It should be noted
that the process flow for any increment can incorporate the prototyping paradigm.

When an incremental model is used, the first increment is often a core product. That is, basic
requirements are addressed, but many supplementary features (some known, others unknown) remain
undelivered. The core product is used by the customer (or undergoes detailed review). As a result of use
and/or evaluation, a plan is developed for the next increment. The plan addresses the modification of
the core product to better meet the needs of the customer and the delivery of additional features and
functionality. This process is repeated following the delivery of each increment, until the complete
product is produced.


The incremental process model, like prototyping (Section 2.5) and other evolutionary approaches, is
iterative in nature. But unlike prototyping, the incremental model focuses on the delivery of an
operational product with each increment. Early increments are stripped down versions of the final
product, but they do provide capability that serves the user and also provide a platform for evaluation



by the user. Incremental development is particularly useful when staffing is unavailable for a complete
implementation by the business deadline that has been established for the project. Early increments can
be implemented with fewer people.

If the core product is well received, then additional staff (if required) can be added to implement the
next increment. In addition, increments can be planned to manage technical risks. For example, a major
system might require the availability of new hardware that is under development and whose delivery
date is uncertain. It might be possible to plan early increments in a way that avoids the use of this
hardware, thereby enabling partial functionality to be delivered to end-users without inordinate delay.

2.7.2 The Spiral Model

The spiral model, originally proposed by Boehm [BOE88], is an evolutionary software process model that
couples the iterative nature of prototyping with the controlled and systematic aspects of the linear
sequential model. It provides the potential for rapid development of incremental versions of the
software. Using the spiral model, software is developed in a series of incremental releases.

During early iterations, the incremental release might be a paper model or prototype. During later
iterations, increasingly more complete versions of the engineered system are produced. A spiral model
is divided into a number of framework activities, also called task regions. Typically, there are between
three and six task regions. Figure 2.8 depicts a spiral model that contains six task regions:

Customer communication : tasks required to establish effective communication between developer
and customer.
Planning : tasks required to define resources, timelines, and other project-related information.
Risk analysis : tasks required to assess both technical and management risks.
Engineering : tasks required to build one or more representations of the application.
Construction and release : tasks required to construct, test, install, and provide user support (e.g.,
documentation and training).







Customer evaluation : tasks required to obtain customer feedback based on evaluation of the
software representations created during the engineering stage and implemented during the installation
stage. Each of the regions is populated by a set of work tasks, called a task set, that are adapted to the
characteristics of the project to be undertaken. For small projects, the number of work tasks and their
formality is low. For larger, more critical projects, each task region contains more work tasks that are
defined to achieve a higher level of formality. In all cases, the umbrella activities (e.g., software
configuration management and software quality assurance) noted in Section 2.2 are applied.

As this evolutionary process begins, the software engineering team moves around the spiral in a
clockwise direction, beginning at the center. The first circuit around the spiral might result in the
development of a product specification; subsequent passes around the spiral might be used to develop a
prototype and then progressively more sophisticated versions of the software. Each pass through the
planning region results in adjustments to the project plan.

Cost and schedule are adjusted based on feedback derived from customer evaluation. In addition, the
project manager adjusts the planned number of iterations required to complete the software. Unlike
classical process models that end when software is delivered, the spiral model can be adapted to apply
throughout the life of the computer software. An alternative view of the spiral model can be considered
by examining the project entry point axis, also shown in Figure 2.8. Each cube placed along the axis can
be used to represent the starting point for different types of projects. A concept development project
starts at the core of the spiral and will continue (multiple iterations occur along the spiral path that
bounds the central shaded region) until concept development is complete. If the concept is to be
developed into an actual product, the process proceeds through the next cube (new product
development project entry point) and a new development project is initiated. The new product will



evolve through a number of iterations around the spiral, following the path that bounds the region that
has somewhat lighter shading than the core.

In essence, the spiral, when characterized in this way, remains operative until the software is retired.
There are times when the process is dormant, but whenever a change is initiated, the process starts at
the appropriate entry point (e.g., product enhancement). The spiral model is a realistic approach to the
development of large-scale systems and software. Because software evolves as the process progresses,
the developer and customer better understand and react to risks at each evolutionary level.

The spiral model uses prototyping as a risk reduction mechanism but, more important, enables the
developer to apply the prototyping approach at any stage in the evolution of the product. It maintains
the systematic stepwise approach suggested by the classic life cycle but incorporates it into an iterative
framework that more realistically reflects the real world.

The spiral model demands a direct consideration of technical risks at all stages of the project and, if
properly applied, should reduce risks before they become problematic. But like other paradigms, the
spiral model is not a panacea. It may be difficult to convince customers (particularly in contract
situations) that the evolutionary approach is controllable. It demands considerable risk assessment
expertise and relies on this expertise for success. If a major risk is not uncovered and managed,
problems will undoubtedly occur. Finally, the model has not been used as widely as the linear sequential
or prototyping paradigms. It will take a number of years before the efficacy of this important paradigm
can be determined with certainty.

2.7.4 The Concurrent Development Model

The concurrent development model, sometimes called concurrent engineering, has been described in
the following manner by Davis and Sitaram [DAV94]: Project managers who track project status in terms
of the major phases [of the classic life cycle] have no idea of the status of their projects. These are
examples of trying to track extremely complex sets of activities using overly simple models. Note that
although . . . [a large] project is in the coding phase, there are personnel on the project involved in
activities typically associated with many phases of development simultaneously. For example,. . .
personnel are writing requirements, designing, coding, testing, and integration testing [all at the same
time]. Software engineering process models by Humphrey and Kellner [[HUM89], [KEL89]] have shown
the concurrency that exists for activities occurring during any one phase.

Kellner's more recent work [KEL91] uses statecharts [a notation that represents the states of a process]
to represent the concurrent relationship existent among activities associated with a specific event (e.g.,
a requirements change during late development), but fails to capture the richness of concurrency that
exists across all software development and management activities in the project. . . . Most software
development process models are driven by time; the later it is, the later in the development process you
are. [A concurrent process model] is driven by user needs, management decisions, and review results.

The concurrent process model can be represented schematically as a series of major technical activities,
tasks, and their associated states. For example, the engineering activity defined for the spiral model
(Section 2.7.2) is accomplished by invoking the following tasks: prototyping and/or analysis modeling,
requirements specification, and design. Figure 2.10 provides a schematic representation of one activity



with the concurrent process model. The activity (analysis, in this case) may be in any one of the states
noted at any given time. Similarly, other activities (e.g., design or customer communication) can be
represented in an analogous manner. All activities exist concurrently but reside in different states. For
example, early in a project the customer communication activity (not shown in the figure) has
completed its first iteration and exists in the awaiting changes state. The analysis activity (which existed
in the none state while initial customer communication was completed) now makes a transition into the
under development state. If, however, the customer indicates that changes in requirements must be
made, the analysis activity moves from the under development state into the awaiting changes state.

The concurrent process model defines a series of events that will trigger transitions from state to state
for each of the software engineering activities. For example, during the early stages of design, an
inconsistency in the analysis model may be uncovered. This generates the event analysis model
correction, which will trigger a transition of the analysis activity from the done state into the awaiting
changes state.
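To make the state/event mechanics concrete, the following is a minimal sketch of the transitions just
described for the analysis activity. The state and event names are illustrative simplifications drawn from
the discussion above, not from any real process-modeling tool.

#include <stdio.h>

typedef enum { NONE, UNDER_DEVELOPMENT, AWAITING_CHANGES, DONE } State;
typedef enum { BASELINED, CHANGE_REQUESTED, CORRECTION_NEEDED } Event;

/* Return the next state of the analysis activity for a given event. */
State next_state(State current, Event e) {
    switch (e) {
    case BASELINED:           /* analysis work for this iteration is finished */
        return (current == UNDER_DEVELOPMENT) ? DONE : current;
    case CHANGE_REQUESTED:    /* customer changes the requirements */
        return (current == UNDER_DEVELOPMENT) ? AWAITING_CHANGES : current;
    case CORRECTION_NEEDED:   /* design uncovers an analysis-model inconsistency */
        return (current == DONE) ? AWAITING_CHANGES : current;
    }
    return current;           /* no transition defined for this state/event pair */
}

int main(void) {
    State analysis = UNDER_DEVELOPMENT;
    analysis = next_state(analysis, BASELINED);          /* -> done */
    analysis = next_state(analysis, CORRECTION_NEEDED);  /* -> awaiting changes */
    printf("final state: %d\n", analysis);
    return 0;
}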

The concurrent process model is often used as the paradigm for the development of client/server
applications (Chapter 28). A client/server system is composed of a set of functional components. When
applied to client/server, the concurrent process model defines activities in two dimensions [SHE94]: a



system dimension and a component dimension. System level issues are addressed using three activities:
design, assembly, and use. The component dimension is addressed with two activities: design and
realization. Concurrency is achieved in two ways:

(1) system and component activities occur simultaneously and can be modeled using the state-oriented
approach described previously; (2) a typical client/server application is implemented with many
components, each of which can be designed and realized concurrently. In reality, the concurrent process
model is applicable to all types of software development and provides an accurate picture of the current
state of a project. Rather than confining software engineering activities to a sequence of events, it
defines a network of activities. Each activity on the network exists simultaneously with other activities.
Events generated within a given activity or at some other place in the activity network trigger transitions
among the states of an activity.





UNIT 3
Software Analysis and Design

Introduction:

The requirement analysis task is a process of discovery, refinement, modeling and specification. The
software scope is refined in detail. Models of the required information, control flow, operational
behavior and data content are created. Alternative solutions are analyzed and allocated to various
software elements.
Both the developer and the customer take an active role in requirements analysis and specification. The
customer attempts to reformulate a sometimes unclear concept of software function and
performance into concrete detail. The developer acts as interrogator, consultant and problem-solver.
Requirement analysis is a software engineering task that bridges the gap between system-level
software allocation and software design.

It enables the system engineer to specify software function and performance, indicate software's
interface with other system elements and establish design constraints that the software must meet.

It allows the software engineer to refine the software allocation and build models of the process,
data and behavioral domains that will be treated by software.

It provides the software designer with a representation of information and function that can be
translated into data, architectural and procedural design.

It also provides the developer and the client with the means to assess quality once the software is
built.

The principles of requirement analysis call upon the analyst to systematically approach the
specification of the system to be developed. This means that the analysis has to be done using the
available information. Generally, all computer systems are looked upon as information processing
systems, since they process data input and produce a useful output.

The logical view of a system gives the overall feel of how the system operates. Any system performs
three generic functions: input, output and processing. The logical view focuses on the problem-specific



functions. This helps the analyst to identify the functional model of the system. The functional model
begins with a single context level model. Over a series of iterations, more and more functional detail is
provided, until all system functionality is represented.

The physical view of the system focuses on the operations being performed on the data that is either
taken as input or generated as output. This view determines the actions to be performed on the data
under specific conditions. This helps the analyst to identify the behavioral model of the system. The
analyst can determine an observable mode of behavior that changes only when some event occurs.
Examples of such events are:
i) An internal clock indicating some specified time has passed.
ii) A mouse movement.
iii) An external time signal.

What are System Requirements?

System Requirements are the functions that our system must perform. During planning, the
Analyst defines system capabilities; during analysis, the Analyst expands these into a set of
system requirements.
There are two types of System Requirements:

Functional : activities that a system must perform with respect to the organisation.
Technical : operational objectives related to the environment, hardware, and software of the
organization.

In functional requirements, for example, if a Payroll System is being developed, then it is required to
calculate salary, print paychecks, calculate taxes, net salary etc.
In technical requirements, for example, the system may be required to support multiple terminals
with the same response time, or may be required to run on a specific operating system.





Sources of System Requirements

The Stakeholders
The Stakeholders of the System are considered as the primary source of information for functional
system requirements.
Stakeholders are people who have an interest in the successful implementation of your system.
There are three groups of stakeholders:
(a) Users who use the system on a daily basis
(b) Clients who pay for and own the system
(c) Technical staff i.e. the people who must ensure that the system operates in the computing
environment of the organization.
The analyst's first task during analysis is to (a) identify every type of stakeholder and (b) identify the
critical person from each type (group) of stakeholders.

User Stakeholders
User Stakeholders are identified into 2 types: (a) Vertical and (b) Horizontal.
Horizontal implies that an analyst needs to look at information flow across departments or
functions.
For example, a new inventory system may affect multiple departments, such as sales,
manufacturing, etc, so these departments need to be identified, so as to collect information relevant
to them.
Vertical implies that an analyst needs to look at information flow across job levels, such as clerical
staff, middle management, executives, etc.
Each of these users may need the system to perform different functions with respect to
themselves.
A Transaction is the single occurrence of a piece of work or an activity done in an organization.
A Query is a request for information from a system or from a database.

Analysis tasks :

All analysis methods are related by a set of fundamental principles:
The information domain of the problem must be represented and understood.
Models that depict system information function and behavior should be developed.
The models and the problem must be partitioned in a manner that uncovers detail in a layered or
hierarchical fashion.
The analysis process should move from essential information to implementation detail.
Software requirement analysis may be divided into five areas of effort:

i) Problem recognition :




Initially, the analyst studies the system specification and the software project plan. Next communication
for analysis must be established so that problem recognition is ensured. The analyst must establish
contact with management and the technical staff of the user/customer organization and the software
development organization. The project manager can serve as a coordinator to facilitate establishment of
communication paths. The objective of the analyst is to recognize the basic problem elements as
perceived by the client.

ii) Evaluation and synthesis :

Problem evaluation and synthesis is the next major area of effort for analysis. The analyst
must evaluate the flow and content of information, define and elaborate all software functions,
understand software behavior in the context of events that affect the system, establish interface
characteristics and uncover design constraints. Each of these tasks serves to define the problem so that
an overall approach may be synthesized.
iii) Modeling :

We create models to gain a better understanding of the actual entity to be built. The software model
must be capable of modeling the information that software transforms, the functions that enable the
transformation to occur, and the behavior of the system during transformation. Models created serve a
number of important roles:
The model aids the analyst in understanding the information, function and behavior of the
system, thus making the requirement analysis easier and more systematic.

The model becomes the focal point for review and the key to determining the
completeness, consistency and accuracy of the specification.

The model becomes the foundation for design, providing the designer with an essential
representation of software that can be mapped into an implementation context.

iv) Specification :
There is no doubt that the mode of specification has much to do with the quality of the solution. The
quality, timeliness and completeness of the software may be adversely affected by incomplete or
inconsistent specifications. Software requirements may be analyzed in a number of ways. These analysis
techniques lead to a paper or computer-based specification that contains graphical and natural language
descriptions of the software requirements.




v) Review :
Both the software developer and the client conduct a review of the software requirements specification.
Because the specification forms the foundation of the development phase, extreme care is taken in
conducting the review.
The review is first conducted at a macroscopic level. The reviewers attempt to ensure that the
specification is complete, consistent and accurate. In the next phase, the review is conducted at a
detailed level. Here, the concern is on the wording of the specification. The developer attempts to
uncover problems that may be hidden within the specification content.

Fact-Finding Methods:

Fact-finding techniques are used to identify system requirements, through comprehensive interaction
with the users using various ways of gathering information.
There are six methods of Information Gathering which are as follows :

1. Distribute & Collect Questionnaires :
Questionnaires enable the project team to collect information from a large number of
stakeholders conveniently, and to obtain preliminary insight on their information needs.
This information is then used to identify areas that need further research using document
reviews, interviews, and observation.
Questionnaires can be used to answer quantitative questions, such as How many orders do
you enter in a day?
Such questions are called closed-ended questions i.e. questions that have simple,
definitive answers and do not invite discussion or elaboration.
They can be used to determine the users opinion about various aspects of a system (say,
asking the user to rate a particular activity on a scale of 1-5).
Questionnaires, however, do not provide information about processes, work-flow, or
techniques used.
Questions that elicit a descriptive response are best answered using interviews, or
observation.
Such questions that encourage discussion and elaboration are called open-ended
questions.

2. Review Existing Reports, Forms, and Procedure Descriptions :




Two advantages of reviewing existing documents and documentation:
To get a better understanding of processes
To gain knowledge about the industry or the application that needs to be studied.

An analyst requests for and reviews procedural manuals, and work descriptions, in order to
understand business functions.
Documents and reports can also be used in interviews, where forms and reports are used as
visual aid, and working documents are used for discussion.
Discussion can center on use of each form, its objective, distribution, and information
content.
Forms already filled-out with real information ensure a correct understanding of the fields and
data content.
Reviewing existing documentation of existing procedures helps identify business rules, while
written procedures also help in discovering discrepancies and redundancies in the business
processes.
It is essential to ensure that the assumptions and business rules derived from existing
documentation are accurate.

3. Conduct Interviews & Discussions with Users :
Interviewing stakeholders is considered the most effective way to understand business
functions and rules, though it is also the most time-consuming and resource-expensive.
In this method, members of the project team (system analysts) meet with individual groups of
users, in one or multiple sessions in order to understand all processing requirements through
discussion.
An effective interview consists of three parts: (a) Preparing for the interview (b) Conducting the
interview and (c) Following up the interview.

Before an Interview:
Establish objective of interview (what do you want to accomplish through this
interview?)
Determine correct user(s) to be involved (no. of users depends on the objective)
Determine project team members to participate (at least 2)
Build a list of questions and issues to be discussed
Review related documents and materials (list of specific questions, open and closed
ended)
Set the time and location (quiet location, uninterrupted)
Inform all participants of objective, time, and locations (each participant should be
aware of objective of the interview)


During an Interview:
Dress appropriately (show good manners)
Arrive on time (arriving early is a good practice, if long interview, prepare for breaks)



Look for exceptions and error conditions (ask what if questions, ask about
exceptional situations)
Probe for details (ensure complete understanding of all procedures and rules)
Take thorough notes (handwritten note-taking makes user feel that what he has to say
is important to you)
Identify and document unanswered items or open questions (useful for next
interview session)

After an Interview:
Review notes for accuracy, completeness, and understanding (absorb, understand,
document obtained information)
Transfer information to appropriate models and documents (create models for better
understanding after complete review)
Identify areas that need further clarification (keep a log of unanswered questions, such
as those based on policy questions raised by new system, include them in next
interview)
Send thank-you notes if appropriate

4. Observe Business Processes & Work-flow :

Observing business procedures that the new system will support are an excellent way to
understand exactly how the users use a system, and what information they need.
A quick walkthrough of the work area gives a general understanding of the layout of the office,
the need and use of computer equipment, and the general workflow.

Actually observing a user at his job provides details about the actual usage of the computer
system, and how the business processes are carried out in reality.
Being trained by a user and actually performing the job allows one to discover the
difficulties of learning new procedures, the importance of an easy-to-use system, and
drawbacks of the current system that the new system needs to address.

It must be remembered that the level of commitment required by different processes varies
from one process to another.
Also, the analyst must not be a hindrance to the user.

5. Build Prototypes :
Building a prototype implies creating an initial working model of a larger, more complex entity.

Types of prototypes: throwaway, discovery, design, evolving prototypes.
Different phases of the SDLC require different prototypes.
The Discovery Prototype is used in the Planning & Analysis phases to test feasibility and help
identify processing requirements.



The Development Prototype is used in the design, coding and implementation phases, to test
designs, effectiveness of code and workability of software.
Discovery prototypes are usually discarded after the concept has been tested, while an Evolving
prototype is one that grows and evolves and may eventually be used as the final, live system.

Characteristics of Prototypes:

A prototype should be operative i.e. a working model, which may provide the look-and-feel but
may lack some functionality.
It should be focused on a single objective, even if simple prototypes are being merged into
a single large prototype.
It should be built and modified easily and quickly, so as to enable immediate
modification if approach is wrong.

6. Conduct Joint Application Design (JAD) Sessions :
JAD is a technique used to expedite the investigation of system requirements.
Usually, the analysts first meet with the users and document the discussion through notes &
models (which are later reviewed).
Unresolved issues are placed on an open-items list, and are eventually discussed in
additional meetings.

The objective of this technique is to compress all these activities into a shorter series of JAD
sessions with users and project team members.
During a session, all of the fact-finding, model-building, policy decisions, and verification
activities are completed for a particular aspect of the system.

The success of a JAD session depends on the presence of all key stakeholders and their
contribution and decisions.

Validate The Requirements / Requirements Validation :

Requirements validation is a critical step in the development process, usually performed during
requirements engineering or requirements analysis, and again at delivery (the client acceptance test).

Requirements validation criteria:
Complete : All possible scenarios, in which the system can be used, are described, including
exceptional behavior by the user or the system.
Consistent: There are no two functional or nonfunctional requirements that contradict each
other.

Unambiguous : Requirements cannot be interpreted in mutually exclusive ways.




Correct : The requirements represent the client's view.

More requirements validation criteria :
Realistic : Requirements can be implemented and delivered.
Verifiable : Requirements can be checked; this requires an exact description of the requirements.

Problem with requirements validation :
Requirements change very fast during requirements elicitation.

Tool support for managing requirements :
Store requirements in a shared repository
Provide multi-user access
Automatically create a system specification document from the repository.
Allow for change management.
Provide traceability throughout the project lifecycle.
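As a rough illustration of what such tool support implies at the data level, the following is a minimal
sketch of one possible shape for a stored requirement record with traceability links. The field names are
assumptions made for illustration, not the schema of any real requirements-management tool.

#include <stdio.h>

/* One possible shape of a requirement record in a shared repository (hypothetical). */
typedef struct {
    int id;                   /* unique requirement identifier */
    const char *text;         /* the requirement statement itself */
    const char *status;       /* e.g. "proposed", "approved", "implemented" */
    int trace_ids[4];         /* ids of design/test items tracing to this requirement */
} Requirement;

int main(void) {
    Requirement r = { 101, "Print paychecks monthly", "approved", { 12, 17, 0, 0 } };
    printf("REQ-%d [%s]: %s\n", r.id, r.status, r.text);
    return 0;
}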

Structured Walkthroughs :

1. A structured walkthrough is a planned review of a system or its software by persons involved in the
development effort.

2. The participants are generally at the same level in the organization: that is, they are analysts or
programmer-analysts.
Typically, department managers for marketing or manufacturing are not involved in the review even
though they may be the eventual recipients of the system.
3. Sometimes structured walkthroughs are called Peer Reviews because the participants are
colleagues at the same level in the organization.

Characteristics :
1. The purpose of walkthroughs is to find areas where improvement can be made in the system or the
development process.

2. A walkthrough should be viewed by the programmers and analysts as an opportunity to receive
assistance, not as an obstacle to be avoided or tolerated.




3. The review session does not result in the correction of errors or changes in specifications. Those
activities remain the responsibility of the developers. Hence the emphasis is constantly on review,
not repair.

4. The individuals who formulated the design specifications or created the program code are, as might
be expected, part of the review team.

5. A moderator is sometimes chosen to lead the review, although many organizations prefer to have
the analyst or designer who formulated the specifications or program lead the session, since they
have greater familiarity with the item being reviewed. In either case, someone must be responsible
for keeping the review focused on the subject of the meeting.

6. A scribe or recorder is also needed to capture the details of the discussion and the ideas that are
raised.
Since the walkthrough leader or the sponsoring programmers or analysts may not be able to jot
down all the points aired by the participants, appointing another individual to take down all the
relevant details usually ensures a more complete and objective record.
7. The benefits of establishing standards for data names, module determination, and data item size
and type are recognized by systems managers. The time to start enforcing these standards is at the
design stage.
Therefore, they should be emphasized during walkthrough sessions.
8. Maintenance should also be addressed during walkthroughs. Enforcing coding standards,
modularity, and documentation will ease later maintenance needs.

9. It is becoming increasingly common to find organizations that will not accept new software for
installation until it has been approved by software maintenance teams. In such an organization, a
participant from the quality control or maintenance team should be an active participant in each
structured walkthrough.

10.
(i) The walkthrough team must be large enough to deal with the subject of the review in a
meaningful way, but not so large that it cannot accomplish anything.
(ii) Generally no more than 7 to 9 persons should be involved, including the individuals who
actually developed the product under review, the recorder, and the review leader.


11.



a. As a general rule, management is not directly involved in structured walkthrough sessions. Its
participation could actually discourage the review team from speaking out about problems they
see in the project.
b. This is because management presence is often interpreted to mean evaluation.
c. Managers may feel that raising many questions, identifying mistakes or suggesting changes
indicates that the individual whose work is under review is incompetent.
d. It is best to provide managers with reports summarizing the review session rather than to have
them participate.
e. The most appropriate type of report will communicate that a review of the specific project or
product was conducted, who attended, and what action the team took. It need not summarize
errors that were found, modifications suggested, or revisions needed.

12. Structured reviews rarely exceed 90 minutes in length.

The structured walkthrough can be used throughout the systems development process as a constructive
and cost-effective management tool, after the detailed investigation (requirements review), following
design (design review), and during program development (code review and testing review).
Feasibility Analysis
A feasibility study is a preliminary study undertaken to determine and document a project's viability.
The results of this study are used to make a decision whether to proceed with the project, or table it. If it
indeed leads to a project being approved, it will - before the real work of the proposed project starts -
be used to ascertain the likelihood of the project's success. It is an analysis of possible alternative
solutions to a problem and a recommendation on the best alternative. It can, for example, decide
whether order processing can be carried out more efficiently by a new system than by the previous one.
A feasibility study could be used to test the need for a new working system, which may be required because :
The current system may no longer suit its purpose,
Technological advancement may have rendered the current system obsolete,
The business is expanding and the current system cannot cope with the extra work load,
Customers are complaining about the speed and quality of work the business provides,
Competitors are now winning a big enough market share due to an effective integration of a
computerized system.

Within a feasibility study, seven areas must be reviewed, including those of a Needs Analysis, Economics,
Technical, Schedule, Organizational, Cultural, and Legal.


1. Operational Feasibility :
It involves the following two tests:
Understanding whether the problem is worth solving and whether the solution to the
problem will work out, by analyzing the following criteria: (PIECES)
(a) Performance (b) Information (c) Economy (d) Control (e) Effectiveness (f) Service.




Getting the management's and end-users' views on the solution by analyzing the following:

(a) Will the current working environment change?
(b) How do the end users feel about their role in the new system?
(c) Would the end-users resist the new system?

2. Organizational & Cultural Feasibility :

The new system must fit into the work-environment of the organization.
It must also fit with the culture of the organization.
It should not depart dramatically, from existing norms.

It has to deal with issues such as:
Low computer literacy
Perceived loss of control by staff or management
Fear of change of job responsibility
Reversal of longstanding work procedures
Fear of loss of job due to increased automation
It essentially involves identifying factors that might prevent the effective use of the new system,
thus resulting in loss of business benefits.
Such factors can be tackled with high user involvement during the system's development and well-
planned training procedures and proper orientation after the system's completion.

3. Technical Feasibility :

This involves testing the proposed technological requirements and the available expertise.
A company may implement new technology in the new system, or upgrade the technology of an
existing system.
In some cases, the scope and approach of the project may need to be changed to
restructure and reduce the technological risk.
When the risks are identified, the solutions may include conducting additional training,
hiring consultants, hiring more experienced employees.
A realistic assessment will help identify technological risks early and permit corrective
measures to be taken.

4. Schedule Feasibility :

It involves assessing if the project can be completed according to the proposed project schedule.
Every schedule requires many assumptions and estimates about the project, as the needs and
scope of the system may not be known at this stage.
Sometimes, a project may need to be completed within a deadline given by the upper
management.
Milestones should be developed within the project schedule to assess the ongoing risk of the
schedule slipping.
Deadlines should not be considered during project schedule construction, unless they are
absolute.




5. Resource Feasibility :

The availability of resources is a crucial assessment in terms of project feasibility.
The primary resource consists of the members of the team.
Development projects require the involvement of system analysts, system technicians, users.

Three risks are involved here:
(a) Required people may not be available to the team when needed.
(b) People who are assigned, may not have the necessary skills.
(c) People already working on the project may leave midway.

Also, adequate computer resources, physical facilities, and support staff are valuable
resources.
Delays in making these resources available can affect the project schedule.

6. Economic Feasibility :

Economic feasibility consists of two tests:
(a) do the anticipated benefits exceed the projected cost of development?

(b) does the organization have adequate cash flow to fund the project?

The new system must increase income, either through cost saving, or by increased
revenues.
The economic feasibility of a system is usually assessed using one of the following methods:

(a) Cost/Benefit Analysis.
(b) Calculation of the Net Present Value (NPV)
(c) Payback Period, or Breakeven Point
(d) Return on Investment

Cost estimation :

Software cost estimation is a continuing activity, which starts at the proposal stage and continues
through the lifetime of the project. There are several different techniques of software cost
estimation. They are:

i) Expert judgment :
One or more experts on the software development techniques to be used, and on the
application domain, are consulted. They each estimate a project cost and the final cost is
arrived at by consensus.

ii) Estimation by analogy :
This technique is applicable when other projects in the same application domain have been completed.
The cost of a new project is estimated by analogy with these completed projects.




iii) Parkinson's law :
It states that work expands to fill the time available. In software costing, it means that the
cost is determined by available resources rather than by objective assessment.

iv) Pricing to win :
The software cost is estimated to be whatever the customer has available to spend on the
project. The estimated effort depends on the customer's budget and not on the software
functionality.

v) Top-down estimation:
A cost estimate is established by considering the overall functionality of the project and how that
functionality is provided by interacting functions. Cost estimates are made on the basis of logical
function rather than component implementation of the function.

vi) Bottom-up estimation :
The cost of each component is estimated. All these costs are added to produce a final cost
estimate.


Cost/Benefit Analysis

It is the analysis used to compare costs and benefits to see whether the investment in the development
of a new system will be more beneficial than costly.

Cost And Benefits Categories :

In developing cost estimates for a system, we need to consider several cost elements.
Following are the types of costs that are analyzed :

Hardware Costs : Costs related to actual purchases or leasing of computers and peripheral
devices.

Personnel Costs : Costs including staff salaries and benefits (staff includes system analysts,
programmers, end-users, etc.).

Facility Costs: Costs involved in the preparation of the physical site where the computer system
will be operating (wiring, flooring, air conditioning, etc.).

Developmental Costs: Costs involved in the development of the system (hardware costs,
personnel costs, facility costs).

Operating Costs : Costs incurred after the system is put into production i.e. the day-to-day
operations of the system (salaries of people using the application, etc.).

A system is also expected to provide benefits. The first task is to identify each benefits and then
assign a monetary value to it for cost/benefit analysis.




Benefits may be tangible and intangible or direct and indirect. The two major benefits are as follows
:

Improving performance : The performance category emphasizes improvement in the accuracy of,
or access to, information and easier access to the system by authorized users.
Minimizing the cost of processing : Minimizing costs through an efficient system (error control
or reduction of staff) is a benefit that should be measured and included in cost/benefit
analysis.

Procedure For Cost/Benefit Determination:

There is a difference between expenditure and investment. We spend to get what we need, but we
invest to realize a return on the investment. Building a computer-based system is an investment. Costs
are incurred throughout its life cycle. Benefits are realized in the form of reduced operating costs,
improved corporate image, staff efficiency or revenue.

Cost/Benefit Analysis is a procedure that gives a picture of the various costs, benefits and rules
associated with a system.

The determination of the costs and benefits entails the following steps :
1. Identify the costs and benefits pertaining to a given project.
2. Categorize the various costs and benefits for analysis.
3. Select a method of evaluation.
4. Interpret the result of the analysis.
5. Take action.


1. Costs And Benefits Identification :
Certain costs and benefits are more easily identifiable than others. For example, direct costs,
such as the price of a hard disk, are easily identified from company invoice payments or
canceled checks.
Direct benefits often relate one-to-one to direct costs, especially savings from reducing costs in
the activity in question.
Other direct costs and benefits, however, may not be well defined, since they represent
estimated costs or benefits that have some uncertainty. An example of such a cost is the reserve
for bad debt. It is a real cost, although its exact amount is not immediately discernible.
A category of costs or benefits that is not easily discernible is opportunity costs and opportunity
benefits.
These are the costs or benefits forgone by selecting one alternative over another. They do not
show up in the organization's accounts and therefore are not easy to identify.

2. Classification of Costs and Benefits :
The next step in cost and benefit determination is to categorize costs and benefits. They may be
tangible or intangible, direct or indirect, fixed or variable.
Tangible costs and benefits are those that can be measured in monetary terms, whereas intangible
ones (such as an improved corporate image) are difficult to quantify.
Direct costs and benefits are directly attributable to the system, whereas indirect ones arise as
overhead or as by-products of the system.
Fixed costs and benefits are constant regardless of use, whereas variable ones change in proportion
to use or volume.




3. Select Evaluation Method :

When all financial data have been identified and broken down into cost categories, the analyst must
select a method of evaluation. Several methods are available.
The common ones are as follows:

i.] Net Benefit Analysis :
Net Benefit simply involves subtracting total costs from total benefits.
It is easy to calculate, easy to interpret, and easy to present.
The main drawback is that it does not account for the time value of money and does not
discount future cash flow.

Cost/Benefit      Year 0      Year 1      Year 2      Total
Costs             $-1,000     $-2,000     $-2,000     $-5,000
Benefits          0           650         4,900       5,550
Net benefits      $-1,000     $-1,350     $2,900      $550

The above table illustrates the use of net benefit analysis. Cash flow amounts are shown for three time
periods : Period 0 is the present period, followed by two succeeding periods. The negative numbers
represent cash outlays. A cursory look at the numbers shows that the net benefit is $550.
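The same arithmetic can be checked with a few lines of C; this is a minimal sketch using the cash flows
from the table above.

#include <stdio.h>

int main(void) {
    double costs[]    = { -1000.0, -2000.0, -2000.0 };  /* cash outlays per period */
    double benefits[] = {     0.0,   650.0,  4900.0 };
    double net_total  = 0.0;
    for (int year = 0; year < 3; ++year)
        net_total += costs[year] + benefits[year];
    printf("Net benefit = $%.0f\n", net_total);  /* prints Net benefit = $550 */
    return 0;
}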

The time value of money is extremely important in evaluation processes. Let us explain what it
means. If you were faced with an opportunity that generates $3000 per year, how much would you be
willing to invest? Obviously, you'd like to invest less than $3000. To earn the same money five years
from now, the amount of investment would be even less. What is suggested here is that money has a
time value. Today's dollar and tomorrow's dollar are not the same. The time lag accounts for the time
value of money.
The time value of money is usually expressed in the form of interest on the funds invested to realize
the future value. Assuming compounded interest, the formula is:
F = P(1+i)^n
Where
F = Future value of an investment.
P = Present value of the investment.
i = Interest rate per compounding period.
n = Number of years.
For example, $3000 invested in Treasury notes for three years at 10% interest would have a value at
maturity of :
F = $3000(1+0.10)^3
  = 3000(1.331)
  = $3993
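A minimal sketch that checks this future-value example in C (compile with -lm for the math library):

#include <stdio.h>
#include <math.h>

/* F = P(1+i)^n */
double future_value(double p, double i, int n) {
    return p * pow(1.0 + i, n);
}

int main(void) {
    /* $3000 at 10% compound interest for 3 years */
    printf("F = $%.2f\n", future_value(3000.0, 0.10, 3));  /* prints F = $3993.00 */
    return 0;
}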

ii.] Present Value Analysis :



In developing long-term projects, it is often difficult to compare today's costs with the full value
of tomorrow's benefits. Certain investments offer benefit periods that vary with different
projects.
Present Value analysis controls for these problems by calculating the costs and benefits of the
system in terms of today's value of the investment and then comparing across alternatives.
A critical factor to consider in computing present value is a discount rate equivalent to the
forgone amount that the money could earn if it were invested in a different project. It is similar
to the opportunity cost of the funds being considered for the project.

Example: Suppose that $3000 is to be invested in a microcomputer for our safe deposit tracking
system and the average annual benefit is $1500 for the four-year life of the system. The
investment has to be made today, whereas the benefits are in the future. We compare present
values to future values by considering the time value of the money to be invested. The
amount that we are willing to invest today is determined by the value of the benefits at the end
of a given period (year). This amount is called the present value of the benefit.
To compute the present value, we take the formula for future value (F = P(1+i)^n) and solve for
the present value (P) as follows :
P = F/(1+i)^n
So the present value of $1500 received at 10% interest at the end of the fourth year is :
P = 1500/(1+0.10)^4
  = 1500/1.4641
  = $1024.52
That is, if we invest $1024.52 today at 10% interest, we can expect to have $1500 in 4 years.
This calculation can be repeated for each year where a benefit is expected.


Year   Estimated Future Value   Discount Factor (10%)   Present Value   Cumulative Present Value of Benefits
1      $1500                    0.909                   $1363.64        $1363.64
2      $1500                    0.826                   $1239.67        $2603.31
3      $1500                    0.751                   $1126.97        $3730.28
4      $1500                    0.683                   $1024.52        $4754.80

The discount factor for year n is 1/(1+i)^n, so each present value is P = F x 1/(1+i)^n = F/(1+i)^n.
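The table above can be reproduced with a short C program; this is a minimal sketch using the same
$1500 annual benefit and 10% discount rate (compile with -lm):

#include <stdio.h>
#include <math.h>

int main(void) {
    const double benefit = 1500.0;   /* estimated future value per year */
    const double rate    = 0.10;     /* discount rate */
    double cumulative    = 0.0;

    printf("Year  Factor  Present Value  Cumulative PV\n");
    for (int year = 1; year <= 4; ++year) {
        double factor = 1.0 / pow(1.0 + rate, year);  /* 1/(1+i)^n */
        double pv = benefit * factor;                 /* P = F/(1+i)^n */
        cumulative += pv;
        printf("%4d  %.3f  $%10.2f  $%10.2f\n", year, factor, pv, cumulative);
    }
    return 0;
}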





iii.] Net Present Value (NPV) Calculation :
The net present value is the present value of the rupee/dollar (currency) benefits minus the present
value of the costs for investments such as a new system.
Two concepts are involved:
All benefits and costs are calculated in terms of today's rupee/dollar (currency) values, i.e.
present values.
Benefits and costs are combined to give a net value.
It essentially tells you how much should be invested today, in order to achieve a
predetermined amount of benefit at a predetermined later point in time.
The following two terms hold great importance in this calculation:
Discount rate : It is the annual percentage rate by which an amount of money is discounted to bring it
to a present value.
Discount factor : It is the accumulation of yearly discounts based on the discount rate.
Formula : If the present value is PV, the amount received in the future is FV, the discount interest
rate is i, the discount factor is F, and the number of years is n, then:

F = 1/(1+i)^n
PV = FV x F = FV/(1+i)^n

For example, if the future amount is Rs. 1500, and the number of years is 4, at say a 10% discount
rate, then the present value can be calculated as:

PV = 1500/(1+0.10)^4 = 1500/1.4641 = Rs. 1024.5

i.e. today, the investment should be Rs. 1024.5, to get Rs. 1500 after 4 years.

iv.] Payback Period/Breakeven Period Calculation:
The payback period is the period at which rupee/dollar (currency) benefits offset the
rupee/dollar (currency) costs.
It is the point in time when the increased cash flow exactly pays off the costs of
development and operation.
When the cumulative net value becomes positive, that is the year in which payback occurs, as
the sketch below illustrates.
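A minimal sketch of the payback calculation, using assumed figures (a $3000 up-front cost returning
$1500 per year); the numbers are illustrative only, not taken from the cost tables above.

#include <stdio.h>

int main(void) {
    /* net cash flow per year = benefits - costs; year 0 is the investment */
    double net[] = { -3000.0, 1500.0, 1500.0, 1500.0, 1500.0 };
    int years = sizeof net / sizeof net[0];
    double cumulative = 0.0;

    for (int year = 0; year < years; ++year) {
        cumulative += net[year];
        printf("Year %d: cumulative net value = $%.2f\n", year, cumulative);
        if (year > 0 && cumulative >= 0.0) {
            printf("Payback occurs in year %d.\n", year);
            break;
        }
    }
    return 0;
}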




v.] Return on Investment (ROI) Calculation:
Return on investment is a measure of the percentage gain received from an investment such as
a new system; it expresses the net gain as a percentage of the total costs over a specified
time period.
This time period can be the expected life of the investment, or it could be an arbitrary period.
Formula : If the estimated time-period benefits are EB and the estimated time-period costs are EC,
then ROI = (EB - EC)/EC. Here, EC is the sum of the developmental costs (DC) and the total present
value of the operating costs (PC).
If EB = 60,00,000; DC = 12,00,000; PC = 9,00,000, then
ROI = [60,00,000 - (12,00,000 + 9,00,000)] / (12,00,000 + 9,00,000)
    = 39,00,000 / 21,00,000
    = 1.857, i.e. approximately 185%
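A minimal sketch checking the ROI arithmetic above (amounts in rupees):

#include <stdio.h>

int main(void) {
    double eb = 6000000.0;        /* estimated time-period benefits (EB) */
    double dc = 1200000.0;        /* developmental costs (DC) */
    double pc =  900000.0;        /* total present value of costs (PC) */
    double ec = dc + pc;          /* estimated time-period costs (EC) */
    double roi = (eb - ec) / ec;  /* ROI = (EB - EC)/EC */
    printf("ROI = %.1f%%\n", roi * 100.0);  /* prints ROI = 185.7% */
    return 0;
}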

4. Interpret Results of the Analysis And Final Action :



When the evaluation of the project is complete, the results have to be interpreted. This entails
comparing actual results against a standard or the result of an alternative investment.
The interpretation phase as well as the subsequent decision phase is subjective, requiring
judgment and intuition.
Depending on the level of uncertainty, the analyst may be confronted with a single known value
or a range of values.
In either case, simpler measures such as net benefit analysis are easier to calculate and present
than other measures, although they do not discount future cash flows.
The decision to adopt an alternative candidate system can be highly subjective, depending on the
analyst's or end users' confidence in the estimated costs and benefits and the magnitude of the
system.


In summary, cost/benefits analysis is a tool for evaluating projects rather than a replacement of the
decision maker. In real-life business situations, whenever a choice among alternatives is considered,
cost/benefits is an important tool.
Like any tool, however, it has problems:

Valuation problems: Intangible costs and benefits are difficult to quantify and tangible costs are
generally more pronounced than tangible benefits. In most cases, then, a project must have
substantial intangible benefits to be accepted.

Distortion problems: There are two ways of distorting the results of cost/benefit analysis. One is the
intentional favoritism of an alternative for political reasons. The second is when data are incomplete
or missing from the analysis.
Completeness problems: Occasionally an alternative is overlooked that compromises the quality of
the final choice. Furthermore, the costs related to cost/benefit analysis may be on the high side or
not enough costs may be considered to do a complete analysis. In either case, the reliability of the
final choice is in doubt.

List of Deliverables:

When the design of an information system is complete, the specifications are documented in a form that
outlines the features of the application. These specifications are termed the deliverables or the
design book by the system analysts.
No design is complete without the design book, since it contains all the details that must be included in
the computer software, datasets & procedures that the working information system comprises.

The deliverables include the following:
1. Layout charts :
Input & output descriptions showing the location of all details shown on reports, documents, &
display screens.



2. Record layouts :
Descriptions of the data items in transaction & master files, as well as related database schematics.
3. Coding systems :
Descriptions of the codes that explain or identify types of transactions, classification, & categories of
events or entities.
4. Procedure Specification :
Planned procedures for installing & operating the system when it is constructed.
5. Program specifications :
Charts, tables & graphic descriptions of the modules & components of computer software & the
interaction between each as well as the functions performed & data used or produced by each.
6. Development plan :
Timetables describing elapsed calendar time for development activities; personnel staffing plans for
systems analysts, programmers, & other personnel; preliminary testing & implementation plans.
7. Cost Package :
Anticipated expenses for development, implementation and operation of the new system, focusing
on such major cost categories as personnel, equipment, communications, facilities and supplies.


UNIT 4
Software Project Planning

Topics
Project Planning
Project Size Estimation Metric
Decomposition Techniques
Software Estimation
Analytical Estimation Techniques
The Putnam Resource Allocation Model

Project Planning

If the project is found to be feasible, the s/w Project Manager undertakes the project planning
activity.
The Project Planning is undertaken & completed even before any development activity starts.
The project planning consists of the following essential activities:
Estimating some basic attributes of the project:
Cost
Duration
Effort
Scheduling manpower & other resources
Staff organization & staffing plans
Risk identification, analysis & abatement planning
Miscellaneous plans such as Quality Assurance Plan, Configuration Management Plan,
etc.
Precedence among planning activities


The Project Manager documents the plan in a Software Project Management Plan (SPMP)
document.
Organization of the SPMP Document
1. Introduction
(a) Objective
(b) Major Functions
(c) Performance Issues
(d) Management & Technical Constraints
2. Project Estimates
(a) Historical Data Used
(b) Estimation Techniques Used
(c) Effort, Resource, Cost & Project Duration Estimation

3. Schedule
(a) Work Breakdown Structure
(b) Task Network Representation
(c) Gantt Chart Representation
(d) PERT Chart Representation
4. Project Resources
(a) People
(b) Hardware & Software
(c) Special Resources
5. Staff Organization
(a) Team Structure
(b) Management Reporting
6. Risk Management Plan
(a) Risk Analysis



(b) Risk Identification
(c) Risk Estimation
(d) Risk Abatement Procedure
7. Project Tracking & Control Plan
8. Miscellaneous Plans
(a) Quality Assurance Plan
(b) Configuration Management Plan
(c) Validation & Verification
(d) System Testing Plan
(e) Delivery, Installation & Maintenance Plan

Project Size Estimation Metric:

1. Measures
2. Metrics
3. Indicators
4. Line of Code (LOC)
5. Function Pair Metric
6. Features Point Metric

1. Measures
A Measure provides a quantitative indication of the extent, amount, dimension,
capacity or size of a product or process.
Measurement is the act or process of determining a measure.



Figure 1 shows a measure. Without a trend to follow or an expected value to compare against, a
measure gives little or no information. It especially does not provide enough information to make
meaningful decisions.

2. Metric
A quantitative measure of the degree to which a system, component or process
possesses a given attribute.
A calculated or composite indicator based upon two or more measures.
Project Metrics : The project metrics derived from the measures are used by Project
Managers & the s/w team to adapt project work flow & technical activities.
The metrics collected from past projects are used as a basis from which the effort & time
estimates are made for the current s/w work.
The Project Manager uses this data to monitor & control the project progress.

Figure 2 shows a metric. A metric is a comparison of two or more measures, in this case body
temperature over time.

3. Indicator
An Indicator is a metric or combination of metrics that provides insight into the software
process, a software project or the product itself.
A software engineer collects measures & develops metrics so that indicators will be obtained.
It enables the project manager or software engineers to adjust the process, the project or
the product to make things better.
This allows the decision makers to make a quick comparison that can provide a perspective
as to the "health" of a particular aspect of the project.




Figure 3 illustrates an indicator. An indicator generally compares a metric with a baseline or
expected result. In this case, being able to compare the change in body temperature to the
normal body temperature makes a big difference in determining what kind of treatment, if
any, may be needed.

Another example of an indicator is the activation of a smoke detector in your home; it is set
to a prescribed state and sounds an alarm if the number of smoke particles in the air
exceeds the specified conditions for the state for which the detector is set.
In software terms, an indicator may be a substantial increase in the number of defects found
in the most recent release of code.

Project Indicators : The project indicators enable a s/w Project Manager to:
Assess the status of an ongoing project
Track potential risks
Uncover problem areas before they go critical
Adjust work flow or tasks
Evaluate the project team's ability to control the quality of s/w work products
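The measure/metric/indicator chain can be made concrete with a small sketch: defect count and code
size are measures, defect density (defects per KLOC) is a metric derived from them, and comparing that
metric against a baseline from past projects yields an indicator. The numbers below are illustrative
assumptions.

#include <stdio.h>

int main(void) {
    int defects_found = 48;       /* measure: defects found in the release */
    int lines_of_code = 12000;    /* measure: size of the release */
    double baseline = 3.0;        /* metric value typical of past projects */

    /* metric: a composite of two measures */
    double defect_density = defects_found / (lines_of_code / 1000.0);

    /* indicator: the metric compared against an expected value */
    if (defect_density > baseline)
        printf("Indicator: defect density %.1f/KLOC exceeds baseline %.1f/KLOC\n",
               defect_density, baseline);
    else
        printf("Indicator: defect density %.1f/KLOC is within baseline\n",
               defect_density);
    return 0;
}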

4. Line Of Code (LOC)

This is the simplest among all the metrics available to estimate the project size.
It is a software metric used to measure the amount of code in a software program, i.e. using
this metric, the project size is estimated by counting the number of source instructions in
the developed program.
While counting the number of source instructions, the lines used for commenting the code,
the header lines & the blank lines are ignored.
Determining the LOC count at the end of the project is a very simple job, but accurate
estimation of the LOC count at the beginning of a project is very difficult.
The Project Manager divides the problem into modules, & each module into sub-modules &
so on, until the size of the different leaf-level modules can be approximately predicted. By
using the estimates of the lowest-level modules, the project manager arrives at the total size
estimation.



Consider this snippet of C code as an example of the ambiguity encountered when
determining LOC:
for (i=0; i<100; ++i) printf("hello"); /* How many lines of code is this? */

In this example we have:
1 Physical Lines of Code
2 Logical Lines of Code (for statement and printf statement)
1 Comment Line

Depending on the programmer and/or coding standards, the above "line of code" could be,
and usually is, written on many separate lines:
for (i=0; i<100; ++i)
{
printf("hello");
} /* Now how many lines of code is this? */



In this example we have:
4 Physical Lines of Code
2 Logical Lines of Code (for statement and printf statement)
1 Comment Line
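
A counting rule like the one above is easy to automate. The following is a minimal sketch in C
(not from the syllabus text) that counts physical, blank and comment lines read from standard
input; it assumes comments occupy whole lines on their own, which real LOC counters handle
more carefully:

#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Minimal LOC-counter sketch: blank lines & full-line comments are
   ignored, as per the counting rules described above. */
int main(void)
{
    char line[1024];
    int physical = 0, blank = 0, comment = 0, loc = 0;

    while (fgets(line, sizeof line, stdin) != NULL) {
        char *p = line;
        physical++;
        while (isspace((unsigned char)*p))
            p++;                                 /* skip leading whitespace */
        if (*p == '\0')
            blank++;                             /* blank line - not counted */
        else if (strncmp(p, "//", 2) == 0 || strncmp(p, "/*", 2) == 0)
            comment++;                           /* comment line - not counted */
        else
            loc++;                               /* counted source line */
    }
    printf("Physical: %d  Blank: %d  Comment: %d  LOC: %d\n",
           physical, blank, comment, loc);
    return 0;
}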

Drawbacks :
LOC gives a numerical value of problem size that can vary widely with individual
coding style, since different programmers lay out their code in different ways.
It only considers the number of source code lines & not the overall complexity of
the problem & the effort needed to solve it.
The LOC measure correlates poorly with the quality & efficiency of the code. A large
code size does not necessarily imply better quality or higher efficiency.
It is very difficult to accurately estimate the LOC of the final product from the problem
specification. It can be accurately computed only after the code has been fully
developed.
LOC is particularly ineffective at comparing programs written in different languages
unless adjustment factors are applied to normalize languages. Various computer
languages balance brevity and clarity in different ways; as an extreme example, most assembly
languages would require hundreds of lines of code to perform the same task as a
few characters in APL. For illustration, a "hello world" program written in C is only a
few lines long, while the same program written in COBOL requires noticeably more lines.

5. Function Point Metric:

The Function oriented metric focuses on program functionality, i.e. the functionality
provided by the system is used as a measure for estimating the project size.
For this, 5 information domain characteristics are determined & counts for each are
recorded in a table.
The information is entered in the following table to get the total count. The standard
weighting factors (the worked example below uses the average values) are :

Information Domain Characteristic     Simple   Average   Complex
Number of user inputs                    3        4         6
Number of user outputs                   4        5         7
Number of user inquiries                 3        4         6
Number of files                          7       10        15
Number of external interfaces            5        7        10

1. Number of User Inputs :
Each user input that provides distinct application oriented data to the software
is counted.
2. Number of user outputs :
Each user output that provides application oriented information to the user is
counted. Eg. Reports, Screens, Error Messages etc.
3. Number of User Inquiries :
It is an on-line input that results in the generation of some immediate s/w
response in the form of on-line output.
4. Number of Files :
Each Logical Master File (i.e. a logical grouping of data that may be one part of a
large database or a separate file) is counted.
5. Number of External Interfaces :
All machine readable interfaces that are used to transmit information to
another system are counted.

Once these data have been collected, a complexity value associated with each count is
determined.
Organizations that use function point methods develop criteria for determining whether a
particular entry is simple, average or complex.
Once the total count is obtained, the Function point (FP) is calculated by using following
relationship :
FP = count total * [0.65 + 0.01 * Sum(Fi)]
Here count total = sum of all FP entries obtained from above table.




Fi (i = 1 to 14) are complexity adjustment values based on responses to the questions (1-14) given
below.
1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line entry?
7. Does the on-line data entry require the input transaction to be built over multiple
screens or operations?
8. Are the inputs, outputs, files, or inquiries complex?
9. Is the internal processing complex?
10. Is the code designed to be reusable?
11. Are master files updated on-line?
12. Are conversion and installations included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?

Each of these questions is answered using a scale that ranges from 0 to 5
0 - No influence
1 - Incidental
2 - Moderate
3 - Average
4 - Significant
5 - Essential

Drawback :
It does not take into account the algorithmic complexity of the software. It implicitly
assumes that the effort required to design & develop any two functionalities of the
system is the same. It does not distinguish between the difficulty levels of
developing the various functionalities.

Function Points may be used to compute the following important metrics :
Productivity = FP/person-months
Quality = Defects/FP
Cost = Rupees/FP
Documentation = Pages of Documents per FP

Example:-
Consider a project with following functional units:
Number of User Inputs = 50
Number of User Outputs = 40



Number of user enquiries = 35
Number of user files = 6
Number of external interfaces = 4
Assume all complexity adjustment factors & weighting factors are average.
Compute the function point for the project.

Formula : FP = count total * [0.65 + 0.01 * Sum(Fi)]

Count total = 50*4 + 40*5+ 35 *4 + 6*10 + 4*7
= 200 + 200 + 140 + 60 + 28
= 628
CAF = 0.65 + 0.01* (14*3)
= 0.65+0.42 = 1.07
FP = 628 * 1.07
= 671.96 = 672
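
The same calculation can be expressed directly in code. A minimal C sketch of the FP
computation above (the average weighting factors 4, 5, 4, 10 & 7 and the grade of 3 for all
14 Fi are taken from the worked example):

#include <stdio.h>

/* FP = count_total * (0.65 + 0.01 * Sum(Fi)) */
int main(void)
{
    int inputs = 50, outputs = 40, inquiries = 35, files = 6, interfaces = 4;
    int count_total = inputs * 4 + outputs * 5 + inquiries * 4
                    + files * 10 + interfaces * 7;        /* 628 */
    double sum_fi = 14 * 3;              /* all 14 factors graded average (3) */
    double fp = count_total * (0.65 + 0.01 * sum_fi);
    printf("Count total = %d, FP = %.2f\n", count_total, fp);  /* FP = 671.96 */
    return 0;
}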
6. Feature Point Metric

To overcome the drawback of the Function Point Metric, the Feature Point Metric has been
proposed.
The Feature Point Metric incorporates an extra parameter: algorithmic complexity.
This parameter ensures that the size computed using the Feature Point Metric reflects the
fact that the greater the complexity of a function, the greater the effort required to develop
it, and therefore the larger its size compared to simpler functions.
Both the Function Point & Feature Point Metrics are language independent & can be easily
computed from the SRS document.

Decomposition Techniques:
Software Project Estimation is a form of problem solving, & in most cases, the problem to be
solved is too complex to be considered in one piece. For this reason we decompose the
problem, re-characterizing it as a set of smaller problems.




Software Sizing:
The accuracy of a s/w project estimate is predicated on a number of things :
The degree to which the planner has properly estimated the size of the
product to be built
The ability to translate the size estimate into human effort, calendar time &
dollars
The degree to which the project plan reflects the abilities of the software
team
The stability of product requirements & the environment that supports the s/w
engineering effort.

There are only two software sizing measures widely used today
1. Lines of Code (LOC or KLOC)
Lines of Code is a measure of the size of the system after it is built. It is very
dependent on the technology used to build the system, the system design,
and how the programs are coded. The major disadvantages of LOC are that
systems coded in different languages cannot be easily compared and
efficient code is penalized by having a smaller size.
2. Function Points (FP)
In contrast to LOC, Function Points is a measure of delivered functionality
that is relatively independent of the technology used to develop the system.
FP is based on sizing the system by counting external components (inputs,
outputs, external interfaces, files and inquiries.)
Putnam & Myers suggest four different approaches to the sizing problem. These are as follows:

1. Fuzzy Logic Sizing :
This approach uses the approximate reasoning techniques that are the cornerstone of
fuzzy logic.
In this, the planner must identify the type of application, establish its magnitude on a
qualitative scale & then refine the magnitude.
Although personal experience can be used, the planner should also have access to historical
database of projects so that the estimates can be compared to actual experience.

2. Function Point Sizing :
The planner develops estimates of the information domain characteristics (inputs, outputs,
external interfaces, files and inquiries.)

3. Standard Component Sizing :
s/w is composed of a number of different Standard Components that are generic to a
particular application area.
Eg- the standard components for an information system are subsystems, modules, screens,
reports, interactive programs, batch programs, files & LOC



The project planner estimates the number of occurrences of each standard component &
then uses historical project data to determine the delivered size per standard component.
For eg, if the planner estimates that 18 reports will be generated, and historical data indicates
that 967 lines of COBOL are required per report, the planner can estimate that roughly 17,000
LOC will be required for the report components.

4. Change Sizing :
This approach is used when a project encompasses the use of existing software that must be
modified in some way as a part of a project.
The planner estimates the number & type of modifications that must be accomplished. (e.g.
reuse, adding code, changing code, deleting code)

Software Estimation:

The estimation of various project parameters is a basic project planning activity.
There are three broad categories of estimation techniques:
1. Empirical estimation technique
2. Heuristic technique
3. Analytical estimation technique

Empirical Estimation Techniques

Empirical Estimation Techniques are based on making an educated guess of the project
parameters.
While using this technique, prior experience with the development of similar product is
helpful.
Two popular Empirical estimation techniques are:
1. Expert Judgment Technique
2. Delphi Cost Estimation

1. Expert Judgment Technique:

Most widely used estimation technique.
In this approach, an expert makes an educated guess of the problem size after analyzing the
problem thoroughly.
The expert estimates the cost of the different components of the system & then combines
them to arrive at the overall estimate.



This technique is subject to human error & individual bias. Also, it is possible that the
expert may overlook some factors inadvertently.
It is also possible that an expert making an estimate may not have experience & knowledge
of all aspects of a project.
A more refined form of expert judgment is the estimation made by a group of experts.

2. Delphi Cost Estimation:

It is carried out by a team composed of a group of experts & a coordinator.
The coordinator provides each estimator with a copy of the SRS document & a form for
recording his/her cost estimates. Estimators complete their individual estimates & submit them
to the coordinator.
The coordinator prepares & distributes the summary of the responses of all the estimators.
Based on the summary, the estimators re-estimate. This process is iterated for several
rounds.
No discussion among the estimators is allowed during the entire estimation process.
After the completion of several iterations of estimations, coordinator compiles the result &
prepares the final estimate.

Heuristic Estimation Technique
COCOMO (Constructive Cost Model)
Proposed by Boehm who postulated that any s/w development project can be classified
into one of the following 3 categories based on the development complexity :

Mode            Project Size     Nature Of Project                          Deadline    Development Environment

Organic         2-50 KLOC        Small size project, small size team,       Not tight   Familiar & in-house
                                 experienced developers in a familiar
                                 environment. Eg - application programs
                                 such as payroll, inventory projects.

Semi-detached   50-300 KLOC      Medium size project, medium size team,     Medium      Medium
                                 average previous experience on similar
                                 projects. Eg - utility programs such as
                                 compilers, linkers.

Embedded        Over 300 KLOC    Large projects, real time systems,         Tight       Complex h/w & customer
                                 complex interfaces, very little                        interfaces required
                                 previous experience. Eg - ATM, air
                                 traffic control, o.s. etc.

Basic COCOMO Model

The Basic COCOMO Model gives an approximate estimate of the project parameters.
It is given by the following expressions :

Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 Months

Where,
KLOC - estimated size of the s/w product expressed in Kilo Lines Of Code
a1, a2, b1, b2 - constants for each category of s/w products
Tdev - estimated time to develop the s/w, expressed in months
Effort - total effort required to develop the s/w product, expressed in person-months (PM)

The values for a1, a2, b1, b2 are :

Software Project    a1     a2      b1     b2
Organic             2.4    1.05    2.5    0.38
Semi-detached       3.0    1.12    2.5    0.35
Embedded            3.6    1.20    2.5    0.32

When Effort & Development Time are known, the Average Staff Size to complete the project
may be calculated as :
Average Staff Size (SS) = Effort / Tdev persons






Productivity (P) = KLOC / Effort, expressed in KLOC/PM

Example:
Assume that the size of an organic type s/w project has been estimated to be 32,000 LOC. The avg. salary
of s/w engineers is Rs. 15,000 per month. Determine the effort required to develop the s/w product &
the development time.

From the basic COCOMO estimation formula for organic s/w :
Effort = a1 * (KLOC)^a2
       = 2.4 * (32)^1.05
       = 91 PM
Tdev = b1 * (Effort)^b2
     = 2.5 * (91)^0.38
     = 14 months

Cost required to develop the product = 14 * 15,000 = Rs. 210,000
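
The same estimate can be reproduced with a small C sketch (compile with -lm); the constants
are the organic-mode values from the table above:

#include <stdio.h>
#include <math.h>

/* Basic COCOMO: Effort = a1*(KLOC)^a2, Tdev = b1*(Effort)^b2 */
int main(void)
{
    double a1 = 2.4, a2 = 1.05, b1 = 2.5, b2 = 0.38;
    double kloc = 32.0;
    double effort = a1 * pow(kloc, a2);        /* ~91 person-months */
    double tdev   = b1 * pow(effort, b2);      /* ~14 months        */
    double ss     = effort / tdev;             /* average staff size */
    printf("Effort = %.0f PM, Tdev = %.0f months, SS = %.1f persons\n",
           effort, tdev, ss);
    return 0;
}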



Intermediate COCOMO
The Basic COCOMO model considers effort & development time as functions of the product size alone.
In order to obtain an accurate estimation of effort & project duration, the effect of all relevant
parameters must be taken into account.
The intermediate COCOMO model recognizes this fact & refines the initial estimate obtained
through basic COCOMO expression by using a set of 15 cost drivers based on various attributes
of s/w development.

1) Product attributes
a) Required software reliability.
b) Complexity of the project.
c) Size of application database.
2) Hardware attributes
a) Run-time performance constraints.



b) Volatility of the virtual machine environment.
c) Required turnaround time.
d) Memory constraints.
3) Personnel attributes
a) Analyst capability.
b) Software engineer capability.
c) Virtual machine experience.
d) Application experience.
e) Programming language experience.
4) Project attributes
a) Application of software engineering methods.
b) Use of software tools.
c) Required development schedule.

It is given by the following expressions :

Ei = ai * (KLOC)^bi PM
Effort = EAF * Ei
Tdev = ci * (Effort)^di

Where EAF (Effort Adjustment Factor) is derived from the 15 cost driver ratings & typically
ranges from 0.9 to 1.4.

Software Project    ai     bi      ci     di
Organic             3.2    1.05    2.5    0.38
Semi-detached       3.0    1.12    2.5    0.35
Embedded            2.8    1.20    2.5    0.32
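
As a rough illustration (the EAF value of 1.10 is an assumed product of cost-driver ratings,
not a value from the notes), the intermediate model can be sketched as:

#include <stdio.h>
#include <math.h>

/* Intermediate COCOMO: Ei = ai*(KLOC)^bi, Effort = EAF * Ei,
   Tdev = ci*(Effort)^di; organic constants, KLOC = 32 as before. */
int main(void)
{
    double ai = 3.2, bi = 1.05, ci = 2.5, di = 0.38;
    double kloc = 32.0;
    double eaf  = 1.10;                       /* assumed cost-driver product */
    double ei     = ai * pow(kloc, bi);       /* nominal effort, ~122 PM  */
    double effort = eaf * ei;                 /* adjusted effort, ~134 PM */
    double tdev   = ci * pow(effort, di);     /* ~16 months               */
    printf("Ei = %.0f PM, Effort = %.0f PM, Tdev = %.0f months\n",
           ei, effort, tdev);
    return 0;
}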

Complete COCOMO

Most large systems are made up of several smaller subsystems. These subsystems may have
widely different characteristics.


For eg, some subsystems may be considered as organic type, some semi-detached & some
embedded.
Even the development complexity of these subsystems may be different.
The Complete COCOMO model considers these differences & estimates the effort & development time as
the sum of the estimates for the individual subsystems.
The cost of each subsystem is estimated separately. This approach reduces the margin of error
in the final estimate.
COCOMO II
It is actually a hierarchy of Estimation models that address the following areas:
1. Application Composition Model :
Used during the early stages of S.E., when prototyping of user interfaces,
consideration of s/w & system Interaction, assessment of performance &
evaluation of technology maturity are paramount.
2. Early design Stage Model :
Used once requirements have been stabilized & basic s/w architecture has been
established.
3. Post-Architecture stage Model :
Used during the construction of the s/w.

This model requires sizing information. 3 different sizing options are available as part of the
model hierarchy:
1. Object Point
2. Function Point
3. LOC
Object Point
An indirect s/w measure that is computed using counts of the number of :
Screens (user interfaces)
Reports
Components
Each object instance is classified into one of 3 complexity levels (simple, medium,
difficult).
Once complexity is determined, the number of screens, reports & components are
weighted according to the following table :

Object Type         Complexity Weight
                    Simple   Medium   Difficult
Screen                 1        2         3
Reports                2        5         8
3GL Components        --       --        10

The object point count is determined by multiplying the original number of object
instances by the weighting factor & summing to obtain a total object point count.
When component-based development or general s/w reuse is to be applied, the % reuse
is estimated & the object point count is adjusted :

NOP = (object points) * [(100 - %reuse)/100]
Where, NOP - New Object Points
PROD = NOP / person-month
Where, PROD - Productivity Rate
Estimated Effort = NOP / PROD
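
A minimal sketch of the object-point arithmetic (the object-point count of 46, the 20% reuse
and PROD = 13 NOP/PM are assumed illustrative figures, not values from the notes):

#include <stdio.h>

/* NOP = object_points * (100 - %reuse)/100; Effort = NOP / PROD */
int main(void)
{
    double object_points = 46.0;               /* assumed total object points */
    double reuse_pct = 20.0;                   /* assumed % reuse             */
    double prod = 13.0;                        /* assumed productivity rate   */
    double nop    = object_points * (100.0 - reuse_pct) / 100.0;
    double effort = nop / prod;                /* person-months */
    printf("NOP = %.1f, Estimated effort = %.1f PM\n", nop, effort);
    return 0;
}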

Analytical Estimation Technique

Make/Buy Decision
S/w engineering managers are faced with a make/buy decision that can be further
complicated by number of acquisition options:
1. s/w may be purchased (or licensed) off the shelf
2. full-experience or partial-experience s/w components may be acquired &
then modified & integrated to meet specific needs
3. s/w may be custom built by an outside contractor to meet the purchaser's
specifications
In the final analysis, the make/buy decision is made based on the following conditions.
1. Will the s/w product be available sooner than internally developed s/w?
2. Will the cost of acquisition + the cost of customization be less than the cost of
developing the s/w internally?
3. Will the cost of outside support be less than the cost of internal support?

Creating a Decision Tree
The steps just described can be represented by using statistical technique such as decision tree.
The Expected Value for Cost, computed along any branch of the decision tree, is :
Expected Cost = Sum over i of [ (path probability)i * (estimated path cost)i ]
where i is the decision tree path.
For eg-
Expected Cost build = 0.30($380K) + 0.70 ($450K) = $429K
Similarly the value for all the options is identified, & the option for which the
cost is minimum is selected.
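
The expected-cost rule is easy to check in code; this sketch reproduces the "build" branch
above:

#include <stdio.h>

/* Expected Cost = Sum over i of (path probability)i * (path cost)i */
int main(void)
{
    double prob[] = {0.30, 0.70};
    double cost[] = {380.0, 450.0};            /* estimated path costs in $K */
    double expected = 0.0;
    for (int i = 0; i < 2; i++)
        expected += prob[i] * cost[i];
    printf("Expected cost (build) = $%.0fK\n", expected);   /* $429K */
    return 0;
}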



Many criteria, not just the cost, must be considered during the decision making process.
Eg - availability, experience of the developer/vendor/contractor, local politics etc. may
affect the ultimate decision to build, reuse, buy or contract.

Outsourcing :
S/w engineering activities are contracted to a third party who does the work at lower
cost and, hopefully, higher quality.
S/w work conducted within a company is reduced to contract management activity.
The decision to outsource can be either strategic or tactical. At the strategic level,
business managers consider whether a significant portion of all s/w work can be
contracted to others. At the tactical level, a project manager determines whether a part
or all of a project can be best accomplished by subcontracting the s/w work.

Staffing Level Estimation
Once the effort required to develop a s/w has been determined, it is also necessary to
determine the staffing requirement for the project.
Putnam studied the problem of what should be a proper staffing pattern for s/w projects. He
extended the work of Norden, who had earlier investigated the staffing pattern of R&D type projects.





Nordens Work
He studied the staffing patterns of several R&D projects.
He found that the staffing pattern can be approximated by the Rayleigh distribution
curve.
Norden represented the Rayleigh curve by the following equation :
E = (K / td^2) * t * e^(-t^2 / (2 * td^2))
Where, E = effort required at time t
K = area under the curve
td = time at which the curve attains its maximum value
E is an indication of the number of engineers at any particular time during the duration
of the project.

Putnams Work

Putnam studied the problem of staffing for S/w projects & found that S/w development has
characteristics very similar to any other R&D projects studied by Norden& the Rayliegh - Norden
curve can be used to relate the number of delivered lines of code to the effort & the time
required to develop the product.

He derived the following expression :
L = Ck * K^(1/3) * td^(4/3)
Where,
K = total effort expended (in PM) in the product development
L = product size in KLOC
td = time of system integration & testing (time required to develop the s/w)



Ck = state of technology constant & reflects constraints that impede the progress of the
programmer.
Ck = 2 corresponds to poor development environment
Ck = 8 corresponds to good s/w development environment
Ck = 11 corresponds to excellent s/w development environment
The exact value of Ck for a specific project can be computed from the historical data.
Only a small no. of engineers are needed at the beginning of the project to carry out planning &
specification task.
As the project progresses & more detailed work is required, the no. of engineers reaches a peak.
After Implementation & testing the no. of project members falls.
The team size should be increased or decreased slowly whenever required to match the
Rayleigh-Norden curve
Constant level of manpower throughout the project duration would lead to wastage of effort &
increase the time & effort required to develop the product
Eg - If we examine the curve, 40% of the area is to the left of td & 60% to the right of td.
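
The shape of the curve can be seen by evaluating Norden's equation month by month; in this
sketch K = 91 PM and td = 14 months are borrowed from the earlier COCOMO example purely for
illustration (compile with -lm):

#include <stdio.h>
#include <math.h>

/* Rayleigh staffing curve: E = (K/td^2) * t * e^(-t^2/(2*td^2)) */
int main(void)
{
    double K = 91.0, td = 14.0;
    for (int t = 2; t <= 28; t += 2) {
        double e = (K / (td * td)) * t * exp(-(t * t) / (2.0 * td * td));
        printf("t = %2d months: E = %.2f engineers\n", t, e);  /* peak near td */
    }
    return 0;
}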









UNIT 5
Software Scheduling & Tracking

Relationship between People & Effort:

Small project - a single person can analyze requirements, perform design, generate code &
conduct tests.
Major projects - a larger number of people are involved in the project development process.
Common myth : "If we fall behind the schedule, we can always add more programmers & catch
up later in the project."
Adding people late in a project often has a disruptive effect on a project causing schedule to slip
even further.
Drawbacks of adding people late in a project
People who are added must learn the system
People who teach them are the people who were doing the work
While teaching no work is done & the project falls further behind.
More people increases the number of communication paths & the complexity of
communication throughout a project (see the sketch below).
Every new communication path requires additional effort & therefore additional time.
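
The growth in communication paths is easy to quantify: among n people there are n*(n-1)/2
possible pairwise paths (a standard result, not stated in the notes). A one-loop sketch:

#include <stdio.h>

/* Pairwise communication paths among n team members: n*(n-1)/2.
   Doubling a team from 4 to 8 raises the paths from 6 to 28. */
int main(void)
{
    for (int n = 2; n <= 10; n += 2)
        printf("n = %2d -> %2d communication paths\n", n, n * (n - 1) / 2);
    return 0;
}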

Effort Distribution

A recommended distribution of effort across the definition & development phases is often
referred to as the 40-20-40 rule.
40% of all effort is allocated to front-end analysis & design.
20% of effort is spent on the coding.
40% of effort is applied to the back-end testing
Effort distribution in different activities:
Planning : 2 - 3 %
Requirement Analysis : 10 - 25 %
Design : 20 - 25 %
Code : 15 - 20 %
Testing : 30 - 40 %

Effect of Schedule Change on Cost




By using Putnam's proposed expression for L, i.e.

L = Ck * K^(1/3) * td^(4/3)

we can write
K = L^3 / (Ck^3 * td^4)
or
K = C / td^4 ........ (since C = L^3 / Ck^3 is a constant for the same product size)

Where,
K = total effort expended (in PM) in the product development
L = product size in KLOC
td = time of system integration & testing (time required to develop the s/w)
Ck = state of technology constant & reflects constraints that impede the progress of the programmer

It follows that, for the same product,
K1 / K2 = (td2)^4 / (td1)^4

When the schedule of a project is compressed, the required effort increases in proportion to
the 4th power of the degree of compression.
For eg, if the estimated development time is 1 yr., then in order to develop the product in 6
months, the total effort required to develop the product increases 16 times.
In this case, most of the extra effort is due to the idle time of engineers waiting for work.
You recruit a large no. of engineers hoping to complete the project early, but it becomes very
difficult to keep those additional engineers occupied with work.
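
The 16-times figure follows directly from K = C/td^4, as this sketch confirms (compile
with -lm):

#include <stdio.h>
#include <math.h>

/* Effort growth under schedule compression: K2/K1 = (td1/td2)^4 */
int main(void)
{
    double td1 = 12.0;                    /* original schedule, months   */
    double td2 = 6.0;                     /* compressed schedule, months */
    printf("Effort multiplier = %.0f times\n", pow(td1 / td2, 4.0));  /* 16 */
    return 0;
}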

Task Set
Task Set is a collection of s/w engineering work tasks, milestones & deliverables that must be
accomplished to complete a particular project.
The task set to be chosen must provide enough discipline to achieve high s/w quality. But at the
same time, it must not burden the project team with unnecessary work.
The task sets are designed to accommodate different types of projects & different degrees of
rigor
Types Of projects:
1. Concept Development Projects : projects that are initiated to explore some new
business concept or application for some new technology.
2. New Application Development Projects : projects that are undertaken as a consequence
of a specific customer request.



3. Application Enhancement Projects : projects that occur when existing s/w undergoes
major modifications to function, performance or interface that are observable by the
end-user.
4. Application Maintenance Projects : projects that correct, adapt or extend existing s/w in
ways that may not be immediately obvious to the end user.
5. Reengineering projects : projects that are undertaken with the intent of rebuilding an
existing system in whole or in part.

Degree of Rigor
The degree of Rigor is a function of many project characteristics.
The task set will grow in size & complexity as the degree of rigor grows.
4 different degrees of rigor can be defined:
1. Casual
All process framework activities are applied, but only minimum task set is
required
Umbrella tasks will be minimized
Documentation requirements will be reduced
All basic principles of s/w engineering are still applicable.
2. Structured
The process framework will be applied for this project.
Framework activities & related tasks appropriate to the project type will be
applied
Umbrella activities necessary to ensure high quality will be applied.
SQA, SCM, documentation & measurement tasks will be conducted in a
streamlined manner.
3. Strict
The full process will be applied for this project with a degree of discipline that
will ensure high quality
All umbrella activities will be applied
4. Quick Reaction
The process framework will be applied for this project, but because of
emergency situations only those tasks essential for maintaining good quality will
be applied
Developing a complete set of documentation, conducting additional reviews will
be accomplished after the application/product is delivered to the customer.
Task Set Selector

Adaptation Criteria:
Adaptation criteria are used to determine the recommended Degree of Rigor with which the s/w
process should be applied on a project. Eleven adaptation criteria are defined for s/w projects:
1. Size of the project
2. Number of potential users



3. Mission criticality
4. Application longevity
5. Stability of requirements
6. Ease of customer / developer communication
7. Maturity of applicable technology
8. Performance constraints
9. Embedded & non-embedded characteristics
10. Project staff
11. Reengineering factors
Each of these adaptation criteria is assigned a grade that ranges between 1 and 5 :
1 = a project in which a small subset of process task are required & documentation
requirements are minimum.
5 = a project in which complete set of process task should be applied & documentation
requirements are substantial.

Adaptation Criteria                          Grade   Weight   Entry Point Multiplier                 Product
                                                              Conc.  NDev.  Enha.  Maint.  Reeng.
Size of the project                          -----    1.20      0      1      1      1      1       -----
Number of potential users                    -----    1.10      0      1      1      1      1       -----
Mission criticality                          -----    1.10      0      1      1      1      1       -----
Application longevity                        -----    0.90      0      1      1      0      0       -----
Stability of requirements                    -----    1.20      0      1      1      1      1       -----
Ease of customer / developer communication   -----    0.90      1      1      1      1      1       -----
Maturity of applicable technology            -----    0.90      1      1      0      0      1       -----
Performance constraints                      -----    0.80      0      1      1      0      1       -----
Embedded & non-embedded characteristics      -----    1.20      1      1      1      0      1       -----
Project staff                                -----    1.00      1      1      1      1      1       -----
Reengineering factors                        -----    1.20      0      0      0      0      1       -----

Computing task set selector value

Following steps are performed:
1. Assign the appropriate grade (1 to 5) to the adaptation criteria based on the
characteristics of the project. These grades should be entered into the above table.
2. The value of each weighting factor assigned to each adaptation criteria ranges from 0.8
to 1.2 & provides the indication of the relative importance of a particular adaptation
criteria to the type of s/w development
3. Multiply the grade by the weighting factor & by the entry point multiplier. The result of
the product
grade * weighting factor * entry point multiplier
is placed in the product column for each adaptation criteria individually.
4. To get the value of task set selector compute the average of all entries in the product
column.
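
The computation can be sketched in C; the grades, weights and entry point multipliers below
reproduce the new-development example table that follows:

#include <stdio.h>

/* Task set selector = average of grade * weight * entry point multiplier
   over the 11 adaptation criteria. */
int main(void)
{
    double grade[]  = {2, 3, 4, 3, 2, 2, 2, 3, 3, 2, 0};
    double weight[] = {1.20, 1.10, 1.10, 0.90, 1.20, 0.90,
                       0.90, 0.80, 1.20, 1.00, 1.20};
    double epm[]    = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0};  /* NDev column */
    double sum = 0.0;
    for (int i = 0; i < 11; i++)
        sum += grade[i] * weight[i] * epm[i];
    printf("Task set selector = %.2f\n", sum / 11.0);     /* ~2.44 */
    return 0;
}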

Adaptation Criteria                          Grade   Weight   Entry Point Multiplier                 Product
                                                              Conc.  NDev.  Enha.  Maint.  Reeng.
Size of the project                            2      1.20    -----    1    -----  -----   -----     2.4
Number of potential users                      3      1.10    -----    1    -----  -----   -----     3.3
Mission criticality                            4      1.10    -----    1    -----  -----   -----     4.4
Application longevity                          3      0.90    -----    1    -----  -----   -----     2.7
Stability of requirements                      2      1.20    -----    1    -----  -----   -----     2.4
Ease of customer / developer communication     2      0.90    -----    1    -----  -----   -----     1.8
Maturity of applicable technology              2      0.90    -----    1    -----  -----   -----     1.8
Performance constraints                        3      0.80    -----    1    -----  -----   -----     2.4
Embedded & non-embedded characteristics        3      1.20    -----    1    -----  -----   -----     3.6
Project staff                                  2      1.00    -----    1    -----  -----   -----     2.0
Reengineering factors                          0      1.20    -----    0    -----  -----   -----     0.0

Task set selector = average of the product column = 26.8 / 11 = 2.4 (approx.)




Interpreting the TSS value & selecting the task set
Once the task set selector is computed, the following guidelines can be used to select the appropriate
task set for a project
Task Set Selector Value        Degree Of Rigor
TSS < 1.2                      Casual
1.0 < TSS < 3.0                Structured
TSS > 2.4                      Strict

Why Are Projects Late?




An unrealistic deadline established by someone outside the software development group;
Changing customer requirements that are not reflected in schedule changes;
An honest underestimate of the amount of effort and/or the number of resources that will be
required to do the job;
Predictable and/or unpredictable risks that were not considered when the project commenced;
Technical difficulties that could not have been foreseen in advance;
Human difficulties that could not have been foreseen in advance;
Miscommunication among project staff that results in delays;
A failure by project management to recognize that the project is falling behind schedule and a
lack of action to correct the problem.

Scheduling:
It is an important project planning activity.
It involves deciding which task to be taken up when.
In order to schedule project activity, the project manager needs to do the following:

1. Identify all the tasks needed to complete the project.
Good knowledge of intricacies of the project & development process helps the
manager to effectively identify the important task of the project.
2. Break down large tasks into small activities.
These small activities are then assigned to different engineers. The WBS is
prepared which helps the manager to break down the task systematically.
3. Determine the dependency among different activities.
It determines the order in which the different activities would be carried out.
If activity A requires the result of another activity B, then A must be scheduled
after completion of B.
Represented in the form of an activity network.
4. Establish the most likely estimate for the time duration necessary to complete the
activities.
5. Allocate resources to activities.
Done using Gantt Charts
6. Plan the starting & ending dates for various activities.
PERT Chart representation is developed which helps in program monitoring &
Control.
7. Determine the critical path. A critical path is the chain of activities that determines the
duration of the project.

Work Breakdown Structure (WBS)
An exhaustive, hierarchical (from general to specific) tree structure of deliverables and tasks
that need to be performed to complete a project.



The purpose of a WBS is to identify terminal elements (the actual items to be done in a project).
Therefore, WBS serves as the basis for much of project planning.
It is used to break the RDD (Requirement Definition & Description) into small units for planning
& budgeting & for controlling cost, resources & efforts.
The parameters to build WBS are:
1. System Structure
If the system is broken into modules then they become the anchor to build
WBS.
2. Functions
If the system is largely function driven, the WBS is built around its application
which is designed to deliver functions.
3. Organizational Units
If the system can be visualized after modeling by structure or functions into
organizational units, then the WBS can be anchored onto organizational units.
4. Life Cycle Phase
In this, the WBS is built for each phase starting from inception till delivery to the
customer.
Guidelines to Build WBS

S/w      Size       RDD & SRS       Complexity        Function    Technology    WBS
System                                                Driven      Parameter
A        Small,     Stable          Simple            Yes         -----         System Structure
         Medium
B        Large      Unstable,       Complex           ----        Yes           Life Cycle
                    Need
                    Prototype
C        Medium     Stable          Not so complex    Yes         Yes           Life Cycle
D        Large      Stable          Complex           Yes         Yes           System Structure
E        Large      Not stable      Complex,          Yes         Yes           Organizational
                    but             Multiple                                    Units
                    controllable    locations,
                                    technology
                                    specific




Purpose and Benefits of a WBS
It subdivides the program scope into smaller, manageable work efforts
An 'Owner' is established for each deliverable (further expanded in the responsibility matrix)
Responsibility for each task can be established
Estimated costs/budgets can be established for each element
It provides a framework to identify programs separately from organizations, funding sources,
accounting systems, etc.

WBS Structure





WBS for MIS s/w










Task Network / Activity Network
A task network, also called an activity network, is a graphic representation of the task flow
for a project.
It is a useful mechanism for depicting inter-task dependencies & determining the critical path.
An activity network shows the different activities making up a project, their estimated durations
& interdependencies.
Each activity is represented by a rectangular node & the duration of the activity is shown
alongside each task.

Activity Network representation of MIS s/w

Critical Path Method (CPM)

From the activity network representation, the following analysis can be made:
The minimum time (MT) to complete the project is the maximum of all paths from start to finish.



The earliest start (ES) time of a task is the maximum of all paths from the start to this task.
The latest start (LS) time is the difference between MT & maximum of all paths from the task to
the finish.
The earliest finish (EF) time of the task is the sum of the earliest start time of the task & the
duration of the task.
The latest finish (LF) time of a task can be obtained by subtracting maximum of all paths from
the task to finish from MT.
The Slack time (ST) is LS - ES or LF - EF. It is the total time for which a task may be delayed before
it would affect the finish time of the project.
A critical task is one with 0 slack time.
A path from the start node to the finish node containing only critical tasks is called a critical path.

The above parameters for different tasks for the MIS problem are as follows:

Task ES EF LS LF ST
Specification Part 0 15 0 15 0
Design Database Part 15 60 15 60 0
Design GUI Part 15 45 90 120 75
Code Database Part 60 165 60 165 0
Code GUI Part 45 90 120 165 75
Integrate & Test 165 285 165 285 0
Write User Manual 15 75 225 285 210
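
The ES/EF/LS/LF/slack values in the table can be computed mechanically with a forward and a
backward pass over the activity network. A minimal C sketch (tasks are assumed listed in
topological order; the durations in days are derived from the table above):

#include <stdio.h>

#define N 7

int main(void)
{
    const char *name[N] = {"Specification", "Design Database", "Design GUI",
                           "Code Database", "Code GUI", "Integrate & Test",
                           "Write User Manual"};
    int dur[N] = {15, 45, 30, 105, 45, 120, 60};
    int pred[N][N] = {0};          /* pred[i][j] = 1: j must finish before i */
    int es[N], ef[N], ls[N], lf[N], mt = 0;

    pred[1][0] = pred[2][0] = pred[6][0] = 1;
    pred[3][1] = 1;
    pred[4][2] = 1;
    pred[5][3] = pred[5][4] = 1;

    for (int i = 0; i < N; i++) {              /* forward pass: ES, EF */
        es[i] = 0;
        for (int j = 0; j < i; j++)
            if (pred[i][j] && ef[j] > es[i]) es[i] = ef[j];
        ef[i] = es[i] + dur[i];
        if (ef[i] > mt) mt = ef[i];            /* MT = minimum project time */
    }
    for (int i = N - 1; i >= 0; i--) {         /* backward pass: LF, LS */
        lf[i] = mt;
        for (int j = i + 1; j < N; j++)
            if (pred[j][i] && ls[j] < lf[i]) lf[i] = ls[j];
        ls[i] = lf[i] - dur[i];
    }
    printf("%-18s  ES   EF   LS   LF   ST\n", "Task");
    for (int i = 0; i < N; i++)
        printf("%-18s %3d  %3d  %3d  %3d  %3d%s\n", name[i], es[i], ef[i],
               ls[i], lf[i], ls[i] - es[i], ls[i] == es[i] ? "  (critical)" : "");
    return 0;
}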

Gantt Charts

Used to allocate resources to activities.
The resources allocated to activities include staff, hardware & software.
Useful for resource planning.
A special type of bar chart where each bar represents an activity.
The bars are drawn along a timeline. The length of each bar is proportional to the duration of
the time planned for the corresponding activity.
In the Gantt charts used for s/w project management, each bar consists of a white part & a
shaded part.
The shaded part of a bar shows the length of time each task is estimated to take.
The white part shows the slack time, i.e. the latest time by which a task must be finished.

PERT Charts
A PERT (Project Evaluation & Review Technique) Chart represents the statistical variations in the
project estimates assuming a normal distribution.
In this, instead of making a single estimate for each task, pessimistic, likely & optimistic
estimates are also made.
The boxes of PERT charts are usually annotated with the above 3 estimates for every task.
It is useful for monitoring the timely progress of activities.
Also, it is easier to identify parallel activities in a project using a PERT chart.
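
The three estimates are conventionally combined into a single expected duration using the
standard PERT weighting te = (o + 4m + p) / 6 with standard deviation (p - o) / 6; this formula
is not given in the notes above, and the values below are made up for illustration:

#include <stdio.h>

/* Standard PERT expected time & standard deviation for one task. */
int main(void)
{
    double o = 10.0, m = 14.0, p = 24.0;     /* optimistic, likely, pessimistic */
    double te = (o + 4.0 * m + p) / 6.0;     /* expected duration: 15 days */
    double sd = (p - o) / 6.0;               /* std. deviation: ~2.33 days */
    printf("te = %.2f days, sd = %.2f days\n", te, sd);
    return 0;
}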




Organization & Team Structure:
Every s/w development organization handles several projects at any time.
S/w organizations assign different teams of engineers to handle different s/w projects.
Thus, there are 2 important issues:
How is the organization as a whole structured?
How are the individual project teams structured?

Organization Structure
It is the formal system of task and reporting relationships that controls, coordinates, and
motivates employees so that they cooperate to achieve an organization's goals.
Your task as a manager is to create an organizational structure and culture that:
Encourages employees to work hard and to develop supportive work attitudes
Allows people and groups to cooperate and work together effectively.

Functional Organization
The development staff are divided based on the functional group to which they belong.
Different teams of programmers perform different phases of a project.
Eg : one team might do the requirement specification, another might do the design & so on.
The partially completed project passes from one team to another as the product evolves.
Requires considerable communication among the different teams because the work of one
team must be clearly understood by the subsequent teams working on the project.
This requires good quality documentation to be produced after every activity.

Advantages :
Ease of staffing




Production of good quality documents : more attention is paid to proper documentation
Job specialization : it allows engineers to become specialists in their particular role
Efficient handling of the problems associated with manpower turnover

Project Organization
The development staff are divided based on the project for which they work.
A set of engineers is assigned to the project at the start of the project & they remain with the
project till its completion.
The same team carries out all the life cycle activities.
The communication requirement is less as compared to the Functional Organization.
It forces the manager to take on an almost constant number of engineers for the entire duration
of the project.
This results in engineers idling in the initial phase of s/w development & then coming under
tremendous pressure in the later phases of development.
Advantage:
Provides job rotation to the team members

Team Structure
Team structure addresses the issue of organization of the individual project teams.
There are 3 formal team structures :
1. Chief Programmer Team
2. Democratic Team
3. Mixed Team

Chief Programmer Team
A senior engineer provides the technical leadership & is designated as the chief programmer.
The chief programmer partitions the task into smaller activities & assigns them to the team
members.
He also verifies & integrates the products developed by different team members.
It is susceptible to single-point failure since too much responsibility & authority is assigned to
the chief programmer.



Used for simple & small projects.

Democratic Team

At different times, different members of the group provide technical leadership.
It leads to higher morale & job satisfaction.
It results in less manpower turnover.
Appropriate for less understood problems, since a group of engineers can invent better
solutions than a single individual.
Suitable for projects requiring less than five or six engineers & for research oriented projects.
Encourages egoless programming as programmers can share & review one another's work.
Disadvantage: The team members may waste a lot of time arguing about trivial points due to
lack of any authority in the team to resolve such debates.

Mixed Control Team

It is the combination of both the democratic organization & the chief programmer organization.
It incorporates both hierarchical reporting & democratic set-up.
Suitable for large team sizes.
Used to handle large & complex programs.
Used in many s/w development industries.



UNIT 6
Design phase activities

Develop System Flowchart :

A system flowchart is a diagram that describes the overall flow of control between computer
programs in a system.
It is observed that programs and subsystems have complex interdependencies including flow of
data, flow of control, and interaction with data stores.
It is a diagrammatic representation that illustrates the sequence of operations to be performed to
get the solution of a problem.

It effectively indicates where input enters the system, how it is processed and controlled, and how it
leaves the system in the form of the desired output. Here, emphasis is placed on input documents
and output reports.
Only limited details are displayed, about the process that transforms the input to output. For
convenience of design, it is a good idea to segregate the inputs, processes, outputs, and files
involved in the system into a tabular form before proceeding with the flowchart.
System flowcharts are generally drawn in the early stages of formulating computer solutions. They
facilitate communication between programmers and business people.

System flowcharts play a vital role in the programming of a problem and are quite helpful in
understanding the logic of complicated and lengthy problems. Once the flowchart is drawn, it
becomes easy to write the program in any high level language.
Often we see how flowcharts are helpful in explaining the program to others. Hence, it is correct to
say that a flowchart is a must for the better documentation of a complex program.







Example Of System Flowchart :



Advantages Of System Flowchart :
1. Communication : Flowcharts are a better way of communicating the logic of a system to all
concerned.

2. Effective analysis : With the help of a flowchart, a problem can be analysed in a more effective way.

3. Proper documentation : Program flowcharts serve as a good program documentation, which is
needed for various purposes.




4. Efficient Coding : The flowcharts act as a guide or blueprint during the systems analysis and program
development phase.

5. Proper Debugging : The flowchart helps in the debugging process.

6. Efficient Program Maintenance : The maintenance of an operating program becomes easy with the
help of its flowchart. It helps the programmer to concentrate effort more efficiently on the relevant part.

Limitations Of System Flowchart :
1. Complex logic : Sometimes, the program logic is quite complicated. In that case, flowchart becomes
complex and clumsy.

2. Alterations and Modifications : If alterations are required the flowchart may require re-drawing
completely.

3. Reproduction : As the flowchart symbols cannot be typed, reproduction of flowchart becomes a
problem.

4. Loss Of Details : The essentials of what is done can easily be lost in the technical details of how it is
done.

Structure Chart :
A structure chart is a hierarchical diagram showing the relationships between the modules of a
computer program.

It shows which modules within a system interact and also graphically depicts the data that are
communicated between various modules.

Structure charts are developed prior to the writing of program code.

They identify the data passed between individual modules that interact with one another.

Structure Chart Notation :







Symbol for a module (one being designed).
Symbol for a relationship
Symbol for control-driven data
Symbol for information-driven data
Symbol for a pre-defined module (eg. a
library)



Fan-in (relationship to more than one parent) :



Fan-out (relationship to more than one child) :




Cross-over (possibly due to fan-in), for which two valid solutions are as follows :
1.


2.





Continuity (possibly due to structure size) :




(Figure : a module continued across pages is marked 'Module X - see p. n' on one page and
'Module X - from p. 1' on the continuation page.)





Iterative invocation (when one action is made up of more than one repeated smaller ones) :
In the diagram notation, a looping arrow shows module X invoking module Y : with no
annotation, X invokes Y an undefined number of times; annotated 'exactly n', exactly n
times; annotated with a bound n, a maximum of n times; annotated 'n m', any number of
times from n to m.

Code inclusion (considered as physical integration with logical separation) :
Module Y is drawn inside module X to show that module Y is actually code in module X.

Transaction centre (considered as a way of selection of one from many possible functions at any one
specific moment) :
Eg - module A invokes modules B, C and D; module A's function can be the function of
either one of modules B, C or D.





Structure Chart Elements :
Module : Denotes a logical piece of the program.
Library : Denotes a logical piece of the program that is repeated (reused) in a structure chart.
Loop : Indicates that a module's invocation is repeated.
Conditional Line : Subordinate modules are invoked by control modules based on some conditions.
Control Couple : Communicates that a message or system flag is being passed from one module to
another.
Data Couple : Communicates that data is being passed from one module to another.
Off Page : Identifies when a part of the diagram is continued on another page of the structure chart.
On Page : Identifies when a part of the diagram is continued somewhere else on the same page of
the structure chart.
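
The couples above can be made concrete by rendering a tiny structure chart as calling code. This is
an invented sketch, not from the text: the boss module invokes subordinate modules, passing data
couples down and receiving a control couple (a flag) back.

    def validate_hours(raw_hours):              # subordinate module
        # The boolean returned here plays the role of a control couple (flag).
        return raw_hours.strip().isdigit()

    def compute_pay(hours, rate):               # subordinate module; hours and rate are data couples
        return hours * rate

    def process_payroll(raw_hours, rate):       # boss module
        if not validate_hours(raw_hours):       # conditional invocation based on the flag
            raise ValueError("invalid hours")
        return compute_pay(int(raw_hours), rate)

    print(process_payroll("40", 12.5))          # 500.0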


(Figure: Module A with subordinates Module B, Module C and Module D. In this example, module A's
function can be carried out by any one of modules B, C or D.)



Example Of Structure Chart :
Payroll Processing



Two approaches exist for developing a structure chart :

(a) Transaction Analysis :

It makes use of a system flowchart to develop a structure chart.
Here, the system flowchart is examined to identify each major program.
These are usually the transactions supported by the system.
Thus, transaction analysis can be looked at as a process of identifying each separate
transaction that must be supported by the system, and then constructing a branch for each one
in the structure chart.
While every transaction will be a direct sub-module for the boss module, each transaction will
be the boss module for its sub-tree of processes for that transaction.
Each sub-tree may be developed using transform analysis.

(b) Transform Analysis :
It makes use of DFD fragments to develop the sub-module tree structures in a structure chart.

It is based on the idea that input is "transformed" into output by the system.
Three important concepts are involved:




Afferent data flow : It is the incoming data flow in a sequential set of processes.
Efferent data flow : It is the outgoing data flow from a sequential set of processes.
Central transform : It transforms afferent data flow into efferent data flow.

The following steps are followed to develop a structure chart from a DFD fragment.

1. Identify input, processes, output from the DFD fragment.
2. Reorganize DFD fragment to arrange input (afferent data flow) to the left, process
(central transform) in the center, and output (efferent data flow) to the right.
3. From the first two steps, identify the boss module (calling module) and branch the sub-modules
out of the boss module (this is the boss module of each transaction and not necessarily of the
entire system).
4. Provide appropriate data flow lines, and show input and output data using data couples.
5. Display condition clauses using control couples.
HIPO (Hierarchy Input Process Output) Chart :

1. HIPO is a commonly used method for developing systems software.
2. It is an acronym for Hierarchical Input-Process-Output, developed by IBM for its large, complex
operating systems.
3. The greatest strength of HIPO is the documentation of a system.

Purpose Of HIPO Chart :
1. Assumption on which HIPO is based : It is easy to lose track of the intended function of a system or
component in a large system.
2. User's view : Single functions can often extend across several modules.
3. Analyst's concern : Understanding, describing, and documenting the modules and their interaction in
a way that provides sufficient detail but that does not lose sight of the larger picture.
4. HIPO diagrams are graphic, rather than narrative, descriptions of the system. They assist the analyst
in answering three guiding questions:
i. What does the system or module do? (Asked when designing the system).
ii. How does it do it? (Asked when reviewing the code for testing or maintenance)
iii. What are the inputs and outputs? (Asked when reviewing the code for testing or maintenance)
5. A HIPO description for a system consists of the visual table of contents & the functional diagrams.

Visual Table Of Contents :




1. The visual table of contents (VTOC) shows the relation between each of the documents making up a
HIPO package.
2. It consists of a Hierarchy chart that identifies the modules in a system by number and in relation to
each other and gives a brief description of each module.
3. The numbers in the contents section correspond to those in the organization section.
4. The modules are shown in increasing detail. Depending on the complexity of the system, three to five
levels of modules are typical.

Functional Diagrams :

1. For each box defined in the VTOC a diagram is drawn.
2. Each diagram shows input and output (left to right or top to bottom), major processes, movement
of data, and control points.
3. Traditional flowchart symbols represent media, such as magnetic tape, magnetic disk and printed
output.
4. A solid arrow shows control paths, and an open arrow identifies data flow.
5. Some functional diagrams contain other intermediate diagrams, but they also show external data, as
well as internally developed data and the step in the procedure where the data are used.
6. A data dictionary description can be attached to further explain the data elements used in a process.
7. HIPO diagrams are effective for documenting a system.
8. They aid designers and force them to think about how specifications will be met and where activities
and components must be linked together.
Disadvantages :
1. They rely on a set of specialized symbols that require explanation, an extra concern when compared
to the simplicity of, for example, a data flow diagram.
2. HIPO diagrams are not as easy to use for communication purpose as many people would like.
3. They do not guarantee error-free systems.




Example Of HIPO Chart :



Warnier Orr Diagrams :

1. Warnier/Orr diagrams are also known as logical construction of programs / logical construction of
systems.
2. A Warnier/Orr diagram is a style of diagram which is extremely useful for describing complex
processes (e.g. computer programs, business processes, instructions) and objects (e.g. data
structures, documents, parts explosions).
3. The technique was initially developed in France by Jean-Dominique Warnier and in the United
States by Kenneth Orr.
4. This method aids the design of program structures by identifying the output & processing results &
then working backwards to determine the steps & combinations of input needed to produce them.
5. The simple graphic methods used in Warnier/Orr diagrams make the levels in the system evident
and the movement of the data between them vivid.

Basic Elements :
Bracket : A bracket encloses a level of decomposition in a diagram. It reveals what something
"consists of" at the next level of detail.

Sequence : The sequence of events is defined by the top-to-bottom order in a diagram. That is, an
event occurs after everything above it in a diagram, but before anything below it.

OR : You represent choice in a diagram by placing an "OR" operator between the items of a choice.

AND : You represent concurrency in a diagram by placing an "AND" operator between the
concurrent actions.

Repetition : To show that an action repeats (loops), you simply put the number of repetitions of the
action in parentheses below the action.

Using Warnier/Orr Diagrams :
1. The ability to show the relation between processes and steps in a process is not unique to
Warnier/Orr diagrams, nor is the use of iteration, alternation, or treatment of individual cases;
however, the approach used to develop systems definitions with Warnier/Orr diagrams is different
and fits well with those used in logical system design.

2. To develop a Warnier/Orr diagram, the analyst works backwards, starting with systems output and
using an output-oriented analysis.

3. On paper, the development moves from left to right. First the intended output or results of the
processing are defined. At the next level, shown by inclusion with a bracket, the steps needed to
produce the output are defined.

4. Each step in turn is further defined. Additional brackets group the processes required to produce the
result on the next level.


5. A completed Warnier/Orr diagram includes both process groupings & data requirements.
Data elements are listed for each process or process component.
These data elements are the ones needed to determine which alternative or case should be
handled by the system & to carry out the process.
The analyst must determine where each data element originates, how it is used, and how
individual elements are combined.

6. When the definition is completed, a data structure for each process is documented. It, in turn, is
used by the programmers, who work from the diagrams to code the software.

Example Of Warnier/Orr Diagram :
The diagram below illustrates the use of these constructs to describe a simple process.

You could read the above diagram like this :
"Welcoming a guest to your home (from 1 to many times) consists of greeting the guest and
taking the guest's coat at the same time, then showing the guest in. Greeting a guest consists of
saying "Good morning" if it's morning, or saying "Good afternoon" if it's afternoon, or saying
"Good evening" if it's evening. Taking the guest's coat consists of helping the guest remove
their coat, then hanging the coat up."
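
Rendered as (sequential) code, the same constructs map directly onto control flow. A rough Python
sketch follows; the function names are invented, and the "AND" concurrency is only approximated by
running the two actions one after the other:

    import datetime

    def greet_guest():
        hour = datetime.datetime.now().hour     # choice (OR): exactly one greeting is used
        if hour < 12:
            print("Good morning")
        elif hour < 18:
            print("Good afternoon")
        else:
            print("Good evening")

    def take_coat():
        print("help the guest remove their coat")   # sequence: top-to-bottom order
        print("hang the coat up")

    def welcome_guests(guests):
        for guest in guests:                    # repetition: from 1 to many times
            greet_guest()                       # "AND" with take_coat(), approximated sequentially
            take_coat()
            print("show", guest, "in")

    welcome_guests(["first guest", "second guest"])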

Advantages of Warnier/ Orr diagrams :
1. They are simple in appearance and easy to understand. Yet they are powerful design tools.

2. They have the advantage of showing groupings of processes and the data that must be passed from
level to level.

3. The sequence of working backwards ensures that the system will be result oriented.

4. This method is useful for both data and process definition. It can be used for each independently, or
both can be combined on the same diagram.

Designing Databases :

Database :
It is an integrated collection of stored data that is centrally managed and controlled.
It consists of two related stores of information :
(1) Physical data store and (2) The schema.
Physical data store is the storage area used by a DBMS to store the raw bits and bytes of a database.
Schema is the description of the structure, content, and access controls of a physical data store or
database.
It contains additional information about the data stored in the physical data store:
(a) Access and content controls (authorization, allowable values)
(b) Relationships among data elements (pointer indicating customer of a particular order)
(c) Details of physical data store organization (types, lengths, indexing, sorting)

DBMS (Database Management System) :
It is system software that manages and controls access to a database.
It has four key components
(a) API (b) query interface (c) administrative interface (d) data access programs/subroutines.
Working of a DBMS



(a) Application programs, users, administrators tell the DBMS what data they need (for
reading/writing) using names defined in the schema.
(b) DBMS accesses the schema to verify that the requested data exists, and that the
requesting user has appropriate access privileges.
(c) If request is valid, then the DBMS extracts information about the physical organization of the
requested data from the schema and uses that info to access the physical data store on behalf
of the requesting program or user.
A DBMS provides the following data access and management capabilities:
(a) Allow simultaneous access by many users/application programs.
(b) Allow access to data without writing application programs, i.e. through queries.
(c) Managing all data of an information system as an integrated whole, through uniform, consistent
access & content controls. (A toy sketch of the request-validation flow described above follows.)
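
Here is that toy Python sketch. The schema contents and the read_table function are invented for
illustration and are not any real DBMS API.

    schema = {"BOOK": {"columns": ["TITLE", "ISBN"],
                       "readers": {"librarian", "member"}}}
    physical_store = {"BOOK": [("Database Systems", "ISBN-0001")]}   # raw stored rows

    def read_table(table, user):
        meta = schema.get(table)
        if meta is None:                       # the requested data must exist in the schema
            raise KeyError("no such table: " + table)
        if user not in meta["readers"]:        # the user must hold the access privilege
            raise PermissionError(user + " may not read " + table)
        return physical_store[table]           # access the physical store on the user's behalf

    print(read_table("BOOK", "member"))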
Entity :
An Entity is a real-world object distinguishable from other objects. An entity is described (in DB)
using a set of attributes.
Examples : a book, an item, a student, a purchase order.

Entity Set
An Entity Set is a collection of similar entities.
E.g., All employees, Set of all books in a library.

Attribute :
An entity has a set of attributes.
An attribute defines a property of an entity.
It is given a name.
An attribute has a value for each entity.
The value may change over time.
The same set of attributes is defined for the entities in an entity set.

Example :
Entity set BOOK has the following attributes
TITLE, ISBN, ACC-NO, AUTHOR, PUBLISHER, YEAR, PRICE

A particular book has a value for each of the above attributes.
An attribute may be multi-valued, i.e., it may have more than one value for a given entity; e.g., a book
may have many authors.

An attribute which uniquely identifies the entities of a set is called the primary key attribute of that
entity set.
Composite attribute : an attribute composed of several component attributes, e.g. date, address, etc.
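
As a small illustration, the BOOK entity set can be written as a Python dataclass. The field names
follow the attributes listed above; AUTHOR is held as a list to show a multi-valued attribute, and all
sample values are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Book:
        title: str
        isbn: str                                    # a candidate primary key attribute
        acc_no: str
        authors: list = field(default_factory=list)  # multi-valued attribute
        publisher: str = ""
        year: int = 0
        price: float = 0.0

    b = Book("An Introduction to Database Systems", "ISBN-0001", "ACC-001",
             ["C.J. Date"], "Pearson", 2003, 450.0)
    print(b.isbn)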

Relationships :
It represents association among entities
E.g. : (1) A particular book is a text for a particular course.
The book Database Systems by C.J. Date is the text for the course identified by the code CS644.
(2) Student GANESH has enrolled for course CS644.

Relationship set :
It is a set of relationships of same type.
Words relationship and relationship set often used interchangeably
Relationship sets may be defined over two or more entity sets :
Binary relationship : between two entities sets.
E.g. : Binary relationship set STUDY between STUDENT and COURSE.
Ternary relationship : among three entity sets.
E.g. : relationship STUDY could be ternary among STUDENT, COURSE and TEACHER.

A relationship may have attributes.
E.g. : Attribute GRADE and SEMESTER for STUDY.




Normalization :

Normalization is the process of taking data from a problem and reducing it to a set of relations
while ensuring data integrity and eliminating data redundancy
Data integrity - all of the data in the database are consistent, and satisfy all integrity constraints.
Data redundancy - if data in the database can be found in two different locations (direct
redundancy) or if data can be calculated from other data items (indirect redundancy), then the data
is said to contain redundancy.

Example Of Normalization :
The following data is used to illustrate the process of normalization. A company obtains parts from a number of
suppliers. Each supplier is located in one city. A city can have more than one supplier located there and
each city has a status code associated with it. Each supplier may provide many parts. The company
creates a simple relational table to store this information that can be expressed in relational notation as
:
FIRST (s#, status, city, p#, qty) where
s# - supplier identification number (this is the primary key)
status - status code assigned to city
city - name of city where supplier is located
p# - part number of part supplied
qty - quantity of parts supplied to date
In order to uniquely associate quantity supplied (qty) with part (p#) and supplier (s#), a composite
primary key composed of s# and p# is used.
First Normal Form :
A relational table, by definition, is in first normal form. All values of the columns are atomic. That is, they
contain no repeating values. Figure 1 shows the table FIRST in 1NF.



Figure 1: Table in 1NF

Although the table FIRST is in 1NF it contains redundant data. For example, information about the
supplier's location and the location's status have to be repeated for every part supplied. Redundancy
causes what are called update anomalies. Update anomalies are problems that arise when information
is inserted, deleted, or updated. For example, the following anomalies could occur in FIRST:

INSERT. The fact that a certain supplier (s5) is located in a particular city (Athens) cannot be
added until that supplier supplies a part.
DELETE. If a row is deleted, then not only is the information about quantity and part lost but also
information about the supplier.
UPDATE. If supplier s1 moved from London to New York, then six rows would have to be
updated with this new information.

Second Normal Form
The definition of second normal form states that only tables with composite primary keys can be in
1NF but not in 2NF.
A relational table is in second normal form 2NF if it is in 1NF and every non-key column is fully
dependent upon the primary key.
That is, every non-key column must be dependent upon the entire primary key. FIRST is in 1NF but not in
2NF because status and city are functionally dependent only upon the column s# of the composite
key (s#, p#). This can be illustrated by listing the functional dependencies in the table:



s# -> city, status
city -> status
(s#, p#) -> qty
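
These dependencies can be checked mechanically against sample rows. The helper below is invented
for illustration, and the sample rows use assumed values in the spirit of Figure 1.

    def fd_holds(rows, lhs, rhs):
        # X -> Y holds if no two rows agree on the X columns but differ on Y.
        seen = {}
        for row in rows:
            key = tuple(row[c] for c in lhs)
            val = tuple(row[c] for c in rhs)
            if seen.setdefault(key, val) != val:
                return False
        return True

    first = [
        {"s#": "s1", "status": 20, "city": "London", "p#": "p1", "qty": 300},
        {"s#": "s1", "status": 20, "city": "London", "p#": "p2", "qty": 200},
    ]
    print(fd_holds(first, ["s#"], ["city", "status"]))   # True on this sample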

The process for transforming a 1NF table to 2NF is:

1. Identify any determinants other than the composite key, and the columns they determine.
2. Create and name a new table for each determinant and the unique columns it determines.
3. Move the determined columns from the original table to the new table. The determinant
becomes the primary key of the new table.
4. Delete the columns you just moved from the original table except for the determinant, which will
serve as a foreign key.
5. The original table may be renamed to maintain semantic meaning.
To transform FIRST into 2NF we move the columns s#, status, and city to a new table called SUPPLIER.
The column s# becomes the primary key of this new table, and the remaining columns (s#, p#, qty)
form the table PARTS. The results are shown below in Figure 2.

Figure 2: Tables in 2NF


Tables in 2NF but not in 3NF still contain modification anomalies. In the example of SUPPLIER, they are:



INSERT. The fact that a particular city has a certain status (Rome has a status of 50) cannot be inserted
until there is a supplier in the city.
DELETE. Deleting any row in SUPPLIER destroys the status information about the city as well as the
association between supplier and city.
Third Normal Form
The third normal form requires that all columns in a relational table are dependent only upon the
primary key. A more formal definition is: A relational table is in third normal form (3NF) if it is
already in 2NF and every non-key column is non-transitively dependent upon its primary key. In
other words, all nonkey attributes are functionally dependent only upon the primary key.
Table PARTS is already in 3NF. The non-key column, qty, is fully dependent upon the primary key (s#,
p#). SUPPLIER is in 2NF but not in 3NF because it contains a transitive dependency. A transitive
dependency occurs when a non-key column, which is itself determined by the primary key, is the
determinant of other columns. The concept of a transitive dependency can be illustrated by showing the
functional dependencies in SUPPLIER:
functional dependencies in SUPPLIER:
SUPPLIER.s# -> SUPPLIER.status
SUPPLIER.s# -> SUPPLIER.city
SUPPLIER.city -> SUPPLIER.status

Note that SUPPLIER.status is determined both by the primary key s# and the non-key column city. The
process of transforming a table into 3NF is:

1. Identify any determinants, other than the primary key, and the columns they determine.
2. Create and name a new table for each determinant and the unique columns it determines.
3. Move the determined columns from the original table to the new table. The determinant
becomes the primary key of the new table.
4. Delete the columns you just moved from the original table except for the determinant, which will
serve as a foreign key.
5. The original table may be renamed to maintain semantic meaning.
To transform SUPPLIER into 3NF, we create a new table called CITY_STATUS and move the columns city
and status into it. Status is deleted from the original table, city is left behind to serve as a foreign key to



CITY_STATUS, and the original table is renamed to SUPPLIER_CITY to reflect its semantic meaning. The
results are shown in Figure 3 below.
Figure 3: Tables in 3NF

Putting the original table into 3NF has created three tables. These can be represented in
"pseudo-SQL" as:
PARTS (s#, p#, qty)
Primary Key (s#, p#)
Foreign Key (s#) references SUPPLIER_CITY.s#
SUPPLIER_CITY (s#, city)
Primary Key (s#)
Foreign Key (city) references CITY_STATUS.city
CITY_STATUS (city, status)
Primary Key (city)
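
One way to see that the decomposition is lossless is to rebuild the FIRST rows by joining the three
tables. The sketch below uses plain Python dicts with assumed sample values (chosen in the spirit
of the figures, not copied from them).

    parts = [("s1", "p1", 300), ("s1", "p2", 200), ("s2", "p1", 300)]  # PARTS (s#, p#, qty)
    supplier_city = {"s1": "London", "s2": "Paris"}                    # SUPPLIER_CITY (s#, city)
    city_status = {"London": 20, "Paris": 10}                          # CITY_STATUS (city, status)

    # Joining the three tables reconstructs the original FIRST rows, showing that
    # the decomposition removed redundancy without losing information.
    first = [(s, city_status[supplier_city[s]], supplier_city[s], p, qty)
             for (s, p, qty) in parts]
    print(first)   # [('s1', 20, 'London', 'p1', 300), ...]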
Advantages of Third Normal Form :
The advantage of having relational tables in 3NF is that it eliminates redundant data which in turn saves
space and reduces manipulation anomalies.
For example, the improvements to our sample database are :
INSERT. A fact about the status of a city (Rome has a status of 50) can be added even though there is
no supplier in that city. Likewise, facts about new suppliers can be added even though they have not
yet supplied parts.
DELETE. Information about parts supplied can be deleted without destroying information about a
supplier or a city.
UPDATE. Changing the location of a supplier or the status of a city requires modifying only one row.



UNIT 7
Software Quality

Software Quality
Quality Definition
The totality of features & characteristics of a product or service that bear on its ability to satisfy
stated or implied needs.
Quality of the s/w is an attribute which measures the goodness of the product.
If the product quality is good, then customer expectations & the requirements of performance &
cost are said to have been met properly.
Quality of conformance is the degree to which the designed standards are attained.
A higher degree of conformance increases the probability of delivering a product of desired
standards.

Quality factors:-
1. Portability
A s/w product is said to be portable if it can easily be made to work in different
operating system environments, on different machines, with other s/w
products, etc.
2. Usability
A s/w product has good usability, if different categories of users can easily
invoke the functions of the product.
3. Reusability
A s/w product has a good reusability, if different modules of the product can
easily be reused to develop new products.
4. Correctness
A s/w product is correct, if different requirements as specified in the SRS
document have been correctly implemented.
5. Maintainability
A s/w product is maintainable, if errors can be easily corrected as & when they
show up, new functions can be easily added to the product, & the functionalities
of the product can be easily modified, etc.

Software Quality Management System:

A quality management system (often referred to as quality system) is the principle methodology
used by organizations to ensure that the products they develop have the desired quality.
A quality system consist of :
1. Managerial structure & individual responsibilities



A quality system is actually the responsibility of the organization as a whole.
Many organizations have separate quality department to perform several
quality system activities.
Quality system of the organization should have the support of the top
management.
2. Quality system activities :
Auditing of the project
Review of the quality system.
Development of standards, procedures, guidelines, etc.
Production of reports for top management summarizing the effectiveness of
quality system in the organization.
A good quality system must be well documented.
Without proper documentation, the application of quality controls & procedures becomes ad hoc,
resulting in large variations in the quality of the products delivered.
International standards such as ISO 9000 provide guidance on how to organize a quality
system.

Quality Control
To keep quality under control & to meet the desired standards, a series of steps should be
taken. These steps are called quality control steps.
It includes measuring & monitoring activities like inspection, tests, reviews & control
mechanisms to take corrective actions if measured quality attribute has violated the standards.
The corrective actions may include correcting the inputs & the process that works on these
inputs.
Quality control activities may be fully automated, entirely manual, or a combination of
automated tools & human interaction.

Software Quality Assurance:

Quality Assurance (QA) is the process that assures the delivery of a product of the desired quality
to the customer.
SQA is the process of assuring people that every effort has been made to ensure that software
products have the desired level of reliability, maintainability, & usability.
SQA is an umbrella activity that is applied throughout the s/w process.
SQA is ensured through a quality management system which works in integration with s/w
development, which comprises project, process & product management system.
SQA is composed of a variety of tasks associated with 2 different constituencies :
1. S/w Engineers : performs technical work
2. SQA Group : has a responsibility for QA planning, oversight, record keeping, analysis &
reporting.
S/w engineers address quality (perform QA & QC activities) by applying solid technical methods
& measures, conducting formal technical reviews & performing well-planned s/w testing.
The SQA group assists the s/w team in achieving a high quality s/w product.



The SQA group performs several activities which includes:
1. Quality assurance planning
The plan is developed during the project planning & is reviewed by all interested
parties.
The plan identifies :
Evaluations to be performed
Audits & reviews to be performed
Standards that are applicable to the project
Procedures for error reporting & tracking
Documents to be produced by SQA group
Amount of feedback provided to the s/w project team.

2. Participates in the development of the project's s/w process description
The s/w team selects a process for the work to be performed.
The SQA group reviews the process description for compliance with
organizational policies, internal s/w standards etc.
3. Reviews S/w engineering activities to verify compliance with the defined s/w process.
The SQA group identifies, documents & tracks deviation from the process &
verifies that corrections have been made.
4. Audits designated s/w work products to verify compliance with those defined as part of
the s/w process
The SQA group reviews selected work products; identifies, documents & tracks
deviations; verifies that corrections have been made; & periodically reports the
result of its work to the project manager.
5. Ensure that deviation in a s/w work & work products are documented & handled
according to a documented procedure.
6. Records any noncompliance & reports to senior management.
Noncompliance items are tracked until they are resolved.
SQA has a variety of tools to implement the quality policy. They are:
1. Auditing - Verifies compliance with the norms & practices specified in the QA policy. If any
deviations are found, they are set right.
2. Inspection - Ensures that the deviations are documented, reported & put into the QA
database for guidance.
3. Technical Review - The design & architecture are reviewed to ensure that standards are met
& customer quality is assured.
4. Coordination & Control - Implements change management.
5. Collection & Analysis - Collects data on the various observations made in auditing, inspections &
reviews to build the QA database & to improve the standards.
Hence, in short, SQA assures s/w quality, reliability, availability & safety.

SQA Plan

The SQA plan lays down the steps towards the quality goals of the organization.
A standard for the SQA plan is prescribed in IEEE 94.



The documentation of SQA Plan includes:
1. Project Plan
2. Models of data, classes, objects, processes, design, architecture
3. S/w requirement specification
4. Test plan for testing SRS
5. User help documentation, manuals, online help etc.
6. Reviews & audits.

Testing methods for SQA

S/w quality assurance is achieved through designing test cases & using them as a control measure
to ensure the desired quality.
Quality assurance to the customer is possible when it is backed by strategy of testing.
The strategy is built upon the following test platforms:
Test | That which is being tested | Focus
Unit Test | Process | Code
Module Test | Function | Design
Application Test | Business Operations | Design
Integration Test | Multiple Business Operations | Interface
Customer Acceptance | RDD | Major Deliverables

S/w Quality Reviews

Reviews are widely used to validate the quality of a process or product.
A group of people examine part or all of the s/w process, system or its documents to discover
problems.
The conclusions of the review are formally recorded & passed to the authorized people for
correcting the discovered problems.
A quality review includes the technical analysis of the product, component or documentation to
find mismatches between the specification & the component design, code or documentation &
to ensure that defined quality standards have been followed.
The main aim of the review team is to detect errors & inconsistencies & point them out to the
design or document authors.



Documents like process models, process standards & user manuals may all be reviewed.
The review team must contain 3-4 people who will be principal reviewers.
One member will be senior designer who can take the responsibility for making significant
technical decisions.
The principal reviewers may invite other project members to contribute to the review.
The review team may circulate the document being reviewed & ask for written comments from
other project members.
Documents to be reviewed must be distributed well in advance of the review to allow reviewers
time to read & understand them.
The review procedure should be relatively short.
One team member should chair the review & other should formally record all review decisions.
During review, the chair person is responsible for ensuring that all written comments are
considered.
On completion of review, the actions are noted & forms recording the comments & actions are
signed by designer & review chair.
They are filed as part of the formal project documentation.
The chair person is responsible for any minor changes to it.
If major changes are necessary, then a follow-on review may be arranged.


Formal Technical Review (FTR):

FTR is a s/w quality control activity performed by s/w engineers.
It is also called a
1. Walkthrough
2. Inspection
Objective of FTR :-
1. To uncover errors in function, logic or implementation of the s/w
2. To verify that the s/w under review meets its requirements
3. To ensure that the s/w has been represented according to predefined standards.
4. To achieve s/w that is developed in a uniform manner.
5. To make projects more manageable.
Guidelines for FTR
1. Review the product, not the producer
Conduct the review so that it leaves all participants with a feeling of
accomplishment.
Errors should be pointed out gently.
The tone of the meeting should be loose & constructive.
2. Set an agenda & maintain it
An FTR must be kept on track
3. Limit Debate
When an issue is raised by reviewers, there may not be universal agreement on
its impact.



Rather than spending time debating the question, the issue should be recorded for
further discussion off-line.
4. Identify problem area, but don't attempt to solve every problem noted
A review is not a problem solving session.
5. Take written notes
Make a note on wall board, so that priorities can be assessed by other reviewers as
information is recorded.
6. Limit the number of participants & insist upon advance preparation
2 heads are better than 1, but 14 are not necessarily better than 4. Keep the number to
necessary minimum.
All the team members should prepare in advance.
7. Prepare a checklist for every product that is likely to be reviewed
A checklist helps the review leader to structure the FTR meeting & helps each reviewer to
focus on important issues.
8. Allocate resources & schedule time for FTR's :-
For reviews to be effective they should be scheduled as a task during the s/w process.
9. Conduct meaningful trainings for all reviewers :-
All the review team members should receive formal training for effectiveness in reviews.
The training should stress both process - related issues & human psychological side of
reviews.
10. Review your early reviews :-
Many errors can be uncovered in the review process itself. Hence the very first product
to be reviewed should be the review guidelines themselves.

During FTR, a reviewer actively records all issues that have been raised.
A FTR summary report is submitted by the team.
The review summary report answers the following questions:
What was reviewed?
Who reviewed it?
What were the findings & conclusions?
This report is a single-page form & becomes part of the project's historical data.

Overview of ISO 9000:

ISO (International Standards Organization) is a consortium of 63 countries established to
formulate and foster standardization. It published its 9000 series of standards in 1987.
The ISO 9000 standard specifies guidelines for maintaining a quality system.
ISO standard mainly addresses operational & organizational aspects like responsibilities,
reporting etc.
It is a set of guidelines for production process & is not directly concerned with the product itself.
It says that, if proper process is followed for production then good quality products are bound to
follow automatically.
ISO 9000 has 3 series of standards :
ISO 9001



These standards apply to organizations engaged in design, development,
production & servicing of goods.
This standard is applicable to many s/w organizations.
ISO 9002
These standards are applied to those organizations which do not design
products but are only involved in production.
E.g. - industries like steel & car manufacturing who buy the product & plant
design from external sources & are involved in only manufacturing those
products.
Not applicable for s/w development organizations.
ISO 9003
This is applied to organizations involved in installation & testing of the products.


ISO 9000 for s/w industry

ISO released a separate document called ISO 9000 part 3 in 1991 to interpret the ISO
standards for the s/w industry.
The benefits of ISO 9000 certification are:
Confidence of customer on an organization increases when an organization qualifies for
ISO 9001 certification. This is true especially in international market.
ISO 9000 requires a well documented s/w production process. Hence, this
documentation can contribute to the development of a high quality product.
ISO 9000 makes the development process focused, efficient & cost-effective.
ISO 9000 points out the weak points of the organization & recommends a remedial plan.
ISO certification can be used for corporate advertisement but cannot be used for advertising its
product.
This is because ISO 9000 certificate is issued for organizations process & does not apply to any
specific product of the organization.
In India, ISO 9000 certification is offered by :
BIS (Bureau of Indian Standards )
STQC (Standardization, Testing & Quality Control)
IRQS (Indian Register Quality System)

SEI Capability Maturity Model (SEI CMM):

This was proposed by the Software Engineering Institute (SEI) of Carnegie Mellon University, USA.
It was originally developed to assist the U.S. Department of Defense (DoD) in s/w acquisition.
Later, it helped many organizations to improve the quality of the s/w they developed & hence
the adoption of this model had significant business benefits. SEI CMM can be used in 2 ways:



1. Capability evaluation - provides a way to assess the s/w process capability of an
organization.
2. Software process assessment - used by an organization with the objective of improving
its process capability. This type of assessment is for pure internal use.
SEI CMM classifies s/w development industries into the following 5 maturity levels.

Level 1 : Initial
A s/w organization at this level is characterized by ad hoc activities.
Very few or no processes are defined & followed
Since s/w production process are not defined, different engineers follow their own
process & the resulting efforts become chaotic
When engineers leave, the successors have great difficulty in understanding the process
followed & getting the work completed.
Since formal project management practices are not followed, shortcuts are tried which
leads to low quality.
Level 2 : Repeatable
At this level, Basic project management processes are established to track cost,
schedule, and functionality
Size & cost estimation technique such as function point analysis, COCOMO etc are used.
The necessary process discipline is in place to repeat earlier successes on projects with
similar applications.
Level 3 : Defined
The software process for both management and engineering activities is documented,
standardized, and integrated into an organization wide software process
All projects use a documented and approved version of the organization's process for
developing and supporting software.
There is a common organization wide understanding of activities, roles &
responsibilities.
ISO 9000 aims at achieving this level.
Level 4 : Managed
At this level, the focus is on s/w metrics.
2 types of metrics are collected
Product metrics - measure the characteristics of the product being developed,
like size, reliability, etc.
Process metrics - reflect the effectiveness of the process being used, like
productivity, average no. of defects found, etc.
Level 5 : Optimizing
Continuous process improvement is enabled by quantitative feedback from the process
and from testing innovative ideas and technologies: it is achieved both by carefully
analyzing the quantitative feedback from process measurements & by applying innovative
ideas & technologies.






CMM Level | Focus | Key Process Areas
1. Initial | Competent people | -
2. Repeatable | Project management | S/w project planning, s/w configuration management
3. Defined | Definition of processes | Process definition, training program, peer reviews
4. Managed | Product & process quality | Quantitative process metrics, s/w quality management
5. Optimizing | Continuous process improvement | Defect prevention, process change management,
technology change management

McCall's Quality Factors:
The quality factors proposed by McCall are of 2 types:
Factors that can be directly measured (e.g. defects uncovered during testing)
Factors that can be measured only indirectly ( e.g. usability & maintainability)
3 important aspects are discussed regarding quality factors:
Operation - factors measured are correctness, reliability, usability, integrity, efficiency.
Adaptability - factors measured are portability, reusability, interoperability.
Changeability - factors measured are maintainability, flexibility, testability.






1. Correctness - Extent to which the s/w meets the customer's quality goals & expectations.
2. Reliability - The extent to which the s/w functions with precision without any failure
(trustworthiness & dependability).
3. Efficiency - Cost of the resources needed to run the s/w.
4. Integrity - Extent to which the s/w is protected from unauthorized access.
5. Usability - Effort required to learn, operate, prepare input, and interpret output of a program.
6. Maintainability - Effort required to locate and fix an error in a program.
7. Flexibility - Effort required to modify an operational program.
8. Testability - Effort required to test a program to ensure that it performs its intended function.
9. Portability - Effort required to transfer the program from one hardware and/or software system
environment to another.
10. Reusability - Extent to which a program (or parts of a program) can be reused in other
applications - related to the packaging and scope of the functions that the program performs.
11. Interoperability - Effort required to couple one system to another.

FURPS Quality Factors:
Functionality - is assessed by evaluating the feature set and capabilities of the program, the
generality of the functions that are delivered, and the security of the overall system.

Usability - is assessed by considering human factors, overall aesthetics, consistency, and
documentation.

Reliability - is evaluated by measuring the frequency and severity of failure, the accuracy of
output results, the mean-time-to-failure (MTTF), the ability to recover from failure, and the
predictability of the program.

Performance - is measured by processing speed, response time, resource consumption,
throughput, and efficiency

Supportability - combines the ability to extend the program (extensibility), adaptability, and
serviceability (these three attributes represent a more common term, maintainability), in
addition to testability, compatibility, configurability, the ease with which a system can be installed,
and the ease with which problems can be localized.



UNIT 8
Software Reliability and Maintenance

Software Reliability:
Reliability of s/w products denotes their trustworthiness or dependability.
It is defined as : the probability of the product working correctly over a given period of time.
Reliability of the system improves if the number of defects in it is reduced.
However, there is no simple relationship between observed system reliability & the number of
latent defects in the system.
Removing defects from the part of the s/w which is rarely executed makes little
difference to the perceived reliability of the product.
Reliability of the product also depends on the exact location of the errors.
It also depends upon how the product is used, i.e. on its execution profile.
If we enter input data such that only the correctly implemented functions are executed,
none of the errors will be exposed & the reliability of the product will be high.
On the other hand, if we select i/p data such that only those functions which contain
errors are invoked, the reliability of the system will be very low.
Different users use a s/w product in different ways; defects which show up for one user may
not show up for another user.
Therefore, the reliability of a s/w product is observer-dependent & cannot be determined
absolutely.
E.g. Library Automation s/w:
Functions that library members use (issue book, search book) are error free.
Functions that the librarian uses (create member, delete member) have many bugs.

Hardware vs. Software Reliability

(Figure: comparison of a Hardware Product and a Software Product.)

Reliability Metrics:
Metrics or techniques used to estimate the quantitative reliability of a s/w product are called
reliability metrics.
A good reliability metric should be observer-independent, so that different people can agree on
the degree of reliability that the system has.
A precise technique should measure the reliability - one which gives the same value
irrespective of who is carrying out the measurement.

1. Rate of Occurrence of Failure (ROCOF)
It measures the frequency of occurrence of unexpected behavior (i.e. failure).



It can be obtained by observing the behavior of s/w product in operation over a
specified time interval & then calculating the total number of failures during this
interval.
2. Mean Time to Failure (MTTF)
It is the average time between two successive failures, observed over a large number of
failures.
Let the failures occur at the time instants t1, t2, ..., tn. Then MTTF can be calculated as

    MTTF = [ (t2 - t1) + (t3 - t2) + ... + (tn - t(n-1)) ] / (n - 1)

i.e. the sum of (t(i+1) - ti) for i = 1 to n-1, divided by (n - 1).
- Only run-time is considered in the time measurements (boot time etc. is not considered).
(A computational sketch of this and the following metrics is given after this list.)
3. Mean Time to Repair (MTTR)
Once failure occurs, some time is required to fix the error.
MTTR measures the average time it takes to track the errors causing the failure & then
to fix them.


4. Mean Time Between Failures (MTBF)
MTBF = MTTF + MTTR
E.g. an MTBF of 300 hrs. indicates that once a failure occurs, the next failure is expected
to occur only after 300 hrs.
5. Probability of Failure on Demand (POFOD)
It measures the likelihood of the system failing when a service request is made.
E.g.- a POFOD of 0.001 means that 1 out of every 1000 service request would result in
failure.
6. Availability
It is the likelihood of the system made available for use over a given period of time.
This metric considers:
1. The no. of failures occurring during a time interval &
2. The repair time of the system when the failure occurs.
This metric is used for systems like telecommunication systems or operating systems, which are
never supposed to be down.
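
Here is the promised sketch of these metrics, computed from a failure log. The sample numbers are
invented, and the availability formula used is the common steady-state form MTTF / (MTTF + MTTR),
stated here as an assumption rather than taken from the text.

    failure_times = [100.0, 220.0, 370.0, 430.0]   # run-time instants of successive failures (hours)
    repair_durations = [4.0, 6.0, 5.0]             # time taken to fix each failure (hours)

    n = len(failure_times)
    mttf = sum(failure_times[i + 1] - failure_times[i] for i in range(n - 1)) / (n - 1)
    mttr = sum(repair_durations) / len(repair_durations)
    mtbf = mttf + mttr                             # MTBF = MTTF + MTTR

    rocof = n / failure_times[-1]                  # failures per hour of observed run-time
    pofod = 4 / 4000                               # e.g. 4 failing requests out of 4000 -> 0.001
    availability = mttf / (mttf + mttr)            # steady-state availability (assumed form)

    print(mttf, mttr, mtbf, rocof, pofod, availability)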
Classification of Failures
1. Transient
Occur only for certain input values while invoking a function of the system.
2. Permanent
Occur for all input values while invoking a function of a system.
3. Recoverable
When recoverable failures occur, the system recovers with or without operator
intervention.
4. Unrecoverable
The system may need to be restarted.
5. Cosmetic
These failures cause minor irritations, & do not lead to incorrect results



E.g. a mouse button needs to be clicked twice to invoke a specific function.

Reliability Growth Modelling :
It is a mathematical model of how the s/w reliability improves as errors are detected & repaired.
This model tells us how the system reliability changes over time during the testing process.
As the system failures occur, the underlying faults causing these failures are repaired so that the
reliability of the system will improve during system testing & debugging
Hence, Reliability growth modelling is used to determine when to stop testing to attain a given
reliability level.

1. Jelinski & Moranda Model
This is a simple step function model where the reliability increases by a constant
increment each time a fault is discovered & repaired.
This model assumes that s/w repairs are always correctly implemented so that the no.
of s/w faults & associated failures decreases in each new version of the system.
As repairs are made, the rate of occurrence of s/w failure (ROCOF) should therefore
decrease as shown in following figure.
Note that the time periods on the horizontal axis reflect the time between releases
of the system for testing, so they are normally of unequal length.
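
In its usual formulation the model has two parameters: N, the initial number of faults, and phi, the
equal contribution of each fault to the failure rate; neither the symbols nor the sample values below
come from the text. A minimal sketch of the step behaviour:

    def jm_rocof(initial_faults, phi, faults_fixed):
        # Each repaired fault removes an equal contribution phi from the failure
        # rate, so ROCOF falls by the same constant step after every repair.
        return phi * (initial_faults - faults_fixed)

    N, phi = 50, 0.02
    for fixed in range(4):
        print(fixed, "repairs -> ROCOF =", round(jm_rocof(N, phi, fixed), 2))
    # 0 repairs -> 1.0, 1 -> 0.98, 2 -> 0.96, 3 -> 0.94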

Drawbacks:
1. In practice, the reliability of the system under this model can at times worsen rather than
improve.
If s/w faults detected during the testing phase are corrected by changing the code,
the changes may introduce new faults into the system.
2. The growth of reliability is not constant in each time increment
Repairing the most common faults can contribute more to the reliability growth
than repairing the faults which occur occasionally.





2. Littlewood & Verrall's Model
This model allows for negative reliability growth to reflect the fact that when a repair is
carried out, it may introduce additional errors.
This model also says that as faults are repaired, the average improvement in reliability
per repair decreases.
Because, in the testing process, the most probable faults are recognized first; repairing these
contributes to the reliability growth.
The following fig. shows the Littlewood & Verrall model: each repair does not result in an equal
amount of reliability improvement.











Software Maintenance Cost

Maintenance costs vary widely from one application domain to another.
For business applications, studies showed that the cost incurred on maintenance is almost equal
to the development cost.
For real-time systems, it is observed that the maintenance cost is 4 times the development cost.
Experiments also proved that investing effort in developmental activities (design, code, test,
implement) drastically reduces the cost of maintenance.
Good s/w engineering techniques, like precise specification, loose coupling & configuration
management, reduce maintenance cost.



The above fig. shows how the overall lifetime cost may decrease as more effort is spent during
system development to produce a maintainable system.
Maintenance cost depends on many technical & non-technical factors.
Technical Factors:
1. Module independence - the ability to modify one component of the system without
affecting other system components.
2. Programming language - programs written in high-level languages are much easier
to maintain than those written in low-level languages.
3. Programming style - it affects understandability; a simple style makes a program easier to
modify & maintain.
4. Program validation & testing - the more time spent on validation & testing, the fewer the
errors in the program; hence corrective maintenance costs are minimized.
5. Quality of program documentation - program maintenance costs tend to be lower
for well-documented systems.



6. Configuration management techniques used - keeping track of all system documents
& ensuring that they are kept consistent. Effective configuration management controls
the maintenance cost.

Non-technical factors :
1. Application domain - if the application domain is clearly defined & well understood, the
system requirements can be identified completely. If the application is in a new domain, it
is likely that the requirements will be modified frequently; hence maintenance is
tedious.
2. Staff stability - if the system developers are also the maintenance staff, then
maintenance is quite easy; practically, this may not be possible. The more stable the
staff, the lower the maintenance cost & effort.
3. Age of the program - the older the program, the higher its maintenance cost.
4. Dependency of the program on its external environment - if the program depends on its
external environment, it must be modified to adapt to changes in that environment.
E.g. changes in the taxation system might require payroll, accounting & stock control
programs to be modified.
5. H/w stability - the more frequent the h/w configuration changes, the more the
maintenance.


Software Maintenance Cost Estimation:

Maintenance costs are affected by many factors; hence it is very difficult to use a systematic
approach to estimate maintenance cost.
Estimates prepared from past historical data are reasonably accurate when the past data are
consistent with the present working system.
The COCOMO cost estimation technique can be extended to prepare cost estimates for
maintenance.
Boehm established a formula for estimating maintenance cost.
ACT (Annual Change Traffic) is the fraction of the s/w source instructions which change during a
year, either by addition or by modification.
AME = ACT * SDT
AME - annual maintenance effort
SDT - s/w development effort (time), e.g. in person-months.
E.g., a s/w project required 236 person-months of development effort, & it was estimated that 15% of the code would be modified in a year.
SDE = 236 p.m.
ACT = 0.15
Therefore, AME can be calculated as:
AME = 0.15 * 236 = 35.4 p.m.
Boehm suggested that this rough estimate of the maintenance cost can be used as an initial estimate (Ei), & the intermediate COCOMO model can then be used to prepare a more accurate estimate using the cost drivers.
E (final estimate) = Ei * EAF
Ei = initial estimate
EAF = effort adjustment factor
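A short sketch of this calculation, reusing the worked example above (the cost-driver names are real COCOMO drivers, but the multiplier values chosen here are purely hypothetical):

# Boehm's maintenance-effort estimate for the worked example above;
# the cost-driver multipliers are hypothetical illustrative values.

def annual_maintenance_effort(act: float, sde: float) -> float:
    # AME = ACT * SDE, with SDE in person-months
    return act * sde

ei = annual_maintenance_effort(act=0.15, sde=236)   # 35.4 p.m.

# Intermediate COCOMO refinement: EAF is the product of the selected
# cost-driver multipliers.
cost_drivers = {"RELY": 1.15, "MODP": 0.91, "TOOL": 0.91}
eaf = 1.0
for multiplier in cost_drivers.values():
    eaf *= multiplier

e_final = ei * eaf
print(f"Ei = {ei:.1f} p.m., EAF = {eaf:.3f}, final E = {e_final:.1f} p.m.")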
The COCOMO model used here suffers from the following drawbacks:
- In many cases systems are completely new & there is no historical basis for estimating ACT, so guesses are unreliable.
- It is not clear which attributes should be used as maintenance attributes.
Re-engineering & Reverse Engineering:
Many old s/w systems used in business, government & other organizations require a lot of maintenance.
In some businesses, it has been estimated that 80% of s/w expenditure is consumed by system maintenance.
Old systems which must still be maintained are sometimes called legacy systems.
Many legacy systems were developed before S.E. techniques were widely used; they are poorly structured & their documentation may be either out-of-date or non-existent.
Most applications are made up of a number of different programs which, in some way, share data. They may use a database system or may rely on separate files for data storage.
Different programs were developed to use different files, & each of these files can have its own format.
Information may be duplicated across different files because the information is tightly bound to each program's data structures.
Such information is not accessible as individual data items, so it cannot be shared by different programs.
Software Re-engineering:
It is concerned with taking existing legacy systems & re-implementing them to make them more maintainable.
Re-engineering means re-structuring or re-writing part or all of a legacy system without changing its functionality.
It is applicable where some, but not all, sub-systems of a larger system require frequent maintenance.
Re-engineering involves adding effort to make the system easier to maintain; the system may be re-structured & re-documented.
Advantages of re-engineering:
Reduced risk - there is a high risk in new software development; there may be development problems, staffing problems & specification problems.
Reduced cost - the cost of re-engineering is often significantly less than the cost of developing new software.
Re-engineering process activities:
1. Source code translation - convert the code to a new language.
2. Reverse engineering - analyze the program to understand it.
3. Program structure improvement - restructure the code automatically for understandability.
4. Program modularization - reorganize the program structure.
5. Data re-engineering - clean up & restructure the system data.
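As a tiny illustration of activity 3 (hypothetical code, not from the text), a flag-driven loop can be restructured into an equivalent but far more readable form without changing its functionality:

# Hypothetical before/after for program structure improvement: both
# functions return the index of the first negative item (or None), but
# the restructured version is much easier to understand & maintain.

def find_negative_unstructured(items):
    i, found, result = 0, False, None
    while True:                       # tangled, flag-controlled loop
        if i >= len(items):
            break
        if not found and items[i] < 0:
            found = True
            result = i
        i += 1
    return result

def find_negative_restructured(items):
    for i, value in enumerate(items):
        if value < 0:
            return i
    return None

assert find_negative_unstructured([3, 1, -4, 5]) == 2
assert find_negative_restructured([3, 1, -4, 5]) == 2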
Reverse Engineering:
It is a part of the re-engineering process.
It is the process of studying an old s/w system to recover its design & specification.
S/w source code is usually used as the input to the reverse engineering process.
The reverse engineering steps are shown in the fig.
Abstraction involves studying the source code of the old system to understand the data, the internal data structures & database structures used, the processing & the user interfaces, & preparing an initial specification for re-engineering.
Understanding data:
At the program level, the internal data structures must be reviewed.
At the system level, global data structures (files, databases) are re-engineered to accommodate new database management paradigms (e.g., the move from flat files to a relational or object-oriented database system).
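A minimal sketch of the kind of data migration described above, moving fixed-width flat-file records into a relational table (the record layout, field names & schema are all invented for illustration):

# Hypothetical flat-file records: id (4 chars), name (10 chars),
# balance (5 chars). Each program once parsed this layout itself.
import sqlite3

FLAT_RECORDS = [
    "0001Alice     00420",
    "0002Bob       00015",
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE account (id INTEGER PRIMARY KEY, name TEXT, balance INTEGER)"
)
for rec in FLAT_RECORDS:
    conn.execute(
        "INSERT INTO account VALUES (?, ?, ?)",
        (int(rec[0:4]), rec[4:14].strip(), int(rec[14:19])),
    )

# The data is now accessible as individual items & shareable via SQL.
for row in conn.execute("SELECT * FROM account ORDER BY id"):
    print(row)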
Understanding processing:
This begins with an attempt to understand & then extract the procedural abstractions represented by the source code.
To understand the procedural abstractions, the code is analyzed at varying levels of abstraction.
Engineers look for sections of code that represent generic procedural patterns.