
M.Sc.

Information Technology
(DISTANCE MODE)

DSE 112 Software Engineering

I SEMESTER COURSE MATERIAL

Centre for Distance Education


Anna University Chennai, Chennai 600 025

Author

Dr. G.V. Uma


Assistant Professor
Department of Computer Science and Engineering
Anna University Chennai, Chennai 600 025

Reviewer

Dr. K. M. Mehata
Professor
Department of Computer Science and Engineering
Anna University Chennai, Chennai 600 025

Editorial Board

Dr. C. Chellappan
Professor
Department of Computer Science and Engineering
Anna University Chennai, Chennai 600 025

Dr. T.V. Geetha


Professor
Department of Computer Science and Engineering
Anna University Chennai, Chennai 600 025

Dr. H. Peeru Mohamed


Professor
Department of Management Studies
Anna University Chennai, Chennai 600 025

Copyrights Reserved (For Private Circulation only)

ACKNOWLEDGEMENT

The author, Dr. G.V. Uma, Assistant Professor, Department of Computer Science & Engineering, College of Engineering, Anna University, Chennai 600 025, extends heartfelt thanks and gratitude to the Director, Distance Education, Anna University, Chennai, and the Deputy Director, M.Sc. Software Engineering, for the opportunity given to prepare the course material for Software Engineering.

The author has drawn inputs from several sources for the preparation of this course material, to meet the requirements of the syllabus. The author gratefully acknowledges the following sources:

1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, 6th edition, McGraw-Hill International, 2005.
2. www.oodesign.com
3. www.sqa.net
4. www.softwareqatest.com
5. www.sce.carleton.ca/faculty/chinneck/po/Chapter11.pdf
6. www.cs.umd.edu/~vibha
7. Pankaj Jalote, An Integrated Approach to Software Engineering, 2nd edition, Springer-Verlag, 1997.
8. Ian Sommerville, Software Engineering, 6th edition, Pearson Education, 2000.

Dr. G.V. UMA
Assistant Professor
Department of Computer Science & Engineering
College of Engineering
Anna University, Chennai 600 025.

DSE 112 SOFTWARE ENGINEERING

UNIT I
Introduction: The Software Problem - Software Engineering Problem - Software Engineering Approach - Summary. Software Processes: Characteristics of a Software Process - Software Development Process - Project Management Process - Software Configuration Management Process - Process Management Process - Summary.

UNIT II
Software Requirements Analysis and Specification: Software Requirements - Problem Analysis - Requirements Specification - Validation - Metrics - Summary.

UNIT III
Planning a Software Project: Cost Estimation - Project Scheduling - Staffing and Personnel Planning - Software Configuration Management Plans - Quality Assurance Plans - Project Monitoring Plans - Risk Management - Summary.

UNIT IV
Function-oriented Design: Design Principles - Module-Level Concepts - Design Notation and Specification - Structured Design Methodology - Verification - Metrics - Summary. Detailed Design: Module Specifications - Detailed Design - Verification - Metrics - Summary.

UNIT V
Coding: Programming Practice - Top-down and Bottom-up - Structured Programming - Information Hiding - Programming Style - Internal Documentation - Verification - Code Reading - Static Analysis - Symbolic Execution - Code Inspection or Reviews - Unit Testing - Metrics - Summary. Testing Fundamentals - Functional Testing versus Structural Testing - Metrics. Reliability Estimation: Basic Concepts and Definitions - Summary.

TEXT BOOK
1. Pankaj Jalote, An Integrated Approach to Software Engineering, Narosa Publishing House, Delhi, 2000.

REFERENCES
1. Pressman R.S., Software Engineering, Tata McGraw Hill Pub. Co., Delhi, 2000.
2. Sommerville, Software Engineering, Pearson Education, Delhi, 2000.

DSE 112 SOFTWARE ENGINEERING

UNIT I
1.1 INTRODUCTION
1.2 LEARNING OBJECTIVES
1.3 BASIC DEFINITIONS
1.4 CHARACTERISTICS OF SOFTWARE
1.5 ISSUES WITH SOFTWARE PROJECTS
1.6 SOFTWARE ENGINEERING PRINCIPLES
1.7 SOFTWARE ENGINEERING APPROACHES
1.8 SOFTWARE PROCESS
1.9 SOFTWARE DEVELOPMENT PROCESS
1.10 PROJECT MANAGEMENT PROCESS
1.11 SOFTWARE CONFIGURATION MANAGEMENT PROCESS
1.12 CAPABILITY MATURITY MODEL (CMM)

UNIT II
2.1 INTRODUCTION
2.2 LEARNING OBJECTIVES
2.3 REQUIREMENTS ENGINEERING PROCESS
2.4 SOFTWARE REQUIREMENTS PROBLEMS
2.5 THE REQUIREMENTS SPIRAL
2.6 TECHNIQUES FOR ELICITING REQUIREMENTS
2.7 SOFTWARE REQUIREMENTS SPECIFICATION (SRS)
2.8 SOFTWARE REQUIREMENTS SPECIFICATION
2.9 SOFTWARE REQUIREMENTS VALIDATION
2.10 REQUIREMENTS METRICS

UNIT III
3 INTRODUCTION
3.1 LEARNING OBJECTIVES
3.2 PLANNING A SOFTWARE PROJECT
3.3 COST ESTIMATION
3.4 PROJECT SCHEDULING
3.5 STAFFING AND PERSONNEL PLANNING
3.6 SOFTWARE CONFIGURATION MANAGEMENT
3.7 QUALITY ASSURANCE PLAN
3.8 RISK MANAGEMENT

UNIT IV
4 INTRODUCTION
4.1 LEARNING OBJECTIVES
4.2 FUNCTION-ORIENTED DESIGN
4.3 DESIGN PRINCIPLES
4.4 MODULE LEVEL CONCEPTS
4.5 STRUCTURED DESIGN
4.6 STRUCTURED DESIGN METHODOLOGY
4.7 DETAILED DESIGN
4.8 MODULE SPECIFICATIONS
4.9 DESIGN VERIFICATION
4.10 DESIGN METRICS

UNIT V
5 INTRODUCTION
5.1 LEARNING OBJECTIVES
5.2 CODING
5.3 PROGRAMMING PRACTICES
5.4 TOP-DOWN AND BOTTOM-UP
5.5 STRUCTURED PROGRAMMING
5.6 INFORMATION HIDING
5.7 PROGRAMMING STYLE
5.8 INTERNAL DOCUMENTATION
5.9 CODE VERIFICATION
5.10 CODE READING
5.11 STATIC ANALYSIS
5.12 SYMBOLIC EXECUTION
5.13 CODE REVIEWS AND WALKTHROUGHS
5.14 UNIT TESTING
5.15 CODING METRICS
5.16 INTEGRATION TESTING
5.17 TESTING FUNDAMENTALS
5.18 FUNCTIONAL VS. STRUCTURAL TESTING
5.19 SOFTWARE RELIABILITY ESTIMATION - BASIC CONCEPTS AND DEFINITIONS
5.20 SOFTWARE RELIABILITY ESTIMATION


UNIT I
1.1 INTRODUCTION
Software has become the key element in the evolution of computer-based systems and products and one of the most important technologies on the world stage. Over the past several years, software has evolved from a specialized problem-solving and information-analysis tool into an industry in itself. Yet we still have many problems in developing high-quality software on time and within budget. Software - programs, data and documents - addresses a wide array of technology and application areas, yet all software evolves according to a set of rules that remain the same. The intent of software engineering is to provide a framework for building high-quality software. In order to study software engineering in detail, we first need to be clear about some of its basic definitions, such as software, engineering, software engineering and the software lifecycle.

1.2 LEARNING OBJECTIVES

At the end of this unit, the learner will understand:
1. The various terminologies in software engineering.
2. The characteristics of a software project.
3. The issues in a software project.
4. Software engineering principles.
5. The approaches to various software engineering paradigms.
6. What a software process is.
7. Traditional software life cycle models.
8. Various software engineering processes such as the development process, the project management process and the software configuration management process.



1.3 BASIC DEFINITIONS


1.3.1 Software

Software is a set of instructions that when executed provide the desired features, function and performance. It also includes the data structures that enable the programs to adequately manipulate information, and the documents that describe the operation and use of the programs. A set of instructions that causes a computer to perform one or more tasks is often called a program or, if the set is particularly large and complex, a system. Computers cannot do any useful work without instructions from software; thus a combination of software and hardware (the computer) is necessary to do any computerized work. A program must tell the computer each of a set of tasks to perform, in a framework of logic, such that the computer knows exactly what to do and when to do it.

1.3.2 Engineering

Engineering is the application of scientific and mathematical principles to practical ends such as the design, manufacture, and operation of efficient and economical structures, machines, processes, and systems.

1.3.3 Software Engineering

The IEEE definition of software engineering is as follows: it is the application of a systematic, disciplined and quantifiable approach to the development, operation and maintenance of software; that is, the application of engineering to software. Another definition of software engineering is the establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently on real machines.

1.3.4 Software Lifecycle

The software lifecycle is the set of activities, and their relationships to each other, that support the development process. It can be better understood from Figure 1.1 shown below. The typical activities in the software lifecycle are:
1. Feasibility Study

Anna University Chennai

DSE 112

SOFTWARE ENGINEERING

2. Requirements Elicitation and Analysis
3. Software Design
4. Implementation
5. Testing
6. Integration
7. Installation and Maintenance

Figure 1.1: Software Development Life Cycle of a software project

1.4 CHARACTERISTICS OF SOFTWARE


Software has some characteristics that make it different from other, more traditional engineering fields such as mechanical and civil engineering.

1.4.1 Software is intangible

Software is an entity that is intangible, which means we cannot touch and feel a software product. Software is developed or engineered; it is not manufactured in the classical sense, the way a road is laid or a bridge or dam is built. Though some similarities exist between software development and hardware manufacturing, the approaches used to build each are different. High quality can be achieved in both through good design, but in hardware manufacturing there is scope for more errors to be made during the manufacturing process.

1.4.2 Software does not wear out

Another key characteristic of software, quite different from hardware, is that software does not wear out whereas hardware does: hardware failure rates follow the familiar 'bathtub' curve over time, whereas software failures arise from design faults rather than physical wear.


1.4.3 Software is flexible to change

The requirements of a software project change frequently, but these changes can usually be accommodated because software is very flexible. The software project is complete only when we have written code that performs correctly and the related documents required are also ready.

1.5 ISSUES WITH SOFTWARE PROJECTS


1.5.1 Unclear and missing requirements

Customers rarely state their requirements clearly. In fact, the customer will often be unaware of what exactly he wants from the system. The requirements, or rather the problem statement, will not be very clear; it may be ambiguous and misleading. It is the duty of the requirements elicitor to take the necessary actions, and use all possible mechanisms, to obtain the correct set of requirements. The requirements thus obtained should be clear, complete, unambiguous, consistent, testable, verifiable and traceable.

1.5.2 Requirements keep changing

The customer may want to add certain features to, or delete some from, the problem statement. He keeps changing his mind about the product, and hence there is ample chance that the requirements keep changing. We need to maintain the consistency, completeness and traceability of all the requirements.

1.5.3 There is always a constant need to deliver more at any given point of time

In any software project, the need for more time is inevitable. The developers are expected to deliver more at any given point of time. The work pressure is thus always high at every phase of the development of the software project.

1.5.4 The quality of the software can be measured only after the whole system is built and starts functioning

Unlike other engineering fields, the quality of the product, which is software, can be assessed only after it has been completely developed.


1.5.5 Choosing the correct life cycle model for the software project is difficult

Any software project needs to follow a particular life cycle model for its development to proceed in an organized manner. There are many models, such as the Waterfall model, the Iterative model, the Spiral model and the Rapid Prototyping model, each with its own advantages and disadvantages. Hence, choosing the appropriate life cycle model for the development of the software is quite a tough task and needs much attention.

1.5.6 Security is a main focus area in software engineering, and it has many loopholes

Software security is one of the areas that need much attention. As competition grows in the software industry, so does the threat to the information it handles. Hence, security measures should be firmly in place in order to make sure that the information is protected and correct.

1.5.7 Self-inflicted vulnerabilities

Software engineering must consider system-level information assurance issues such as:
1. Possible fail-stop mechanisms and procedures
2. Fallback, contingency solutions for both direct and secondary effects of failure modes
3. Usage scenarios that are frequently not limited a priori
4. The most important aspect of a software-based system may not be intrinsic, but may lie in modeling and analysis of its interactions with external factors and overall mission assurance.

1.5.8 The right standards

Standards are needed to measure the effectiveness of the software project. Many standards are in vogue; some of the highly regarded ones are those of the ISO and the IEEE, and some organizations follow their own standards. Whatever the case may be, any software that gets developed needs to be measured against the standards in order to verify that it meets their quality requirements. However, standards themselves raise several issues:
1. Too heavy, inflexible?
2. Too imprecise?


3. Large-scale variability
4. Types of projects
5. Defect consequences
6. Scale (in terms of the number of modules, functions, etc.)
7. Stability of requirements
8. Acceptable time to IOC

This makes it very likely that only a selected subset of standards applies to any given project.

1.6 SOFTWARE ENGINEERING PRINCIPLES


There are certain principles in software engineering that need to be followed in order to develop a quality, reliable product. Figure 1.2 below gives a pictorial representation conveying that the whole of software development rests on the software engineering principles.
(The figure shows four layers built one upon another: Principles at the base, then Methods and techniques, then Methodologies, with Tools at the top.)

Figure 1.2: Overview of Software Engineering

The use of software engineering principles during software development helps to develop the software in a well organized manner, leaves room for incorporating changes that might arise at any time during the course of development, and maximizes the quality of the software developed. The following are the important principles of software engineering.

1.6.1 Rigor and formality

a. Software engineering is a creative design activity, BUT
b. It must be practiced systematically




c. Rigor is a necessary complement to creativity that increases our confidence in our developments
d. Formality is rigor at the highest degree: the software process is driven and evaluated by mathematical laws
e. Examples: mathematical (formal) analysis of program correctness; systematic (rigorous) test data derivation; process: rigorous documentation of development steps helps project management and assessment of timeliness

1.6.2 Separation of concerns

Most software projects involve a great deal of complexity. Many projects have too much functionality, and hence the complexity increases. Highly complex projects can be better approached using separation of concerns: we can then concentrate on one particular module at a time and reduce the complexity. The following points give an idea of the need for separation of concerns.
a. To dominate complexity, separate the issues and concentrate on one at a time
b. Divide and conquer
c. Supports parallelization of efforts and separation of responsibilities
d. Process: go through phases one after the other (as in the waterfall model)
e. Product: keep product requirements separate, for example functionality, performance, user interface and usability
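As a minimal illustration of this principle (the names and numbers below are invented for the sketch, not taken from any particular system), the following Python fragment keeps the computation concern and the presentation concern in separate functions, so that each can be changed, tested and understood on its own.

# Hypothetical sketch: two concerns kept apart.
# The calculation knows nothing about formatting, and the
# formatting knows nothing about how the value was computed.

def compute_interest(principal: float, rate: float, years: int) -> float:
    """Business-logic concern: pure computation, no input/output."""
    return principal * ((1 + rate) ** years - 1)

def render_report(amount: float) -> str:
    """Presentation concern: formatting only, no business rules."""
    return f"Interest earned: {amount:.2f}"

if __name__ == "__main__":
    # The two concerns meet only at the top level.
    print(render_report(compute_interest(1000.0, 0.05, 3)))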


1.6.3 Modularity
a. A complex system may be divided into simpler pieces called modules
b. A system that is composed of modules is called modular
c. Supports the application of separation of concerns: when dealing with a module we can ignore the details of other modules
d. Each module should be highly cohesive
   i. The module is understandable as a meaningful unit
   ii. The components of a module are closely related to one another
e. Modules should exhibit low coupling
   i. Modules have few interactions with others
   ii. They are understandable separately
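A minimal sketch of what high cohesion and low coupling look like in code (the stack example is invented here for illustration): everything inside the module relates to one idea, and other modules interact with it only through a narrow interface.

class Stack:
    """A cohesive module: every member relates to a single idea, a stack."""

    def __init__(self):
        self._items = []              # internal detail, not exposed to clients

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def size(self) -> int:
        return len(self._items)

# A client module is coupled to the Stack only through push/pop/size,
# not to how the items are stored internally.
s = Stack()
s.push(10)
s.push(20)
assert s.pop() == 20 and s.size() == 1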


1.6.4 Abstraction

Abstraction is the process of suppressing, or ignoring, inessential details while focusing on the important, or essential, details. We often speak of levels of abstraction. As we move to higher levels of abstraction, we shift our attention to the larger, and more important, aspects of an item, e.g., the very essence of the item, or the definitive characteristics of the item. As we move to lower levels of abstraction we begin to pay attention to the smaller, and less important, details, e.g., how the item is constructed.
a. Identify the important aspects of a phenomenon and ignore its details
b. A special case of separation of concerns
c. The type of abstraction to apply depends on the purpose

For example, consider an automobile. At a high level of abstraction, the automobile is a monolithic entity, designed to transport people and other objects from one location to another. At a lower level of abstraction we see that the automobile is composed of an engine, a transmission, an electrical system, and other items. At this level we also see how these items are interconnected. At a still lower level of abstraction, we find that the engine is made up of spark plugs, pistons, and other items.

1.6.5 Anticipation of change
a. The ability to support software evolution requires anticipating potential future changes
b. It is the basis for software evolution
c. Example: set up a configuration management environment for the project

1.6.6 Generality
a. While solving a problem, try to discover whether it is an instance of a more general problem whose solution can be reused in other cases
b. Carefully balance generality against performance and cost
c. Sometimes a general problem is easier to solve than a special case

1.6.7 Incrementality
a. The process proceeds in a stepwise fashion (increments)
b. Examples (process)
   i. Deliver subsets of a system early to get early feedback from expected users, then add new features incrementally



   ii. Deal first with functionality, then turn to performance
   iii. Deliver a first prototype and then incrementally add effort to turn the prototype into a product


1.6.8 Questions
1. What are the issues inherent in the software process?
2. Explain in detail the principles of software engineering.
3. What is modularity? Explain with an example.
4. Define the term abstraction.

1.7 SOFTWARE ENGINEERING APPROACHES


There are several approaches to the development of a software project. According to the type of project, the team that develops the software selects the most suitable approach. However, the two main approaches to software development are listed below; almost all projects follow one of these two approaches.
1. Object-Oriented Approach to Software Development
2. Structured Approach to Software Development

1.7.1 Object-Oriented Approach to Software Development

In the object-oriented approach, we make use of use cases to design the system. There are many diagrams that can be used in the design of the system, and many CASE tools are also available to better design the system using the object-oriented paradigm. The major motivations for object-oriented approaches in general are:
a. Object-oriented approaches encourage the use of modern software engineering technology.
b. Object-oriented approaches promote and facilitate software reusability.
c. Object-oriented approaches facilitate interoperability.
d. When done well, object-oriented approaches produce solutions which closely resemble the original problem.
e. When done well, object-oriented approaches result in software which is easily modified, extended, and maintained.
f. Traceability improves if an overall object-oriented approach is used.
g. There is a significant reduction in integration problems.


h. The conceptual integrity of both the process and the product improves.
i. The need for objectification and deobjectification is kept to a minimum.

Encouragement of modern software engineering


Modern software engineering encompasses a multitude of concepts. We will focus on five of them:
1. Information Hiding
2. Data Abstraction
3. Encapsulation
4. Concurrency
5. Polymorphism

Information Hiding

Information hiding stresses that certain (inessential or unnecessary) details of an item are made inaccessible. By providing only essential information, we accomplish two goals:
1. Interactions among items are kept as simple as possible, thus reducing the chances of incorrect, or unintended, interactions.
2. We decrease the chances of unintended system corruption (e.g., ripple effects) which may result from the introduction of changes to the hidden details.

Objects are black boxes. Specifically, the details of the underlying implementation of an object are hidden from the users of the object, and all interactions take place through a well-defined interface. This can be better understood from the example given below.

Consider a bank account object. Bank customers may know that they can open an account, make deposits and withdrawals, and inquire as to the present balance of the account. Further, they should also know that they might accomplish these activities via either a live teller or an automatic teller machine. However, bank customers are not likely to be privy to the details of how each of these operations is accomplished. (A small code sketch of this example follows shortly.)

Abstraction

Abstraction has been discussed earlier in this chapter. Software engineering deals with many different types of abstraction. Three of the most important are:
a. Functional abstraction
b. Data abstraction
c. Process abstraction
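Before looking at each form of abstraction in detail, here is the promised sketch of the bank account example. The method names (deposit, withdraw, balance) and the internal ledger are assumptions made for the illustration; the point is that the customer-visible interface says nothing about how the balance is actually kept.

class BankAccount:
    """Public interface: deposit, withdraw, balance.
    The transaction ledger (the 'how') stays hidden inside."""

    def __init__(self, owner: str):
        self.owner = owner
        self._transactions = []          # hidden implementation detail

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._transactions.append(amount)

    def withdraw(self, amount: float) -> None:
        if amount <= 0 or amount > self.balance():
            raise ValueError("invalid withdrawal")
        self._transactions.append(-amount)

    def balance(self) -> float:
        return sum(self._transactions)

account = BankAccount("customer-1")
account.deposit(100.0)
account.withdraw(30.0)
assert account.balance() == 70.0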


Functional Abstraction

In functional abstraction, the function performed becomes a high-level concept. While we may know a great deal about the interface for the function, we know relatively little about how it is accomplished. For example, given a function which calculates the sine of an angle, we may know that the input is a floating-point number representing the angle in radians, and that the output will be a floating-point number between -1.0 and +1.0 inclusive. Still, we know very little about how the sine is actually calculated, i.e., the function is a high-level concept, an abstraction. Functional abstraction is considered good because it hides unnecessary implementation details from those who use the function. If done well, this makes the rest of the system less susceptible to changes in the details of the algorithm.

Data Abstraction

Data abstraction is built on top of functional abstraction. Specifically, in data abstraction, the details of the underlying implementations of both the functions and the data are hidden from the user. While many definitions of data abstraction stop at this point, there is more to the concept. Suppose, for example, we were to implement a list using data abstraction. We might encapsulate the underlying representation of the list and provide access via a series of operations, e.g., add, delete, length, and copy. This offers the benefit of making the rest of the system relatively insensitive to changes in the underlying implementation of the list.

Process Abstraction

Process abstraction deals with how an object handles (or does not handle) itself in a parallel processing environment. In sequential processing there is only one thread of control, i.e., one point of execution. In parallel processing there are at least two threads of control, i.e., two or more simultaneous points of execution. Imagine a windowing application. Suppose two or more concurrent processes attempted to simultaneously write to a specific window. If the window itself had a mechanism for correctly handling this situation, and the underlying details of this mechanism were hidden, then we could say that the window object exhibits process abstraction. Specifically, how the window deals with concurrent processes is a high-level concept, an abstraction.
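A minimal sketch of the window example, under the assumption that a simple mutual-exclusion lock is the hidden mechanism (any other serialisation scheme would do equally well): callers in different threads just call write(), and how concurrent writes are kept orderly stays inside the object.

import threading

class Window:
    """The window protects itself in a concurrent environment;
    the lock is a hidden detail of its implementation."""

    def __init__(self):
        self._lock = threading.Lock()
        self._lines = []

    def write(self, text: str) -> None:
        with self._lock:              # only one writer at a time
            self._lines.append(text)

window = Window()
writers = [threading.Thread(target=window.write, args=(f"message {i}",))
           for i in range(5)]
for w in writers:
    w.start()
for w in writers:
    w.join()
assert len(window._lines) == 5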


One of the differences between an object-oriented system and more conventional systems is in how each handles concurrency. Many conventional systems deal with concurrency by having a master routine maintain order (e.g., schedule processing, prevent deadlock, and prevent starvation). In an object-oriented concurrent system, much of the responsibility for maintaining order is shifted to the objects themselves, i.e., each object is responsible for its own protection in a concurrent environment.

Encapsulation

Encapsulation is the process of logically and/or physically packaging items so that they may be treated as a unit. Functional decomposition approaches localize information around functions, data-driven approaches localize information around data, and object-oriented approaches localize information around objects. Since encapsulation in a given system usually reflects the localization process used, the encapsulated units that result from a functional decomposition approach will be functions, whereas the encapsulated units resulting from an object-oriented approach will be objects.

Object-oriented programming introduced the concept of classes, and thereby provided programmers with a much more powerful encapsulation mechanism than subroutines. In object-oriented approaches, a class may be viewed as a template, a pattern, or even a blueprint for the creation of objects (instances). Classes allow programmers to encapsulate many subroutines, and other items, into still larger program units.

Consider a list class. Realizing that a list is more than just a series of storage locations, a software engineer might design a list class so that it encapsulated:
1. The items actually contained in the list
2. Other useful state information, e.g., the current number of items stored in the list
3. The operations for manipulating the list, e.g., add, delete, length, and copy
4. Any list-related exceptions, e.g., overflow and underflow (exceptions are mechanisms whereby an object can actively communicate exceptional conditions to its environment)
5. Any useful exportable (from the class) constants, e.g., the empty list and the maximum allowable number of items the list can contain.

In summary, we could say that objects allow us to deal with entities which are significantly larger than subroutines, and that this, in turn, allows us to better manage the complexity of large systems.
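The list class just described might be sketched as below. The class name, the bound of 100 items and the exception names are assumptions made for this illustration; the point is that the items, the state, the operations, the exceptions and the constants all live inside one encapsulated unit.

class ListOverflow(Exception):
    """Raised when the list is already full."""

class ListUnderflow(Exception):
    """Raised when deleting from an empty list."""

class BoundedList:
    MAX_ITEMS = 100                      # exportable constant

    def __init__(self):
        self._items = []                 # the items actually contained

    def add(self, item):
        if len(self._items) >= self.MAX_ITEMS:
            raise ListOverflow("list is full")
        self._items.append(item)

    def delete(self):
        if not self._items:
            raise ListUnderflow("list is empty")
        return self._items.pop()

    def length(self) -> int:             # useful state information
        return len(self._items)

    def copy(self) -> "BoundedList":
        duplicate = BoundedList()
        duplicate._items = list(self._items)
        return duplicate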



Concurrency

Many modern software systems involve at least some level of concurrency. Examples of concurrent systems include:
1. An interactive MIS (management information system) which allows multiple, simultaneous users,
2. An HVAC (heating, ventilation, and air conditioning) system which controls the environment in a building, in part, by simultaneously monitoring a series of thermostats which have been placed throughout the building, and
3. An air traffic control (ATC) system, which must deal with hundreds (possibly thousands) of airplanes simultaneously.

Polymorphism

Polymorphism is a measure of the degree of difference in how each item in a specified collection of items must be treated at a given level of abstraction. Polymorphism is increased when any unnecessary differences, at any level of abstraction, within a collection of items are eliminated. Although polymorphism is often discussed in terms of programming languages, it is a concept with which we are all familiar in everyday life.

Suppose we are constructing a software system which involves a graphical user interface (GUI), and suppose we are using an object-oriented approach. Three of the objects we have identified are a file, an icon, and a window. We need an operation which will cause each of these items to come into existence. We could provide the same operation with a different name for each item (e.g., open for the file, build for the icon, and create for the window). Hopefully, we will recognize that we are seeking the same general behavior for several different objects and will assign the same name (e.g., create) to each operation. (A small code sketch of this idea appears at the end of this subsection.)

It should not go unnoticed that a polymorphic approach, when done well, can significantly reduce the overall complexity of a system. This is especially important in a distributed application environment; hence, there appears to be a very direct connection between polymorphism and enhanced interoperability.

The advantages of the object-oriented approach are discussed in the following subsections.
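Here is the promised sketch of the GUI example. The class and method bodies are invented placeholders; what matters is that File, Icon and Window all answer to the same create() operation, so client code can treat them uniformly.

class File:
    def create(self):
        return "file opened on disk"

class Icon:
    def create(self):
        return "icon built on the desktop"

class Window:
    def create(self):
        return "window created on screen"

# The caller does not care which concrete kind of object it holds;
# one operation name covers the same general behaviour for all three.
for element in (File(), Icon(), Window()):
    print(element.create())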


The promotion and facilitation of software reusability


Software reusability is not a topic that is well understood by most people. For example, many software reusability discussions incorrectly limit the definition of software


to source code and object code. Even within the object-oriented programming community, people seem to focus on the inheritance mechanisms of various programming languages as a mechanism for reuse. Although reuse via inheritance is not to be dismissed, there are more powerful reuse mechanisms. Research into software reusability, and actual practice, have established a definite connection between overall software engineering approaches and software reusability. For example, analysis and design techniques have a very large impact on the reusability of software, a greater impact, in fact, than programming (coding) techniques. A literature search for software engineering approaches which appear to have a high correlation with software reusability shows a definite relationship between object-oriented approaches and software reuse.

The promotion and facilitation of interoperability

Consider a computer network with different computer hardware and software at each node. Next, instead of viewing each node as a monolithic entity, consider each node to be a collection of (hardware and software) resources. Interoperability is the degree to which an application running on one node in the network can make use of a (hardware or software) resource at a different node on the same network.

For example, consider a network with a Cray supercomputer at one node, rapidly processing a simulation application and needing to display the results on a high-resolution color monitor. If the simulation software on the Cray makes use of a color monitor on a Macintosh IIfx at a different node on the same network, that is an example of interoperability. In effect, as the degree of interoperability goes up, the concept of the network vanishes: a user on any one node has increasingly transparent use of any resource on the network.

Object-oriented solutions closely resemble the original problem

One of the axioms of systems engineering is that it is a good idea to make the solution closely resemble the original problem. One of the ideas behind this is that, if we understand the original problem, we will also be better able to understand our solution. For example, if we are having difficulties with our solution, it will be easy to check it against the original problem. There is a great deal of evidence to suggest that it is easier for many people to view the real world in terms of objects, as opposed to functions, e.g.:



1. Many forms of knowledge representation, e.g., semantic networks, discuss knowledge in terms of objects,
2. The relative user-friendliness of graphical user interfaces, and
3. Common wisdom, e.g., a picture is worth a thousand words.


Unfortunately, many who have been in the software profession for more than a few years tend to view the world almost exclusively in terms of functions. These people often suffer from the inability to identify objects, or to view the world in terms of interacting objects. We should point out that function is not bad in object-oriented software engineering. For example, it is quite acceptable to speak of the functionality provided by an object, or the functionality resulting from interactions among objects.

Object-oriented approaches result in software which is easily modified, extended and maintained

When conventional engineers (e.g., electronics engineers, mechanical engineers, and automotive engineers) design systems, they follow some basic guidelines. They may start with the intention of designing an object (e.g., an embedded computer system, a bridge, or an automobile), or with the intention of accomplishing some function (e.g., guiding a missile, crossing a river, or transporting people from one location to another). Even if they begin with the idea of accomplishing a function, they quickly begin to quantify their intentions by specifying objects (potentially at a high level of abstraction) which will enable them to provide the desired functionality. In short order, they find themselves doing object-oriented decomposition, i.e., breaking the potential product into objects (e.g., power supplies, RAM, engines, transmissions, girders, and cables).

They assign functionality to each of the parts (object-oriented components). For example, the function of the engine is to provide a power source for the movement of the automobile. Looking ahead (and around) to reusing the parts, the engineers may modify and extend the functionality of one or more of the parts. Realizing that each of the parts (objects) in their final product must interface with one or more other parts, they take care to create well-defined interfaces. Again focusing on reusability, the interfaces may be modified or extended to deal with a wider range of applications. Once the functionality and well-defined interfaces are set in place, each of the parts may be either purchased off-the-shelf or designed independently. In the case of complex, independently designed parts, the engineers may repeat the above process.


Without explicitly mentioning it, we have described the information hiding which is a normal part of conventional engineering. By describing the functionality (of each part) as an abstraction, and by providing well-defined interfaces, we foster information hiding. However, there is often a more powerful concept at work here. Each component not only encapsulates functionality, but also knowledge of state (even if that state is constant). This state, or the effects of this state, are accessible via the interface of the component. For example, a RAM chip stores and returns bits of information (through its pins) on command.

By carefully examining the functionality of each part, and by ensuring well-thought-out and well-defined interfaces, the engineers greatly enhance the reusability of each part. However, they also make it easier to modify and extend their original designs. New components can be swapped in for old components provided they adhere to the previously defined interfaces and the functionality of the new component is harmonious with the rest of the system. Electronics engineering, for example, often uses phrases such as plug compatibility and pin compatibility to describe this phenomenon.

Conventional engineers also employ the concept of specialization. Specialization is the process of taking a concept and modifying (enhancing) it so that it applies to a more specific set of circumstances, i.e., it is less general. Mechanical engineers may take the concept of a bolt and fashion hundreds of different categories of bolts by varying such things as the alloys used, the diameter, the length, and the type of head. Electronics engineers create many specialized random access memory (RAM) chips by varying such things as the implementation technology (e.g., CMOS), the access time, the organization of the memory, and the packaging.

By maintaining a high degree of consistency in both the interfaces and functionality of the components, engineers can allow for specialization while still maintaining a high degree of modifiability. By identifying both the original concepts and allowable (and worthwhile) forms of specialization, engineers can construct useful families of components. Further, systems can be designed to readily accommodate different family members.

In a very real sense, object-oriented software engineering shares a great deal in common with more conventional forms of engineering. The concepts of encapsulation, well-defined functionality and interfaces, information hiding, and specialization are key




to the modification and extension of most non-software systems. It should come as no surprise that, if used well, they can allow for software systems which are easily modified and extended.

The impact of object-orientation on the software life-cycle

To help us get some perspective on object-oriented software engineering, it is useful to note the approximate times when various object-oriented technologies were introduced, e.g.:
1. Object-oriented programming: 1966
2. Object-oriented design: 1980
3. Object-oriented computer hardware: 1980
4. Object-oriented databases: 1985
5. Object-oriented requirements analysis: 1986
6. Object-oriented domain analysis: 1988


Originally, people thought of object-orientation only in terms of programming languages; discussions were chiefly limited to object-oriented programming (OOP). However, during the 1980s, people found that:
1. Object-oriented programming alone was insufficient for large and/or critical problems, and
2. Object-oriented thinking was largely incompatible with traditional (e.g., functional decomposition) approaches, due chiefly to the differences in localization.

During the 1970s and early 1980s, many people believed that the various life-cycle phases (e.g., analysis, design, and coding) were largely independent. Therefore, one could supposedly use very different approaches for each phase, with only minor consequences. For example, one could consider using structured analysis with object-oriented design. This line of thinking, however, was found to be largely inaccurate. Today we know that, if we are considering an object-oriented approach to software engineering, it is better to have an overall object-oriented approach. There are several reasons for this.

Traceability

Traceability is the degree of ease with which a concept, idea, or other item may be followed from one point in a process to either a succeeding, or preceding, point in the same process. For example, one may wish to trace a requirement through the software engineering process to identify the delivered source code which specifically addresses that requirement.
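As an illustration of what such tracing involves, the sketch below builds a toy traceability matrix. The requirement identifiers and object names are invented; the point is that one requirement may be satisfied by several objects, one object may help satisfy several requirements, and we want to follow the links in both directions.

# Hypothetical forward trace: requirement -> objects that satisfy it.
requirement_to_objects = {
    "REQ-01 record booking": ["Booking", "Customer", "PaymentGateway"],
    "REQ-02 send confirmation": ["Booking", "Notifier"],
}

# Derive the reverse trace: object -> requirements it addresses.
object_to_requirements = {}
for requirement, objects in requirement_to_objects.items():
    for obj in objects:
        object_to_requirements.setdefault(obj, []).append(requirement)

print(object_to_requirements["Booking"])
# ['REQ-01 record booking', 'REQ-02 send confirmation']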


Suppose, as is often the case, that you are given a set of functional requirements, and you desire (or are told) that the delivered source code be object-oriented. During acceptance testing, your customer will either accept or reject your product based on how closely you have matched the original requirements. In an attempt to establish conformance with requirements (and sometimes to ensure that no extraneous code has been produced), your customer wishes to trace each specific requirement to the specific delivered source code which meets that requirement, and vice versa. Unfortunately, the information contained in the requirements is localized around functions, and the information in the delivered source code is localized around objects. One functional requirement, for example, may be satisfied by many different objects, or a single object may satisfy several different requirements. Experience has shown that traceability, in situations such as this, is a very difficult process. There are two common solutions to this problem:
1. Transform the original set of functional requirements into object-oriented requirements, or
2. Request that the original requirements be furnished in object-oriented form.

Either of these solutions will result in requirements information which is localized around objects. This will greatly facilitate the tracing of requirements to object-oriented source code, and vice versa.

Reduction of integration problems

When Grady Booch first presented his first-generation version of object-oriented design in the early 1980s, he emphasized that it was a partial life-cycle methodology, i.e., it focused primarily on software design issues, secondarily on software coding issues, and largely ignored the rest of the life-cycle, e.g., it did not address early life-cycle phases, such as analysis. One strategy which was commonly attempted was to break a large problem into a number of large functional (i.e., localized on functionality) pieces, and then to apply object-oriented design to each of the pieces. The intention was to integrate these pieces at a later point in the life cycle, i.e., shortly before delivery. This process was not very successful. In fact, it resulted in large problems which became visible very late in the development part of the software life cycle, i.e., during test and integration.

The problem was again based on differing localization criteria. Suppose, for example, a large problem is functionally decomposed into four large functional partitions. Each partition is assigned to a different team, and each team attempts to apply an



object-oriented approach to the design of their functional piece. All appears to be going well until it is time to integrate the functional pieces. When the pieces attempt to communicate, the teams find many cases where each group has implemented the same object in a different manner.

What has happened? Let us assume, for example, that the first, third, and fourth groups all have identified a common object. Let's call this object X. Further, let us assume that each team identifies and implements object X solely on the information contained in their respective functional partition. The first group identifies and implements object X as having attributes A, B, and D. The third group identifies and implements object X as having attributes C, D, and E. The fourth group identifies and implements object X as having only attribute A. Each group, therefore, has an incomplete picture of object X. This problem may be made worse by the fact that each team may have allowed the incomplete definitions of one or more objects to influence their designs of both their functional partition and the objects contained therein.

This problem could have been greatly reduced by surveying the original unpartitioned set of functional requirements and identifying both candidate objects and their characteristics. Further, the original system should have been re-partitioned along object-oriented lines, i.e., the software engineers should be using object-oriented decomposition. This knowledge should be carried forward to the design process as well.

Improvement in conceptual integrity

Conceptual integrity means being true to a concept, or, more simply, being consistent. Consistency helps to reduce complexity and, hence, increases reliability. If a significant change in the localization strategy is made during the life cycle of a software product, the concept of conceptual integrity is violated, and the potential for the introduction of errors is very high. During the development part of the life cycle, we should strive for an overall object-oriented approach. In this type of approach, each methodology, tool, documentation technique, management practice, and software engineering activity is either object-oriented or supportive of an object-oriented approach. By using an overall object-oriented approach (as opposed to a mixed localization approach), we should be able to eliminate a significant source of errors.
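Returning to the object X example above, the tiny sketch below uses attribute sets to make the mismatch concrete. The attribute letters are the ones used in the text; everything else is invented for the illustration.

# Each team's partial, conflicting view of the same object X.
team_one_view = {"A", "B", "D"}
team_three_view = {"C", "D", "E"}
team_four_view = {"A"}

# No two partial views agree with each other...
assert team_one_view != team_three_view and team_three_view != team_four_view

# ...whereas surveying the unpartitioned requirements first would have
# yielded the single, complete definition of object X.
complete_object_x = team_one_view | team_three_view | team_four_view
print(sorted(complete_object_x))   # ['A', 'B', 'C', 'D', 'E']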




Lessening the need for objectification and de-objectification

Objects are not data, and data are not objects. Objects are not merely data and functions encapsulated in the same place. However, each object-oriented application must interface with (at least some) non-object-oriented systems, i.e., systems that do not recognize objects. Two of the most common examples are:
1. When objects must be persistent, e.g., when objects must persist beyond the invocation of the current application. Although an object-oriented database management system (OODBMS) is called for, a satisfactory one may not be available. Conventional relational DBMSs, while they may recognize some state information, do not recognize objects. Therefore, if we desire to store an object in a non-OODBMS, we must transform the object into something which can be recognized by the non-OODBMS. When we wish to retrieve a stored object, we reverse the process.
2. In a distributed application, where objects must be transmitted from one node in the network to another node in the same network. Networking hardware and software is usually not object-oriented. Hence, the transmission process requires that we have some way of reducing an object to some primitive form (recognizable by the network), transmitting the primitive form, and reconstituting the object at the destination node.

Deobjectification is the process of reducing an object to a form which can be dealt with by a non-object-oriented system. Objectification is the process of (re)constituting an object from some more primitive form of information. Each of these processes, while necessary, has a significant potential for the introduction of errors. Our goal should be to minimize the need for these processes. An overall object-oriented approach can help to keep the need for objectification and deobjectification to a minimum. (A small code sketch of deobjectification and objectification is given after the questions below.)

1.7.2 The Structured Approach to Software Development

In the structured methodology approach, we make use of the functional design of the system. The concepts of data abstraction come into the picture, and the complexity of the design can be measured through the coupling between modules and the cohesion within a module. Petri Nets come under the structured design methodology.

1.7.3 Questions
1. What are the various approaches in Software Engineering?
2. Explain in detail the object-oriented approach to software development.



3. List the pros and cons of various SE approaches.
4. Explain the software engineering concepts considering the railway reservation system as an exercise.
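Here is the deobjectification/objectification sketch referred to above. JSON is used as the primitive form purely as an assumption for the illustration; any flat representation a database or a network can carry would serve equally well.

import json

class Order:
    def __init__(self, order_id: int, items: list):
        self.order_id = order_id
        self.items = items

def deobjectify(order: Order) -> str:
    # Reduce the object to a primitive form a non-OO system can handle.
    return json.dumps({"order_id": order.order_id, "items": order.items})

def objectify(payload: str) -> Order:
    # Reconstitute the object from the primitive form.
    data = json.loads(payload)
    return Order(data["order_id"], data["items"])

wire_form = deobjectify(Order(42, ["keyboard", "mouse"]))
restored = objectify(wire_form)
assert restored.order_id == 42 and restored.items == ["keyboard", "mouse"]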


1.8 SOFTWARE PROCESS


The software process is becoming an important concept for companies that produce software. As a consequence, it is becoming more and more important for permanent employees, long-term practitioners, and short-term consultants in the software industry. A process may be defined as a set of partially ordered steps intended to reach a goal; in software engineering the goal is to build a software product or enhance an existing one. This simple definition shows us nothing new: after all, all software has been developed using some method, and every process produces some product or artifact.

1.8.1 The importance of process

In the past, such processes, no matter how professionally executed, have been highly dependent on the individual developer. This can lead to three key problems.

First, such software is very difficult to maintain. Imagine our software developer has fallen under a bus, and somebody else must take over the partially completed work. Quite possibly there is extensive documentation explaining the state of the work in progress. Maybe there is even a plan, with individual tasks mapped out and those that have been completed neatly marked, or maybe the plan only exists in the developer's head. In any case, a replacement employee will probably end up starting from scratch because, however good the previous work, the replacement has no clue of where to start. The process may be superb, but it is an ad-hoc process, not a defined process. (Ad-hoc and defined processes are discussed in the following section under CMM.)

Second, it is very difficult to accurately gauge the quality of the finished product according to any independent assessment. If we have two developers each working according to their own processes, defining their own tests along the way, we have no objective method of comparing their work either with each other or, more important, with a customer's quality criteria.

Third, there is a huge overhead involved as each individual works out their own way of doing things in isolation. To avoid this we must find some way of learning from the experiences of others who have already trodden the same road.


So it is important for each organization to define the process for a project. At its most basic, this means simply to write it down. Writing it down specifies the various items that must be produced and the order in which they should be produced: from plans to requirements to documentation to the finished source code. It says where they should be kept, how they should be checked, and what to do with them when the project is over. It may not be much of a process, but it is a defined one.

1.8.2 The purpose of process

What do we want our process to achieve? We can identify certain key goals in this respect.

Effectiveness

Not to be confused with efficiency. An effective process must help us produce the right product. It doesn't matter how elegant and well-written the software, nor how quickly we have produced it: if it isn't what the customer wanted, or required, it's no good. The process should therefore help us determine what the customer needs, produce what the customer needs, and, crucially, verify that what we have produced is what the customer needs.

Maintainability

However good the programmer, things will still go wrong with the software. Requirements often change between versions. In any case, we may want to reuse elements of the software in other products. One of the goals of a good process is to expose the designers' and programmers' thought processes in such a way that their intention is clear. Then we can quickly and easily find and remedy faults or work out where to make changes.

Predictability

Any new product development needs to be planned, and those plans are used as the basis for allocating resources: both time and people. It is important to predict accurately how long it will take to develop the product. That means estimating accurately how long it will take to produce each part of it, including the software. A good process will help us do this. The process helps lay out the steps of development. Furthermore, consistency of process allows us to learn from the designs of other projects.



Repeatability

If a process is discovered to work, it should be replicated in future projects. Ad-hoc processes are rarely replicable unless the same team is working on the new project. Even with the same team, it is difficult to keep things exactly the same. A closely related issue is that of process re-use: it is a huge waste and overhead for each project to produce a process from scratch, and it is much faster and easier to adapt an existing process. (The ad-hoc process is discussed in a later part of the material.)

Improvement

No one would expect their process to reach perfection and need no further improvement. Even if we were as good as we could be now, both development environments and requested products are changing so quickly that our processes will always be running to catch up. A goal of our defined process must therefore be to identify and prototype possibilities for improvement in the process itself.

Tracking

A defined process should allow the management, developers, and customer to follow the status of a project. Tracking is the flip side of predictability: it keeps track of how good our predictions are, and hence how to improve them. These seven process goals are very close relatives of the McCall quality factors, which categorize and describe the attributes that determine the quality of the software produced.

Quality

Quality in this case may be defined as the product's fitness for its purpose. One goal of a defined process is to enable software engineers to ensure a high-quality product. The process should provide a clear link between a customer's desires and a developer's product. Quality systems, however, are often far removed from the goals set out for a process. All too often they appear to be nothing more than an endless list of documents to be produced in the knowledge that they will never be read, written long after they might have had any use, in order to satisfy the auditor, who in turn is not interested in the content of the document but only its existence. This gives rise to the quality dilemma, which states that it is possible for a quality system to adhere completely to any given quality standard and yet for that quality system to make it impossible to achieve a quality process.


So is the entire notion of a quality system flawed? Not at all. It is possible, and some organizations do achieve, a quality process that really helps them to produce quality software. Much excellent work is going into the development of new quality models that can act as road maps to developing a better quality system. The Software Engineering Institute's Capability Maturity Model (CMM) is principal among them. (It is discussed later in this unit.)

1.8.4 Further discussion on Quality

The key goal of these models is to establish and maintain a link between the quality of the process and the quality of the product - our software - that comes out of that process. But in order to establish such a link we must know what we mean by quality. Consider the following definitions, beginning with the British Standards Institute's (BSI) definition:
1. The totality of features and characteristics of a product or service that bear on its ability to satisfy a given need. (British Standards Institute)
2. We must define quality as conformance to requirements. Requirements must be clearly stated so that they cannot be misunderstood. Measurements are then taken continually to determine conformance to those requirements. The non-conformance detected is the absence of quality.
3. Degree of excellence, relative nature or kind of character; faculty, skill, accomplishment, characteristic trait, mental or moral attribute.

Intuitively, a purely conformance-based view is incomplete: few of us, especially given the nature of the application, would agree that a system was flawless merely because it conformed to a flawed requirements statement. There has to be a subjective element to quality, even if it is reasonable to maximize the objective element. More concretely, in such a case we must identify that there is a quality problem with the requirements statement itself. This requires that our quality model be able to reflect the existence of such problems, for example by taking measures of perceived quality, such as the use of questionnaires to measure customer satisfaction.

A number of sources have looked at different ways of making sense of what we should mean by quality. Most of these take a multi-dimensional view, with conformance at one end and transcendental or aesthetic quality at the other. For example, Garvin lists eight dimensions of quality:

1. Performance quality
Expresses whether the product's primary features conform to specification. In software terms we would often regard this as the product fulfilling its functional specification.


2. Feature quality: Does it provide additional features over and above its functional specification?

3. Reliability: A measure of how often (in terms of number of uses, or in terms of time) the product will fail. This is often expressed as the mean time between failures (MTBF). (A small worked sketch of this measure and the next follows this list.)

4. Conformance: A measure of the extent to which the originally delivered product lives up to its specification. This could be measured, for example, as a defect rate (possibly the number of faulty units per 1000 shipped units, or, more likely in the case of software, the number of faults per 1000 lines of code in the delivered product) or as a service call-out rate.

5. Durability: How long will an average product last before failing irreparably? Again, in software terms this has a slightly different meaning, in that the mechanism by which software wears out is rather different from that of, for example, a car or a light bulb. Software wears out, in large part, because it becomes too expensive and risky to change further. This happens when nobody fully understands the impact of a change on the overall code.

6. Serviceability: A measure of the quality and ease of repair. It is astonishing how often the component in which everybody has the most confidence is the first to fail - a principle summed up by the author Douglas Adams: "The difference between something that can go wrong and something that can't possibly go wrong is that when something that can't possibly go wrong goes wrong it usually turns out to be impossible to get at or repair."

7. Aesthetics: A highly subjective measure. How does it look? How does it feel to use? What are your subconscious opinions of it? This is also a measure with an interesting variation over time. Consider your reaction when you see a ten-year-old car: it looks square, box-like, and unattractive. Yet, ten years ago, had you looked at the same car, it would have looked smart, aerodynamic and an example of great design. We may like to think that we don't change, but clearly we do. Of course, give that car another twenty years and you will look at it and say "oh, that's a classic design!". One wonders if we will say the same about our software.

8. Perception: Another subjective measure, and one that it could be argued really shouldn't affect the product's quality at all. It refers, of course, to the perceived quality of the provider, but in terms of gaining acceptance of the product it is key.
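The reliability and conformance measures in items 3 and 4 above are simple ratios, and a small worked example may make them concrete. The following minimal Python sketch is illustrative only: the failure log, fault count and code size are invented numbers, not data from any real project.

# Hedged illustration: all numbers below are invented for the example.
uptimes_hours = [120.0, 340.0, 95.0, 410.0]   # operating time observed between successive failures

def mean_time_between_failures(uptimes):
    # MTBF = total operating time between failures / number of failures observed
    return sum(uptimes) / len(uptimes)

def defects_per_kloc(faults_found, lines_of_code):
    # Conformance measure: faults per 1000 delivered lines of code
    return faults_found / (lines_of_code / 1000.0)

print("MTBF (hours):", mean_time_between_failures(uptimes_hours))               # 241.25
print("Defects/KLOC:", defects_per_kloc(faults_found=27, lines_of_code=45000))  # 0.6

Tracked release after release, movements in these two numbers give exactly the kind of quantitative feedback that the tracking and improvement goals described earlier call for.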


More specifically to software, McCall's software quality factors define eleven dimensions of quality under three categories, together known as the Quality Triangle:


Figure 1.3: McCall's Quality Triangle

1. Product Operations - correctness, reliability, efficiency, usability, and integrity
2. Product Revision - maintainability, flexibility, and testability
3. Product Transition - portability, reusability, and interoperability

The primary areas that McCall's factors (shown in Figure 1.3) do not address are the subjective ones of perception and aesthetics - possibly he felt they were impossible to measure, or possibly the idea, in 1977, that software could have an aesthetic quality would have been considered outlandish. But nowadays most professionals would agree that such judgments are possible, and indeed are made every day. All of us will recognize that products do not score equally on all of these dimensions. It is arguable that there is no reason why they should, as they appeal to different sectors of the market with different needs. To take an example with which we will all be familiar, many consumer software companies concentrate on features (performance quality and feature quality in the above descriptions) to the detriment of reliability, conformance, and serviceability. The danger for such companies is that this damages their reputation over the longer term. The subjective measure of aesthetic quality suffers, and their customers are very likely to desert them as soon as an acceptable alternative comes on the market.

1.8.5 Process and product quality

So which is the right definition of quality? Traditional quality systems, based on ISO 9000, clearly focus on conformance to a defined process. Why is this? You may argue that this is a flawed measure of quality, bearing little relationship to the quality of the end product. There is no guarantee that process quality (or process conformance) will produce a product of the required quality.

Such process conformance was never intended to give such a guarantee anyway. ISO 9000 auditors don't know how good your product is; that isn't their area of expertise. They know about process, and can measure your conformance to your defined process and measure your process itself against the standards, but it is the people in your own industry who must judge your product. The guarantee it does provide is the inverse of this: process conformance is a necessary (but not sufficient) prerequisite to the consistent production of a high-quality product.

The challenge for the developers of the software meta-processes - those guides that say what the process should contain - is to strengthen the link between process conformance and product quality. A key factor in this is psychological. The aim of the process should be to facilitate the engineer doing the job well rather than to prevent them from doing it badly. This implies that the process must be easy to use correctly, and certainly easier to use correctly than badly or not at all. It implies that the engineers will want to use the process: in the jargon of the trade, that they will buy in to the process. It implies that there must be some feedback from the users of the process as to how to improve it, i.e. Continuous Process Improvement, or CPI. This in turn implies that the organization provides the structures that encourage the user to provide such feedback, for without such structures the grumbles, complaints and great ideas discussed round the coffee machine will be quickly forgotten - at least until the next time someone's work is affected. Perhaps most of all, it implies that the process should not be seen as a bureaucratic overhead of documents and figures that can be left until after the real work is finished, but as an integral part of the real work itself.

In the past, software quality has embraced only a limited number of the dimensions that truly constitute software quality. Focusing only on the process is limiting; it is only by including all the facets of software quality that a better evaluation of the quality of software can be obtained. The keys to better software are not to be found simply in process quality, but rather in a closer link between process quality and product quality, and in the active commitment of the people involved to that goal. In order to establish, maintain, and strengthen that link we must measure our product - our software - against all the relevant factors: those that relate to the specification of the product, those related to the development and maintenance of the product, and those related to our own and our colleagues' subjective views of the product, as well as those that relate to process conformance.

Q1.8.6 Questions
1. What is a software process?
2. What are the goals of a software process?
3. What is quality in software engineering?
4. What are the various dimensions of quality? Explain McCall's Quality Triangle.
5. Explain in detail about the software process.

1.9 SOFTWARE DEVELOPMENT PROCESS


A software development process is a structure imposed on the development of a software product. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process.

1.9.1 Processes

A growing body of software development organizations implement process methodologies. Many of them are in the defense industry, which in the U.S. requires a rating based on process models to obtain contracts. ISO 12207 is a standard for describing the method of selecting, implementing and monitoring a life cycle for a project. The Capability Maturity Model (CMM) is one of the leading models; independent assessments grade organizations on how well they follow their defined processes, not on the quality of those processes or of the software produced. CMM is gradually being replaced by CMMI. ISO 9000 describes standards for formally organizing processes with documentation. (CMM is discussed in detail in the later part of this unit.) ISO 15504, also known as Software Process Improvement and Capability Determination (SPICE), is a framework for the assessment of software processes and is also gaining wide usage. This standard is aimed at setting out a clear model for process comparison. SPICE is used much like CMM and CMMI: it models processes to manage, control, guide and monitor software development. This model is then used to measure what a development organization or project team actually does during software development. This information is analyzed to identify weaknesses and drive improvement. It also identifies strengths that can be continued or integrated into common practice for that organization or team. Six Sigma is a methodology to manage process variations that uses data and statistical analysis to measure and improve a company's operational performance. It works by identifying and eliminating defects in manufacturing and service-related processes.

The maximum permissible defect rate is 3.4 defects per one million opportunities. However, Six Sigma is manufacturing-oriented, and its relevance to software development needs further study. (We will not go deeper into this topic here.)

1.9.2 Process activities/steps of the process life cycle

Software Elements Analysis: The most important task in creating a software product is extracting the requirements. Customers typically know what they want, but not what software should do, while skilled and experienced software engineers recognize incomplete, ambiguous or contradictory requirements. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect.

Specification: Specification is the task of precisely describing the software to be written, possibly in a rigorous way. In practice, most successful specifications are written to understand and fine-tune applications that were already well developed, although safety-critical software systems are often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable.

Software architecture: The architecture of a software system refers to an abstract representation of that system. Architecture is concerned with making sure the software system will meet the requirements of the product, as well as ensuring that future requirements can be addressed. The architecture step also addresses interfaces between the software system and other software products, as well as the underlying hardware or the host operating system.

Implementation (or coding): Reducing a design to code may be the most obvious part of the software engineering job, but it is not necessarily the largest portion.

Testing: Testing of parts of the software, especially where code written by two different engineers must work together, falls to the software engineer.


Documentation: An important (and often overlooked) task is documenting the internal design of the software for the purpose of future maintenance and enhancement. Documentation is most important for external interfaces.

Software Training and Support: A large percentage of software projects fail because the developers fail to realize that it doesn't matter how much time and planning a development team puts into creating software if nobody in the organization ends up using it. People are occasionally resistant to change and avoid venturing into an unfamiliar area, so as a part of the deployment phase it is very important to have training classes for the most enthusiastic software users (to build excitement and confidence), then to shift the training towards the neutral users intermixed with the avid supporters, and finally to incorporate the rest of the organization into adopting the new software. Users will have lots of questions and software problems, which leads to the next phase of the software.

Maintenance: Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. Not only may it be necessary to add code that does not fit the original design, but just determining how the software works at some point after it is completed may require significant effort by a software engineer. A large share of all software engineering work is maintenance, but this statistic can be misleading: only a small part of it is fixing bugs; most maintenance is extending systems to do new things, which in many ways can be considered new work. In comparison, a similarly large share of all civil engineering, architecture, and construction work is maintenance in much the same way.

1.9.3 Process models

A decades-long goal has been to find repeatable, predictable processes or methodologies that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of developing software. There are many traditional and recently developed process models; the important ones are discussed below.

1.9.3.1 Waterfall processes

The best-known and oldest process is the waterfall model, where developers (roughly) follow these steps in order:


1. State requirements
2. Analyze them
3. Design a solution approach
4. Develop code
5. Test (perhaps unit tests, then system tests)
6. Deploy
7. Maintain

The waterfall approach was the first process model to be introduced and widely followed in software engineering to ensure the success of a project. In the waterfall approach, the whole process of software development is divided into separate phases: requirements specification, software design, implementation, and testing and maintenance. These phases are cascaded, so that a phase is started only when the defined set of goals for the previous phase has been achieved and signed off; hence the name Waterfall Model. All the methods and processes undertaken in the waterfall model are clearly visible.

Figure 1.4: The Waterfall Model


Requirement Analysis & Definition: All possible requirements of the system to be developed are captured in this phase. Requirements are the set of functionalities and constraints that the end-user (who will be using the system) expects from the system. The requirements are gathered from the end-user by consultation; these requirements are analyzed for their validity, and the possibility of incorporating them in the system to be developed is also studied. Finally, a Requirement Specification document is created, which serves as a guideline for the next phase of the model.

System & Software Design: Before starting the actual coding, it is highly important to understand what we are going to create and what it should look like. The requirement specifications from the first phase are studied in this phase and the system design is prepared. System design helps in specifying hardware and system requirements and also helps in defining the overall system architecture. The system design specifications serve as input for the next phase of the model.

Implementation & Unit Testing: On receiving the system design documents, the work is divided into modules/units and actual coding is started. The system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality; this is referred to as unit testing. Unit testing mainly verifies whether the modules/units meet their specifications.

Integration & System Testing: As specified above, the system is first divided into units which are developed and tested for their functionality. These units are integrated into a complete system during the integration phase and tested to check whether all modules/units coordinate with each other and the system as a whole behaves as per the specifications. After successful testing, the software is delivered to the customer.

Operations & Maintenance: This phase of the waterfall model is a virtually never-ending (very long) phase. Generally, problems with the system developed (which were not found during the development life cycle) come up after its practical use starts, so the issues related to the system are solved after deployment of the system.


Not all the problems come to light immediately; they arise from time to time and need to be solved, and hence this process is referred to as maintenance.

Disadvantages of the Waterfall Model:
1) It is very important to gather all possible requirements during the requirement gathering and analysis phase in order to properly design the system, but in practice not all requirements are received at once; requirements keep getting added to the list even after the end of this phase, which affects the system development process and its success negatively.
2) The problems with one phase are never solved completely during that phase, and in fact many problems regarding a particular phase arise after the phase is signed off; this results in a badly structured system, as not all the problems related to a phase are solved during the same phase.
3) The project is not partitioned into phases in a flexible way.


4) As the customer's requirements keep getting added to the list, not all of them are fulfilled; this results in the development of an almost unusable system. These requirements are then met in a newer version of the system, which increases the cost of system development.

After each step is finished, the process proceeds to the next step, just as builders don't revise the foundation of a house after the framing has been erected. There is a misconception that the process has no provision for correcting errors in early steps (for example, in the requirements). In fact, this is where the domain of requirements management comes in, which includes change control. This approach is used in high-risk projects, particularly large defense contracts. The problems in waterfall do not arise from immature engineering practices, particularly in requirements analysis and requirements management. Studies of the failure rate of a certain specification that enforced waterfall have shown that the more closely a project follows its process, specifically in up-front requirements gathering, the more likely the project is to release features that are not used in their current form. More often, too, the supposed stages are part of a joint review between customer and supplier; the supplier can, in fact, develop at risk and evolve the design, but must sell off the design at a key milestone called the Critical Design Review. This shifts engineering burdens from engineers to customers, who may have other skills.


off the design at a key milestone called Critical Design Review. This shifts engineering burdens from engineers to customers who may have other skills. 1.9.3.2 Iterative processes Iterative development prescribes the construction of initially small but everlarger portions of a software project to help all those involved uncovering important issues early before problems or faulty assumptions can lead to disaster. Iterative processes are preferred by commercial developers because it allows a potential of reaching the design goals of a customer who does not know how to define what they want.

Figure 1.5: Iterative Software Development Process

The basic idea behind iterative enhancement (shown in Figure 1.5) is to develop a software system incrementally, allowing the developer to take advantage of what is learned during the development of earlier, incremental, deliverable versions of the system. Learning comes from both the development and the use of the system, where possible. The key steps in the process are to start with a simple implementation of a subset of the software requirements and to iteratively enhance the evolving sequence of versions until the full system is implemented. At each iteration, design modifications are made and new functional capabilities are added. The procedure itself consists of the initialization step, the iteration step, and the project control list. The initialization step creates a base version of the system. The goal for this initial implementation is to create a product to which the user can react; it should offer a sampling of the key aspects of the problem and provide a solution that is simple enough to understand and implement easily. To guide the iteration process, a project control list is created that contains a record of all tasks that need to be performed.

It includes such items as new features to be implemented and areas of redesign of the existing solution. The control list is constantly being revised as a result of the analysis phase. Each iteration involves the redesign and implementation of a task from the project control list, and the analysis of the current version of the system. The goal for the design and implementation of any iteration is to be simple, straightforward, and modular, supporting redesign at that stage or as a task added to the project control list. The code can, in some cases, represent the major source of documentation of the system. The analysis of an iteration is based upon user feedback and the program analysis facilities available. It involves analysis of the structure, modularity, usability, reliability, efficiency, and achievement of goals. The project control list is modified in the light of the analysis results.
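The initialization step, the iteration step and the project control list described above can be summarised in a short, purely illustrative Python sketch. The task names, base version and analysis step below are invented placeholders, not part of any prescribed method.

# Hedged sketch of iterative enhancement; tasks and the analysis step are placeholders.

def initialize():
    # Initialization step: build a simple base version the user can react to.
    return {"version": 0, "features": ["core skeleton"]}

project_control_list = [            # record of all tasks still to be performed
    "add login screen",
    "redesign report module",
    "add export to CSV",
]

def iterate(system, task):
    # Iteration step: redesign/implement one task, then analyse the result.
    system["version"] += 1
    system["features"].append(task)
    feedback = "user feedback on '%s' for version %d" % (task, system["version"])
    return system, feedback

system = initialize()
while project_control_list:                # iterate until the control list is empty
    task = project_control_list.pop(0)     # take the next task from the control list
    system, feedback = iterate(system, task)
    # Analysis phase: in a real project the feedback would be used to add, remove
    # and re-prioritise entries on the control list; here we only print it.
    print(feedback)
print("Delivered:", system)

In a real project the analysis of each delivered version would revise the control list rather than simply drain it, which is exactly the constant revision described above.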


1.9.3.3 Spiral Model Process

Figure 1.6: The Spiral Model of Software Development



In order to overcome the drawbacks of the waterfall model, it was necessary to develop a new software development model that could help ensure the success of a software project. One such model, which incorporated the common methodologies of the waterfall model while eliminating almost every known risk factor, is referred to as the Spiral Model or Boehm's Model. There are four phases in the Spiral Model (shown in Figure 1.6): Planning, Risk Analysis, Engineering and Customer Evaluation. These four phases are followed iteratively, one after the other, in order to eliminate the problems that were faced in the waterfall model. Iterating the phases helps in understanding the problems associated with a phase and in dealing with those problems when the same phase is repeated next time, planning and developing the strategies to be followed while iterating through the phases. The phases in the Spiral Model are:

Plan: In this phase, the objectives, alternatives and constraints of the project are determined and documented. The objectives and other specifications are fixed in order to decide which strategies and approaches to follow during the project life cycle.

Risk Analysis: This phase is the most important part of the Spiral Model. In this phase all possible (and available) alternatives which can help in developing a cost-effective project are analyzed, and strategies for using them are decided. This phase has been added specifically in order to identify and resolve all possible risks in the project development. If the risks indicate any kind of uncertainty in the requirements, prototyping may be used to proceed with the available data and find a possible solution for dealing with potential changes in the requirements.

Engineering: In this phase, the actual development of the project is carried out. The output of this phase is passed through all the phases iteratively in order to obtain improvements in it.

Customer Evaluation: In this phase, the developed product is passed on to the customer in order to receive the customer's comments and suggestions, which can help in identifying and resolving potential problems/errors in the software. This phase is very similar to the testing phase.

The process progresses in a spiral, indicating the iterative path followed; progressively more complete software is built as we keep iterating through all four phases.


The first iteration in this model is considered the most important, as in the first iteration almost all possible risk factors, constraints and requirements are identified, and in the subsequent iterations all known strategies are used to bring up a complete software system. The radial dimension of the spiral indicates the evolution of the product towards a complete system. However, as every model has its pros and cons, the Spiral Model has its own as well. Because this model was developed to overcome the disadvantages of the waterfall model, following the Spiral Model requires highly skilled people in the areas of planning, risk analysis and mitigation, development, customer relations and so on. This, along with the fact that the process needs to be iterated more than once, demands more time and makes it a somewhat expensive approach.

1.9.3.4 Agile Software Development Process

Agile software development processes are built on the foundation of iterative development. To that foundation they add a lighter, more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism. The feedback is driven by regular tests and releases of the evolving software. Agile processes seem to be more efficient than older methodologies, using less programmer time to produce more functional, higher-quality software, but they have the drawback from a business perspective that they do not provide long-term planning capability. Agile software development is a conceptual framework for undertaking software engineering projects that embraces and promotes evolutionary change throughout the entire life cycle of the project. There are a number of agile software development methods; most attempt to minimize risk by developing software in short timeboxes, called iterations, which typically last one to four weeks. Each iteration is like a miniature software project of its own, and includes all of the tasks necessary to release the mini-increment of new functionality: planning, requirements analysis, design, coding, testing, and documentation. While an iteration may not add enough functionality to warrant releasing the product, an agile software project intends to be capable of releasing new software at the end of every iteration. In many cases, software is released at the end of each iteration. This is particularly true when the software is web-based and can be released easily. Regardless, at the end of each iteration, the team re-evaluates project priorities.

Agile methods emphasize real-time communication, preferably face-to-face, over written documents. Most agile teams are located in a bullpen and include all the people necessary to finish the software. At a minimum, this includes programmers and their customers (customers are the people who define the product; they may be product managers, business analysts, or actual customers). The bullpen may also include testers, interaction designers, technical writers, and managers. Agile methods also emphasize working software as the primary measure of progress. Combined with the preference for face-to-face communication, agile methods produce very little written documentation relative to other methods. This has resulted in criticism of agile methods as being undisciplined.

1.9.3.5 Extreme Programming

Extreme Programming (XP) is the best-known agile process. In XP, the phases are carried out in extremely small (or continuous) steps compared to the older, batch processes. The (intentionally incomplete) first pass through the steps might take a day or a week, rather than the months or years of each complete step in the waterfall model. First, one writes automated tests, to provide concrete goals for development. Next is coding (by a pair of programmers), which is complete when all the tests pass and the programmers can't think of any more tests that are needed. Design and architecture emerge out of refactoring, and come after coding. Design is done by the same people who do the coding. (Only the last feature - merging design and code - is common to all the other agile processes.) The incomplete but functional system is deployed or demonstrated for (some subset of) the users (at least one of whom is on the development team). At this point, the practitioners start again on writing tests for the next most important part of the system.

While iterative development approaches have their advantages, software architects are still faced with the challenge of creating a reliable foundation upon which to develop. Such a foundation often requires a fair amount of up-front analysis and prototyping to build a development model. The development model often relies upon specific design patterns and entity relationship diagrams (ERD). Without this up-front foundation, iterative development can create long-term challenges that are significant in terms of cost and quality.


Critics of iterative development approaches point out that these processes place what may be an unreasonable expectation upon the recipient of the software: that they must possess the skills and experience of a seasoned software developer. The approach can also be very expensive if iterations are not small enough to mitigate risk; the up-front design is as necessary for software development as it is for architecture. The problem with this criticism is that the whole point of iterative programming is that you don't have to build the whole house before you get feedback from the recipient. Indeed, in a sense conventional programming places more of this burden on the recipient, as the requirements and planning phases take place entirely before the development begins, and testing only occurs after development is officially over.

In fact, a relatively quiet turn-around has occurred in the agile community on the notion of evolving the software without the requirements locked down. In the old world this was called requirements creep and never made commercial sense. The agile community has similarly been burnt because, in the end, when the customer asks for something that breaks the architecture and won't pay for the re-work, the project terminates in an agile manner. These approaches have been developed along with web-based technologies. As such, they are actually more akin to maintenance life cycles, given that most of the architecture and capability of the solutions is embodied within the technology selected as the backbone of the application.

The agile community, as its alternative to cogitating and documenting a design, claims refactoring. No equivalent claim is made of re-engineering, which is an artifact of the wrong technology, and therefore the wrong architecture, being chosen. Both are relatively costly. Claims exist that 10%-15% must be added to an iteration to account for the refactoring of old code; however, there is no detail as to whether this value accounts for the re-testing or regression testing that must happen where old code is touched. Of course, throwing away the architecture is more costly again. In fact, a survey of the design-less approach paints a picture of the cost incurred where this class of approach is used (Software Development at Microsoft Observed). Note the heavy emphasis here on constant reverse engineering by programming staff rather than on managing a central design.

Test Driven Development (TDD) is a useful output of the agile camp but raises a conundrum. TDD requires that a unit test be written for a class before the class is written.

Therefore, the class firstly has to be discovered and secondly defined in sufficient detail to allow the write-test-once-and-code-until-the-class-passes model that TDD actually uses. This is actually counter to agile approaches, particularly (so-called) Agile Modeling, where developers are still encouraged to code early, with light design. Obviously, to get the claimed benefits of TDD, a full design down to classes and, say, responsibilities (captured using, for example, Design by Contract) is necessary. This counts towards iterative development with a design locked down, but not iterative design, as heavy refactoring and re-engineering negate the usefulness of TDD. (A minimal sketch of this test-first idea is given after the review questions below.)

Q1.9.4 Questions
1. Write a short note on the software development process.
2. What are process models?
3. List out the various process models that can be used to develop a system.
4. Explain the waterfall process model in detail.
5. Explain the suitability of the spiral model for software development.
6. What is the agile development process model? Explain the Extreme Programming model in detail.
7. Compare the various process models.
8. Explain in detail about the software development process.
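As promised above, here is a minimal, hedged illustration of the write-test-once-and-code-until-the-class-passes idea. The unit test is written first, against a class (Account, a name invented purely for this example) that does not yet exist; the class is then written only until the test passes.

import unittest

# Step 1 (written first): the test fixes the expected behaviour of the class.
class TestAccount(unittest.TestCase):
    def test_deposit_increases_balance(self):
        account = Account(balance=100)
        account.deposit(50)
        self.assertEqual(account.balance, 150)

# Step 2 (written second): just enough code to make the test pass.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

if __name__ == "__main__":
    unittest.main()

In XP terms, the test supplies the concrete goal for development, and coding stops once all such tests pass.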

1.10 PROJECT MANAGEMENT PROCESS


1.10.1 Core Project Management Process Overview

The core project management process is divided into five main stages, each of which is described in its own section below:

1. Starting the Project: from idea realization through to the development and evaluation of a business case, and the prioritization of the potential project investments against the government/departmental objectives and other organizational priorities and resource constraints.

2. Project Planning: this stage is critical to the successful resourcing and execution of the project activities. It includes the development of the overall project structure and of the activities and work plan/timeline that will form the basis of the project management process throughout the project lifecycle.

Where Treasury Board approval is required, project planning is usually conducted in two major iterations at increasing levels of planning detail and estimation accuracy.

3. Approving the Project: the Treasury Board approval criteria should be consulted to determine whether your project requires Treasury Board approval. This stage details the requirements of the Treasury Board approval process. Even if your project does not officially require that the Treasury Board project approval process be applied, you can gain by referencing and adopting those components that may provide extra rigour and support to your project approach.

4. Project Implementation: against the project plan and project organization structure defined in the previous stage, the project activities are executed, tracked and measured. The project implementation stage not only includes the completion of planned activities, but also the evaluation of the success and contribution of this effort and the continual review and reflection of project status and outstanding issues against the original project business case.

5. Project Close Out and Wrap-up: one of the key success criteria for continuous process improvement involves defining a formal process for ending a project. This includes evaluating the successful aspects of the project as well as identifying opportunities for improvement, identifying project best practices that can be leveraged in future projects, and evaluating the performance of project team members.

Q1.10.2 Questions
1. What are the phases of the project management process?
2. Explain the various stages involved in the PMP.


1.11 SOFTWARE CONFIGURATION MANAGEMENT PROCESS


SCM is a set of activities designed to control change by identifying the work products that are likely to change, establishing relationships among them, defining

mechanisms for managing different versions of these work products, controlling the changes imposed, and auditing and reporting on the changes made. In other words, SCM is a methodology to control and manage a software development project. SCM concerns itself with answering the question: somebody did something; how can one reproduce it? Often the problem involves not reproducing it identically, but with controlled, incremental changes. Answering the question thus becomes a matter of comparing different results and of analyzing their differences. Traditional CM typically focused on the controlled creation of relatively simple products. Nowadays, implementers of SCM face the challenge of dealing with relatively minor increments under their own control, in the context of the complex system being developed. The traditional SCM process is looked upon as the best-fit solution to handling changes in software projects. It identifies the functional and physical attributes of software at various points in time and performs systematic control of changes to the identified attributes for the purpose of maintaining software integrity and traceability throughout the software development life cycle.

1.11.1 The goals of SCM are generally:
1. Configuration Identification - what code are we working with?
2. Configuration Control - controlling the release of a product and its changes.
3. Status Accounting - recording and reporting the status of components.
4. Review - ensuring completeness and consistency among components.
5. Build Management - managing the process and tools used for builds.
6. Process Management - ensuring adherence to the organization's development process.
7. Environment Management - managing the software and hardware that host our system.
8. Teamwork - facilitating team interactions related to the process.
9. Defect Tracking - making sure every defect has traceability back to the source.
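As a minimal, hedged sketch of the first three goals above (identification, control and status accounting), the Python fragment below models configuration items, a frozen baseline and a change log. The file names, version numbers and change-request identifier are invented for illustration; in practice this bookkeeping is carried out by the version-control tools listed later in this unit.

# Hedged sketch: configuration items (CIs), a frozen baseline, and a change log.

configuration_items = {              # configuration identification: what are we working with?
    "payroll.c": "1.4",
    "payroll_design.doc": "2.0",
    "test_plan.doc": "1.1",
}

def create_baseline(name, items):
    # A baseline is a frozen snapshot of CI versions; it is never edited in place.
    return {"name": name, "items": dict(items)}

baseline_1 = create_baseline("REL_1_0", configuration_items)

change_log = []                      # status accounting: record every approved change

def apply_approved_change(items, ci, new_version, change_request_id):
    # Configuration control: only an approved change moves a CI to a new version.
    change_log.append((change_request_id, ci, items[ci], new_version))
    items[ci] = new_version

apply_approved_change(configuration_items, "payroll.c", "1.5", change_request_id="CR-017")
baseline_2 = create_baseline("REL_1_1", configuration_items)

# The new baseline equals the old baseline plus exactly the approved changes recorded above.
print(baseline_1)
print(baseline_2)
print(change_log)

The point of the sketch is only the ordering: a baseline is frozen, every move of a CI to a new version is tied to an approved change, and the new baseline is the old one plus exactly those recorded changes.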


Figure 1.7: Version Control Schematic Diagram

The SCM process further defines the need to trace changes and the ability to verify that the final delivered software has all the planned enhancements that are supposed to be part of the release. Figure 1.7 above gives an idea of how the different versions of the software being developed are maintained.

1.11.2 SCM Procedures

Traditional SCM identifies four procedures that must be defined for each software project to ensure that a good SCM process is implemented. They are:
1. Configuration Identification
2. Configuration Control
3. Configuration Status Accounting
4. Configuration Authentication

Most of this section covers traditional SCM theory. Do not consider this a boring subject, since this section defines and explains the terms that will be used throughout this document.

1. Configuration Identification
Software is usually made up of several programs. Each program, together with its related documentation and data, can be called a configurable item (CI). The number of CIs in any software project, and the grouping of artifacts that make up a CI, is a decision made by the project. The end product is made up of a set of CIs. The status of the CIs at a given point in time is called a baseline. The baseline serves as a reference point in the software development life cycle. Each new baseline is the sum total of an older baseline plus a series of approved changes made to the CIs.

A baseline is considered to have the following attributes:

1. Functionally complete: A baseline has a defined functionality. The features and functions of this particular baseline are documented and available for reference. Thus the capabilities of the software at a particular baseline are well known.

2. Known quality: The quality of a baseline is well defined, i.e. all known bugs are documented and the software has undergone a complete round of testing before being defined as the baseline.

3. Immutable and completely re-creatable: A baseline, once defined, cannot be changed. The list of the CIs and their versions is set in stone. Also, all the CIs are under version control, so the baseline can be recreated at any point in time.

2. Configuration Control
The process of deciding and coordinating the approved changes for the proposed CIs, and implementing the changes on the appropriate baseline, is called configuration control. It should be kept in mind that configuration control only addresses the process after changes are approved. The act of evaluating and approving changes to software comes under the purview of an entirely different process called change control.

3. Configuration Status Accounting
Configuration status accounting is the bookkeeping process for each release. This procedure involves tracking what is in each version of the software and the changes that led to this version. Configuration status accounting keeps a record of all the changes made to the previous baseline to reach the new baseline.

4. Configuration Authentication
Configuration authentication (CA) is the process of assuring that the new baseline has all the planned and approved changes incorporated. The process involves verifying that all the functional aspects of the software are complete, and also verifying the completeness of the delivery in terms of the right programs, documentation and data being delivered.
The configuration authentication is an audit performed on the delivery before it is opened to the entire world.

1.11.3 Tools that aid Software Configuration Management
1. Concurrent Versions System (CVS)
2. Revision Control System (RCS)
3. Source Code Control System (SCCS)

Commercial Tools
1. Rational Clear Case
2. Polytron Version Control System (PVCS)
3. Microsoft Visual SourceSafe

1.12 CAPABILITY MATURITY MODEL (CMM)

The Capability Maturity Model (CMM) for Software, defined by the Software Engineering Institute (SEI), describes the principles and practices needed to achieve a certain level of software process maturity. The model is intended to help software organizations improve the maturity of their software processes along an evolutionary path from ad hoc, chaotic processes to mature, disciplined software processes. The CMM is designed to guide organizations in improving their software processes so as to build better software faster and at a lower cost. The SEI defines five levels of maturity of a software development process (please refer to Figure 1.8 shown below).

1.12.1 Structure of CMM

The CMM involves the following aspects:

Maturity Levels: a layered framework providing a progression to the discipline needed to engage in continuous improvement. (It is important to state here that an organization develops the ability to assess the impact of a new practice, technology, or tool on its activity. Hence it is not a matter of simply adopting these; rather, it is a matter of determining how innovative efforts influence existing practices. This really empowers projects, teams, and organizations by giving them the foundation to support reasoned choice.)

Key Process Areas: a Key Process Area (KPA) identifies a cluster of related activities that, when performed collectively, achieve a set of goals considered important.

Goals: the goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organization has established at that maturity level. The goals signify the scope, boundaries, and intent of each key process area.

Common Features: common features include practices that implement and institutionalize a key process area. The five types of common features are: Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation.

Key Practices: the key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the key process areas.

1.12.2 Levels of the CMM

Figure 1.8: Levels of Capability Maturity Model
SOFTWARE ENGINEERING

There are five levels of the CMM, as shown in Figure 1.8:

Level 1 - Initial
At maturity level 1, processes are usually ad hoc, and the organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization, and not on the use of proven processes. In spite of this ad hoc, chaotic environment, maturity level 1 organizations often produce products and services that work; however, they frequently exceed the budget and schedule of their projects. Maturity level 1 organizations are characterized by a tendency to over-commit, to abandon processes in time of crisis, and to be unable to repeat their past successes. Level 1 software project success depends on having high-quality people.

Level 2 - Repeatable
At maturity level 2, software development successes are repeatable. The processes may not repeat for all the projects in the organization. The organization may use some basic project management to track cost and schedule. Process discipline helps ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans. Project status and the delivery of services are visible to management at defined points (for example, at major milestones and at the completion of major tasks). Basic project management processes are established to track cost, schedule, and functionality. The minimum process discipline is in place to repeat earlier successes on projects with similar applications and scope. There is still a significant risk of exceeding cost and time estimates.

Level 3 - Defined
The organization's set of standard processes, which is the basis for level 3, is established and improved over time. These standard processes are used to establish consistency across the organization. Projects establish their defined processes from the organization's set of standard processes according to tailoring guidelines.


NOTES

The organization's management establishes process objectives based on the organization's set of standard processes and ensures that these objectives are appropriately addressed. A critical distinction between level 2 and level 3 is the scope of standards, process descriptions, and procedures. At level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on a particular project). At level 3, the standards, process descriptions, and procedures for a project are tailored from the organization's set of standard processes to suit a particular project or organizational unit. An effective project management system is implemented with the help of good project management software.

Level 4 - Quantitatively Managed
Using precise measurements, management can effectively control the software development effort. In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Organizations at this level set quantitative quality goals for both the software process and software maintenance. Sub-processes are selected that significantly contribute to overall process performance. These selected sub-processes are controlled using statistical and other quantitative techniques. A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable.

Level 5 - Optimizing
Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements. Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement. The effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. Both the defined processes and the organization's set of standard processes are targets of measurable improvement activities.


Process improvements to address common causes of process variation, and to measurably improve the organization's processes, are identified, evaluated, and deployed. Optimizing processes that are nimble, adaptable and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organization. The organization's ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate and share learning. A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At maturity level 4, processes are concerned with addressing special causes of process variation and providing statistical predictability of the results. Though processes may produce predictable results, the results may be insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing common causes of process variation and changing the process (that is, shifting the mean of the process performance) to improve process performance (while maintaining statistical predictability) in order to achieve the established quantitative process-improvement objectives.

Associated with each level from level 2 onwards are key areas on which an organization is required to focus in order to move on to the next level. Such focus areas are called Key Process Areas (KPAs) in CMM parlance. As part of level 2 maturity, one of the KPAs that has been identified is SCM. Thus any project that has a good SCM process can be regarded as satisfying one of the KPAs of the CMM.

Having seen the various software development life cycle models, let us now study the first stage of the SDLC, which is requirements engineering: how the requirements are elicited from the problem definition, what the challenges in eliciting requirements are, and what an SRS is. All of these are covered in detail in the next unit.

NOTES

Q1.12.3 Questions
1. What do you mean by configuration management in the software engineering perspective?
2. How important is the SCM process in software engineering?
3. Define the various steps in the SCM process.
4. Explain in detail the software configuration management process.
5. What is CMM?
6. Explain the importance of CMM with respect to software engineering.
7. Briefly outline the structure of the CMM.
8. Explain the various levels of CMM.

REFERENCES
1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, 6th edition, McGraw-Hill International, 2005.
2. http://www.ics.uci.edu/~wscacchi/Papers/SE-Encyc/Process-Models-SEEncyc.pdf

UNIT II
2.1 INTRODUCTION
In software engineering, requirements analysis encompasses those tasks that go into determining the requirements of a new or altered system, taking account of the possibly conflicting requirements of the various stakeholders, such as users. Requirements analysis is critical to the success of a project. Systematic requirements analysis is also known as requirements engineering. It is sometimes referred to loosely by names such as requirements gathering, requirements capture, or requirements specification. The term requirements analysis can also be applied specifically to the analysis proper (as opposed to elicitation or documentation of the requirements, for instance). Requirements must be measurable, testable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design.

2.2 LEARNING OBJECTIVES


1. The software requirement analysis techniques.
2. The Software Requirement Specification (SRS).
3. The characteristics of a good SRS document.
4. The problems faced during the requirements analysis phase.
5. Software requirement validation.
6. Software requirement metrics.

What is a Requirement?

The IEEE definitions of requirement are given below.

1. A condition or capability needed by a user to solve a problem or achieve an objective.
NOTES

2. A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification or other formally imposed document.
3. A documented representation of a condition or capability as in (1) or (2).

2.3 REQUIREMENTS ENGINEERING PROCESS


The block diagram (figure 2.0) shown below gives an overall idea about the requirement engineering process. It starts with the feasibility study and ends with the preparation of the requirements document.

(Figure 2.0 depicts the requirement engineering process: a feasibility study produces a feasibility report; requirements elicitation and analysis produce system models; requirements specification produces the user and system requirements; and requirements validation produces the requirements document.)

Figure 2.0: Requirement Engineering Process

Main techniques

Conceptually, requirements analysis includes the following types of activity:
1. Feasibility Study: a feasibility study decides whether or not the proposed system is worthwhile.
2. Eliciting and Analyzing Requirements: the task of communicating with customers and users to determine what their requirements are; analysis is the task of determining whether the stated requirements are unclear, incomplete, ambiguous or contradictory, and of resolving these issues.

3. Requirements Specification: requirements may be documented in various forms, such as natural-language documents, use cases, user stories, or process specifications.


4. Requirements Validation: concerned with demonstrating that the requirements define the system that the customer really wants. The cost of requirements errors is high, so validation is very important.

The outputs of the requirement engineering process are the agreed requirements, the system specification and the system models, as shown in Figure 2.1 below.
(Figure 2.1 shows the requirements engineering process taking as inputs existing systems information, stakeholder needs, organisational standards, regulations and domain information, and producing as outputs the agreed requirements, the system specification and the system models.)

Figure 2.1: Inputs and Outputs of the Requirement Engineering Process

2.4 SOFTWARE REQUIREMENTS PROBLEMS


The common problems encountered in requirements analysis are described below.

1. Customers Don't Know What They Want: this is often true because much of development has to do with technology that is beyond the customer's knowledge. In software development, especially of large and complex software with many interfaces, requirements don't always directly affect customers; requirements often focus on the back end, processing and system interfaces.

2. Different stakeholders may have conflicting requirements: the software project has stakeholders with varied knowledge and different levels of stake in the project. There is always a good chance that they have different views, which often conflict with each other. We need to consider all the possible requirements and study them in order to resolve the conflicts. Trade-offs need to be made most of the time.

3. Stakeholders express requirements in their own terms. We need to remember that the stakeholders of the project are not technical people. They just have a vague idea as to what they want. They will explain their idea in their own terms and may often give us a problem statement that conveys very little about their needs. 4. Requirements Change during the Project New stakeholders may emerge and the business environment change. Anyone involved in development should have a change request process in place, even a oneperson business. Accept that there will be changes and prepare a change request when this happens. Show the customer how it affects the milestones and get sign off. Another way is to have a phase 1 or soft launch and then add the new requirements for phase 2. 5. Timeline Trouble The customer accepts responsibility for the delay. Be realistic. Map out the timeline based on an analysis of the requirements. If its tight leaving no room for error or impossible communicate this. Which would you rather have? No client because you said the timeline wasnt doable or having a client and missing deadlines that could hurt a companys reputation? 6. Communication Gaps All the stakeholders of the project may not be available at all times and any discussion made on the project should be made known to all the stakeholders. Most of the times, there is a good chance that there will be communication gap and hence things are likely to get lost in between. 7. Organisational and political factors may influence the system requirements This is a difficult problem to overcome with diversity of variables that can get in the way. One way is to communicate in terms of whats in it for the other person rather than your firm or someone else in your clients company. Q 2.4 Questions a) What are the problems faced in the Software Requirements? b) How can the issues in the requirements phase be effectively managed so that we prepare a good SRS? c) Requirements keep changing Write a note on this.

2.5 THE REQUIREMENTS SPIRAL


As in the case of the spiral model of software development, the requirements spiral is one of the models followed for eliciting requirements and preparing the SRS document. Figure 2.2, shown below, illustrates the requirements spiral.


The spiral comprises four activities: requirements discovery, requirements classification and organisation, requirements prioritization and negotiation, and requirements documentation.

Figure 2.2: The Requirements Spiral
Requirements analysis can be a long and arduous process during which many delicate psychological skills are involved. New systems change the environment and relationships between people, so it is important to identify all the stakeholders, take into account all their needs and ensure they understand the implications of the new systems. Analysts can employ several techniques to elicit the requirements from the customer. Historically, this has included such things as holding interviews, or holding focus groups (more aptly named in this context as requirements workshops - see below) and creating requirements lists. More modern techniques include prototyping, and use cases. Where necessary, the analyst will employ a combination of these methods to establish the exact requirements of the stakeholders, so that a system that meets the business needs is produced.

Elements shown: problem statement, requirements elicitation, requirements specification (functional model, nonfunctional requirements), analysis, analysis model (dynamic model, analysis objects).
Figure 2.3: Schematic Diagram of the Requirements Elicitation and Analysis

2.6 TECHNIQUES FOR ELICITING REQUIREMENTS


There are many ways by which the requirements can be elicited for the given problem statement. Eliciting requirements is a very tough task, as the problem statement given by the customer is often ambiguous and unclear. It is the duty of the requirements engineer to elicit the requirements from the given problem statement. Many techniques for the elicitation of requirements have been proposed, and the important ones are discussed here.
2.6.1 Stakeholder interviews
Stakeholder interviews are a common method used in requirement analysis. Some selection is usually necessary, cost being one factor in deciding whom to interview. These interviews may reveal requirements not previously envisaged as being within the scope of the project, and requirements may be contradictory. However, each stakeholder will generally have an idea of his or her expectations or will have visualized the requirements.
2.6.2 Requirement workshops
In some cases it may be useful to gather stakeholders together in requirement workshops. These workshops are more properly termed Joint Requirements
Development (JRD) sessions, where requirements are jointly identified and defined by stakeholders. It may be useful to carry out such workshops in a controlled environment, so that the stakeholders are not distracted. A facilitator can be used to keep the process focused, and these sessions will often benefit from a dedicated scribe to document the discussion. Facilitators may make use of a projector and diagramming software or may use props as simple as paper and markers. One role of the facilitator may be to ensure that the weight attached to proposed requirements is not overly dependent on the personalities of those involved in the process.
2.6.3 Ethnography
Social scientists spend a considerable time observing and analysing how people actually work. People do not have to explain or articulate their work. Social and organisational factors of importance may be observed. Ethnographic studies have shown that work is usually richer and more complex than suggested by simple system models.
2.6.3.1 Scope of Ethnography
The scope of ethnography, shown in figure 2.4 below, indicates how requirements are elicited and the SRS is prepared using the ethnographic technique. It covers requirements that are derived from the way people actually work, rather than the way in which process definitions suggest that they ought to work, and requirements that are derived from cooperation and awareness of other people's activities.
Elements shown: ethnographic analysis, debriefing meetings, focused ethnography, prototype evaluation, generic system development, system prototyping.


Figure 2.4: Scope of Ethnography
2.6.4 Prototypes
In the mid-1980s, prototyping came to be seen as the solution to the requirements analysis problem. Prototypes are mock-ups of the screens of an application which
allow users to visualize the application that isn't yet constructed. Prototypes help users get an idea of what the system will look like, and make it easier for users to make design decisions without waiting for the system to be built. Major improvements in communication between users and developers were often seen with the introduction of prototypes. Early views of the screens led to fewer changes later and hence reduced overall costs considerably. However, over the next decade, while proving a useful technique, prototyping did not solve the requirements problem:
1. Managers, once they see the prototype, have a hard time understanding that the

finished design will not be produced for some time. 2. Designers often feel compelled to use the patched-together prototype code in the real system, because they are afraid to waste time starting again. 3. Prototypes principally help with design decisions and user interface design. However, they can't tell you what the requirements originally were. 4. Designers and end users can focus too much on user interface design and too little on producing a system that serves the business process. Prototypes can be flat diagrams or working applications using synthesized functionality. Wireframes are made in a variety of graphic design documents, and often remove all color from the software design in instances where the final software is expected to have graphic design applied to it. This helps to prevent confusion over the final visual look and feel of the application.
2.6.5 Use cases
A use case is a technique for documenting the potential requirements of a new system or software change. Each use case provides one or more scenarios that convey how the system should interact with the end user or another system to achieve a specific business goal. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert. Use cases are often co-authored by software developers and end users. Use cases are deceptively simple tools for describing the behavior of the software. A use case contains a textual description of all of the ways in which the intended users could work with the software through its interface. Use cases do not describe any internal workings of the software, nor do they explain how that software will be implemented. They simply show the steps that the user follows to use the software to do his work. All of the ways that the users interact with the software can be described in this manner.


During the 1990s, use cases rapidly became the most common practice for capturing functional requirements. This is especially the case within the object-oriented community, where they originated, but their applicability is not restricted to object-oriented systems, because use cases are not object-oriented in nature. Each use case focuses on describing how to achieve a single business goal or task. From a traditional software engineering perspective, a use case describes just one feature of the system. For most software projects, this means that perhaps tens or sometimes hundreds of use cases are needed to fully specify the new system. The degree of formality of a particular software project and the stage of the project will influence the level of detail required in each use case.
Article Printing Use Case: an Illustrative Example
The example discussed, shown in figure 2.5 below, concerns the printing of an article. The actors for this particular system are the library user, the book supplier and the library staff. The use cases are article search, article printing, user administration and catalogue services.


Actors: Library User, Library Staff, Supplier. Use cases: article search, article printing, user administration, catalogue services.
Figure 2.5: Use Case Diagram for the Article Printing Example


A use case defines the interactions between external actors and the system under consideration to accomplish a business goal. Actors are parties outside the system that interact with the system; an actor can be a class of users, a role users can play, or another system. Use cases treat the system as a black box, and the interactions with the system, including system responses, are as perceived from outside the system. This is deliberate policy, because it simplifies the description of requirements and avoids the trap of making assumptions about how this functionality will be accomplished. A use case should:
1. describe a business task to serve a business goal
2. be at an appropriate level of detail
3. be short enough to implement by one software developer in a single release
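A use case such as the article printing example above is usually written as structured text. The sketch below records one in a simple Python data structure, purely as an illustration of the kind of information a textual use case carries; the field names and the flow steps are assumptions made for this example and are not taken from the text.

from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    """A minimal textual use case: who interacts with the system, and how."""
    name: str
    actors: List[str]
    goal: str
    main_flow: List[str] = field(default_factory=list)

# Hypothetical content: the actor comes from figure 2.5; the steps are illustrative only.
article_printing = UseCase(
    name="Article printing",
    actors=["Library User"],
    goal="Obtain a printed copy of a selected article",
    main_flow=[
        "1. Library User searches the catalogue for an article.",
        "2. Library User selects the article to print.",
        "3. System checks the user's printing rights.",
        "4. System sends the article to the chosen printer.",
        "5. System confirms that printing has completed.",
    ],
)

if __name__ == "__main__":
    print(f"Use case: {article_printing.name} (actors: {', '.join(article_printing.actors)})")
    for step in article_printing.main_flow:
        print("  " + step)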

Use cases can be very good for establishing functional requirements, but they are not suited to capturing non-functional requirements. However, performance engineering specifies that each critical use case should have an associated performance-oriented non-functional requirement.
2.7 SOFTWARE REQUIREMENTS SPECIFICATION (SRS)
A software requirements specification (SRS) is a complete description of the behavior of the system to be developed. It includes a set of use cases that describe all of the interactions that the users will have with the software. Use cases are also known as functional requirements. In addition to use cases, the SRS also contains non-functional (or supplementary) requirements. Non-functional requirements are requirements which impose constraints on the design or implementation (such as performance requirements, quality standards, or design constraints). Recommended approaches for the specification of software requirements are described by IEEE 830-1998. This standard describes possible structures, desirable contents, and qualities of a software requirements specification.
Stakeholder identification
A major new emphasis in the 1990s was a focus on the identification of stakeholders. It is increasingly recognized that stakeholders are not limited to the organization employing the analyst. Other stakeholders will include:
1. those organizations that integrate (or should integrate) horizontally with the


organization the analyst is designing the system for
2. any back office systems or organizations
3. senior management
Stakeholder issues
There are many ways the users can inhibit requirements gathering:
1. Users don't understand what they want.
2. Users won't commit to a set of written requirements.
3. Users insist on new requirements after the cost and schedule have been fixed.
4. Communication with users is slow.
5. Users often do not participate in reviews or are incapable of doing so.
6. Users are technically unsophisticated.
7. Users don't understand the development process.

This may lead to the situation where user requirements keep changing even when system or product development has been started.
Engineer/developer issues
Possible problems caused by engineers and developers during requirements analysis are:
1. Technical personnel and end users may have different vocabularies. Consequently, they can believe they are in perfect agreement until the finished product is supplied.
2. Engineers and developers may try to make the requirements fit an existing system or model, rather than develop a system specific to the needs of the client.
3. Analysis may often be carried out by engineers or programmers, rather than by personnel with the people skills and the domain knowledge to understand a client's needs properly.
Attempted solutions
One attempted solution to communications problems has been to employ specialists in business or system analysis. Techniques introduced in the 1990s, like prototyping, Unified Modeling Language (UML), use cases, and agile software development, were also intended as solutions to problems encountered with previous methods.
Also, a new class of application simulation or application definition tools has entered the market. These tools are designed to bridge the communication gap between business users and the IT organization and also to allow applications to be test marketed before any code is produced. The best of these tools offer:
1. electronic whiteboards to sketch application flows and test alternatives
2. ability to capture business logic and data needs
3. ability to generate high-fidelity prototypes that closely imitate the final application
4. interactivity
5. capability to add contextual requirements and other comments
6. ability for remote and distributed users to run and interact with the simulation

Q 2.7 Questions 1. What is Requirements Analysis? 2. What is the need for the requirements to be analyzed? State the importance of the same. 3. What are the activities in the Requirements Analysis Process? 4. What is a use case? 5. Explain the requirements analysis phase in detail with an example. Also, mention how serious it would be if the requirements are not analyzed properly.

2.8 SOFTWARE REQUIREMENTS SPECIFICATION


2.8.1 What is a Software Requirements Specification?
An SRS is basically an organization's understanding (in writing) of a customer or potential client's system requirements and dependencies at a particular point in time (usually) prior to any actual design or development work. It is a two-way insurance policy that assures that both the client and the organization understand the other's requirements from that perspective at a given point in time. The SRS document itself states in precise and explicit language those functions and capabilities a software system must provide, as well as any required constraints by which the system must abide. The SRS also functions as a blueprint for completing a project with as little cost growth as possible. The SRS is often referred to as the parent document because all subsequent project management documents, such as design specifications, statements of work, software architecture specifications, testing and validation plans, and documentation plans, are related to it.
It is important to note that an SRS contains functional and nonfunctional requirements only; it doesn't offer design suggestions, possible solutions to technology or business issues, or any other information other than what the development team understands the customer's system requirements to be. A well-designed, well-written SRS accomplishes four major goals:
1. It provides feedback to the customer. An SRS is the customer's assurance that the development organization understands the issues or problems to be solved and the software behavior necessary to address those problems. Therefore, the SRS should be written in natural language (versus a formal language, explained later in this section), in an unambiguous manner that may also include charts, tables, data flow diagrams, decision tables, and so on.
2. It decomposes the problem into component parts. The simple act of writing down software requirements in a well-designed format organizes information, places borders around the problem, solidifies ideas, and helps break down the problem into its component parts in an orderly fashion.
3. It serves as an input to the design specification. As mentioned previously, the SRS serves as the parent document to subsequent documents, such as the software design specification and statement of work. Therefore, the SRS must contain sufficient detail in the functional system requirements so that a design solution can be devised.
4. It serves as a product validation check. The SRS also serves as the parent document for testing and validation strategies that will be applied to the requirements for verification.
SRSs are typically developed during the first stages of Requirements Development, which is the initial product development phase in which information is gathered about which requirements are needed and which are not. This information-gathering stage can include onsite visits, questionnaires, surveys, interviews, and perhaps a return-on-investment (ROI) analysis or needs analysis of the customer or client's current business environment. The actual specification, then, is written after the requirements have been gathered and analyzed.
2.8.2 Why Technical Writers should be involved with Software Requirements Specifications
Unfortunately, much of the time, systems architects and programmers write SRSs with little (if any) help from the technical communications organization. And when
that assistance is provided, it's often limited to an edit of the final draft just prior to going out the door. Having technical writers involved throughout the entire SRS development process can offer several benefits:
1. Technical writers are skilled information gatherers, ideal for eliciting and articulating customer requirements. The presence of a technical writer on the requirements-gathering team helps balance the type and amount of information extracted from customers, which can help improve the SRS.
2. Technical writers can better assess and plan documentation projects and better meet customer document needs. Working on SRSs provides technical writers with an opportunity for learning about customer needs firsthand, early in the product development process.
3. Technical writers know how to determine the questions that are of concern to the user or customer regarding ease of use and usability. Technical writers can then take that knowledge and apply it not only to the specification and documentation development, but also to user interface development, to help ensure the UI (User Interface) models the customer requirements.
4. Technical writers involved early and often in the process can become an information resource throughout the process, rather than an information gatherer at the end of the process.
In short, a requirements-gathering team consisting solely of programmers, product marketers, systems analysts/architects, and a project manager runs the risk of creating a specification that may be too heavily loaded with technology-focused or marketing-focused issues. The presence of a technical writer on the team helps place at the core of the project those user or customer requirements that provide more of an overall balance to the design of the SRS, product, and documentation.
2.8.3 What Kind of Information Should an SRS Include?
You probably will be a member of the SRS team (if not, ask to be), which means SRS development will be a collaborative effort for a particular project. In these cases, your company will have developed SRSs before, so you should have examples (and, likely, the company's SRS template) to use. But, let's assume you'll be starting from scratch. Several standards organizations (including the IEEE) have identified nine topics that must be addressed when designing and writing an SRS:

1. Interfaces
2. Functional Capabilities
3. Performance Levels
4. Data Structures/Elements
5. Safety
6. Reliability
7. Security/Privacy
8. Quality
9. Constraints and Limitations


An SRS document typically includes four ingredients, as discussed in the following sections:
1. A template
2. A method for identifying requirements and linking sources
3. Business operation rules
4. A traceability matrix

2.8.4 SRS Template
The first and biggest step in writing an SRS is to select an existing template that you can fine-tune for your organizational needs. There is no standard specification template for all projects in all industries, because the individual requirements that populate an SRS are unique not only from company to company, but also from project to project within any one company. The key is to select an existing template or specification to begin with, and then adapt it to meet your needs. It would be almost impossible to find a specification or specification template that meets your particular project requirements exactly. But using other templates as guides is what is recommended in the literature on specification development. Look at what someone else has done, and modify it to fit your project requirements. Table 2.1 shows what a basic SRS outline might look like. This example is an adaptation and extension of the IEEE Standard 830-1998.
2.8.5 Table 2.1: A sample of a basic SRS outline
1. Introduction
1.1 Purpose
1.2 Document conventions
1.3 Intended audience
1.4 Additional information
1.5 Contact information/SRS team members
1.6 References
2. Overall Description
2.1 Product perspective
2.2 Product functions
2.3 User classes and characteristics
2.4 Operating environment
2.5 User environment
2.6 Design/implementation constraints
2.7 Assumptions and dependencies
3. External Interface Requirements
3.1 User interfaces
3.2 Hardware interfaces
3.3 Software interfaces
3.4 Communication protocols and interfaces
4. System Features
4.1 System feature A
4.1.1 Description and priority
4.1.2 Action/result
4.1.3 Functional requirements
4.2 System feature B
5. Other Nonfunctional Requirements
5.1 Performance requirements
5.2 Safety requirements
5.3 Security requirements
5.4 Software quality attributes
5.5 Project documentation
5.6 User documentation
6. Other Requirements
Appendix A: Terminology/Glossary/Definitions list

Table 2.2: A sample of a more detailed SRS outline
1. Scope
1.1 Identification. Identify the system and the software to which this document applies, including, as applicable, identification number(s), title(s), abbreviation(s), version number(s), and release number(s).
1.2 System overview. State the purpose of the system or subsystem to which this document applies.
1.3 Document overview. Summarize the purpose and contents of this document. This document comprises six sections:
a. Scope
b. Referenced documents
c. Requirements
d. Qualification provisions
e. Requirements traceability
f. Notes


Describe any security or privacy considerations associated with its use.
2. Referenced Documents
2.1 Project documents. Identify the project management system documents here.
2.2 Other documents.
2.3 Precedence.
2.4 Source of documents.
3. Requirements
This section shall be divided into paragraphs to specify the Computer Software Configuration Item (CSCI) requirements, that is, those
characteristics of the CSCI that are conditions for its acceptance. CSCI requirements are software requirements generated to satisfy the system requirements allocated to this CSCI. Each requirement shall be assigned a project-unique identifier to support testing and traceability and shall be stated in such a way that an objective test can be defined for it.
3.1 Required states and modes.
3.2 CSCI capability requirements.
3.3 CSCI external interface requirements.
3.4 CSCI internal interface requirements.
3.5 CSCI internal data requirements.
3.6 Adaptation requirements.
3.7 Safety requirements.
3.8 Security and privacy requirements.
3.9 CSCI environment requirements.
3.10 Computer resource requirements.
3.11 Software quality factors.
3.12 Design and implementation constraints.
3.13 Personnel requirements.
3.14 Training-related requirements.
3.15 Logistics-related requirements.
3.16 Other requirements.
3.17 Packaging requirements.
3.18 Precedence and criticality requirements.
4. Qualification Provisions. To be determined.
5. Requirements Traceability. To be determined.
6. Notes

This section contains information of a general or explanatory nature that may be helpful, but is not mandatory.
6.1 Intended use. This Software Requirements Specification shall
6.2 Definitions used in this document. Insert here an alphabetic list of definitions and their source if different from the declared sources specified in the Documentation Standard.
6.3 Abbreviations used in this document. Insert here an alphabetic list of the abbreviations and acronyms if not identified in the declared sources specified in the Documentation Standard.
6.4 Changes from previous issue. Will not be applicable for the initial issue. Revisions shall identify the method used to identify changes from the previous issue.


2.8.6 Identify and Link Requirements with Sources
As noted earlier, the SRS serves to define the functional and nonfunctional requirements of the product. Functional requirements each have an origin from which they came, be it a use case (which is used in system analysis to identify, clarify, and organize system requirements, and consists of a set of possible sequences of interactions between systems and users in a particular environment and related to a particular goal), government regulation, industry standard, or a business requirement. In developing an SRS, you need to identify these origins and link them to their corresponding requirements. Such a practice not only justifies the requirement, but it also helps assure project stakeholders that frivolous or spurious requirements are kept out of the specification. To link requirements with their sources, each requirement included in the SRS
should be labeled with a unique identifier that can remain valid over time as requirements are added, deleted, or changed. Such a labeling system helps maintain change-record integrity while also serving as an identification system for gathering metrics. You can begin a separate requirements identification list that ties a requirement identification (ID) number with a description of the requirement. Eventually, that requirement ID and description become part of the SRS itself and then part of the Requirements Traceability Matrix, discussed in subsequent paragraphs.
2.8.7 Identifying Requirements and linking them to their sources
Table 2.3, shown below, is a sample that illustrates how requirements are identified and linked to their sources. Here the business rule sources are used.
Table 2.3: This table identifies requirements and links them to their sources
No. | Paragraph No. | Requirement | Business Rule Source
1 | 5.1.4.1 | Understand/communicate using SMTP protocol | IEEE STD xx-xxxx
2 | 5.1.4.1 | Understand/communicate using POP protocol | IEEE STD xx-xxxx
3 | 5.1.4.1 | Understand/communicate using IMAP protocol | IEEE STD xx-xxxx
4 | 5.1.4.2 | Open at same rate as OE | Use Case Doc 4.5.4
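As a sketch of how such requirement-to-source links might be kept in a machine-readable form, so that requirements without a justified source can be flagged, consider the following, which mirrors the rows of Table 2.3. The record layout and the helper function are assumptions made for illustration, not part of any standard.

# Minimal sketch: link each requirement ID to its business rule source,
# mirroring Table 2.3, and flag any requirement that has no recorded source.
requirements = {
    "REQ-1": {"paragraph": "5.1.4.1",
              "text": "Understand/communicate using SMTP protocol",
              "source": "IEEE STD xx-xxxx"},
    "REQ-2": {"paragraph": "5.1.4.1",
              "text": "Understand/communicate using POP protocol",
              "source": "IEEE STD xx-xxxx"},
    "REQ-3": {"paragraph": "5.1.4.1",
              "text": "Understand/communicate using IMAP protocol",
              "source": "IEEE STD xx-xxxx"},
    "REQ-4": {"paragraph": "5.1.4.2",
              "text": "Open at same rate as OE",
              "source": "Use Case Doc 4.5.4"},
    # A hypothetical requirement with no source would be a candidate for removal.
    "REQ-5": {"paragraph": "5.1.4.3",
              "text": "Requirement added without any justification",
              "source": None},
}

def unsourced(reqs):
    """Return the IDs of requirements that cannot be traced to any source."""
    return [rid for rid, rec in reqs.items() if not rec["source"]]

if __name__ == "__main__":
    for rid in unsourced(requirements):
        print(f"{rid} has no recorded source and should be reviewed.")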

2.8.8 Establish Business Rules for Contingencies and Responsibilities
A top-quality SRS should include plans for planned and unplanned contingencies, as well as an explicit definition of the responsibilities of each party, should a contingency be implemented. Some business rules are easier to work around than others. For example, if a customer wants to change a requirement that is tied to a government regulation, it may not be ethical and/or legal to make that change. A project manager may be responsible for ensuring that a government regulation is followed as it relates to a project requirement; however, if a contingency is required, then the responsibility for that requirement may shift from the project manager to a regulatory attorney. The SRS should anticipate such actions to the furthest extent possible.

2.8.9 Establish a Requirements Traceability Matrix
The business rules for contingencies and responsibilities can be defined explicitly within a Requirements Traceability Matrix (RTM), or contained in a separate document and referenced in the matrix, as the example in Table 2.3 illustrates. Such a practice leaves no doubt as to responsibilities and actions under certain conditions as they occur during the product-development phase. The RTM functions as a sort of chain-of-custody document for requirements and can include pointers to links from requirements to sources, as well as pointers to business rules. For example, any given requirement must be traced back to a specified need, be it a use case, business essential, industry-recognized standard, or government regulation. As mentioned previously, linking requirements with sources minimizes or even eliminates the presence of spurious or frivolous requirements that lack any justification. The RTM is another record of mutual understanding, but it also helps during the development phase. As software design and development proceed, the design elements and the actual code must be tied back to the requirement(s) that define them. The RTM is completed as development progresses; it can't be completed beforehand (see Table 2.3).
2.8.10 Writing an SRS
Unlike a formal language that allows developers and designers some latitude, the natural language of SRSs must be exact, without ambiguity, and precise, because the design specification, statement of work, and other project documents are what drive the development of the final product. That final product must be tested and validated against the design and original requirements. Specification language that allows for interpretation of key requirements will not yield a satisfactory final product and will likely lead to cost overruns, extended schedules, and missed deliverable deadlines.
2.8.11 Quality characteristics of an SRS
Table 2.4 shows the fundamental characteristics of a quality SRS, which were originally presented at the April 1998 Software Technology Conference presentation "Doing Requirements Right the First Time". These quality characteristics are closely tied to what are referred to as indicators of strength and weakness, which will be defined next.

How do we know when we've written a quality specification? The most obvious answer is that a quality specification is one that fully addresses all the customer requirements for a particular product or system. That's part of the answer. While many quality attributes of an SRS are subjective, we do need indicators or measures that provide a sense of how strong or weak the language is in an SRS. A strong SRS is one in which the requirements are tightly, unambiguously, and precisely defined in such a way that leaves no other interpretation or meaning to any individual requirement. There is much more that could be said about requirements and specifications. This information will help you get started when you are called upon, or step up, to help the development team. Writing top-quality requirements specifications begins with a complete definition of customer requirements. Coupled with a natural language that incorporates strength and weakness quality indicators (not to mention the adoption of a good SRS template), technical communications professionals well-trained in requirements gathering, template design, and natural language use are in the best position to create and add value to such critical project documentation.
Table 2.4: Quality Characteristics of an SRS
SRS Quality Characteristic / What It Means
Complete
SRS defines precisely all the go-live situations that will be encountered and the system's capability to successfully address them.
Consistent
SRS capability functions and performance levels are compatible, and the required quality features (security, reliability, etc.) do not negate those capability functions. For example, the only electric hedge trimmer that is safe is one that is stored in a box and not connected to any electrical cords or outlets.
Accurate
SRS precisely defines the system's capability in a real-world environment, as well as how it interfaces and interacts with it. This aspect of requirements is a significant problem area for many SRSs.

Modifiable

The logical, hierarchical structure of the SRS should facilitate any necessary modifications (grouping related issues together and separating them from unrelated issues makes the SRS easier to modify).


Ranked

Individual requirements of an SRS are hierarchically arranged according to stability, security, perceived ease/difficulty of implementation, or other parameter that helps in the design of that and subsequent documents.

Testable

An SRS must be stated in such a manner that unambiguous assessment criteria (pass/fail or some quantitative measure) can be derived from the SRS itself.

Traceable

Each requirement in an SRS must be uniquely identified to a source (use case, government requirement, industry standard, etc.)

Unambiguous

SRS must contain requirements statements that can be interpreted in one way only. This is another area that creates significant problems for SRS development because of the use of natural language.

Valid

A valid SRS is one in which all parties and project participants can understand, analyze, accept, or approve it. This is one of the main reasons SRSs are written using natural language.

Verifiable

A verifiable SRS is consistent from one level of abstraction to another. Most attributes of a specification are subjective and a conclusive assessment of quality requires a technical review by domain experts. Using indicators of strength and weakness provide some evidence that preferred attributes are or are not present.


Q 2.8.12 Questions 1. What is a Software Requirements Specification? 2. What is the need for an SRS? 3. What are the ways in which the requirements can be elicited? 4. What makes a good SRS? 5. Explain the SRS in detail.

2.9 SOFTWARE REQUIREMENTS VALIDATION


It is concerned with demonstrating that the requirements define the system that the customer really wants. Requirements error costs are high, so validation is very important. Fixing a requirements error after delivery may cost up to 100 times the cost of fixing an implementation error.
2.9.1 Requirements Validation Techniques
1. Requirements reviews: systematic manual analysis of the requirements.
2. Prototyping: using an executable model of the system to check requirements.
3. Test-case generation: developing tests for requirements to check testability.
4. Traceability matrix: concerned with the relationships between requirements, their sources and the system design. A traceability matrix is a report from the requirements database or repository. What information the report contains depends on your need. Information requirements determine the associated information that you store with the requirements. Requirements management tools capture associated information or provide the capability to add it. Traceability can be of several kinds:
1. Source traceability: links from requirements to the stakeholders who proposed these requirements.
2. Requirements traceability: links between dependent requirements.
3. Design traceability: links from the requirements to the design.

The examples show forward and backward tracing between user and system requirements. User requirement identifiers begin with U and system requirements with S. Tracing S12 to its source makes it clear this requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected. Table 2.5: Forward Traceability: an illustrative example


Backward Traceability: an illustrative example

2.9.2 Verification and Validation activities should include:
1. Analyzing software requirements to determine whether they are consistent with, and within the scope of, the system requirements.
2. Assuring that the requirements are testable and capable of being satisfied.
3. Creating a preliminary version of the Acceptance Test Plan, including a verification matrix, which relates requirements to the tests used to demonstrate that requirements are satisfied.
4. Beginning development, if needed, of test beds and test data generators.
5. The phase-ending Software Requirements Review (SRR).
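A verification matrix of the kind mentioned in point 3 can be kept as a simple mapping from requirement identifiers to the tests intended to demonstrate them. The sketch below, using entirely hypothetical identifiers, shows one way such a matrix might be checked for requirements not yet covered by any test.

# Minimal sketch of a verification matrix: requirement ID -> test case IDs.
# All identifiers here are hypothetical and used only for illustration.
verification_matrix = {
    "U1": ["TC-01", "TC-02"],
    "U2": ["TC-03"],
    "S1": [],          # no test yet: either untestable or not yet planned
}

def uncovered_requirements(matrix):
    """Return requirement IDs that have no associated test case."""
    return sorted(rid for rid, tests in matrix.items() if not tests)

if __name__ == "__main__":
    missing = uncovered_requirements(verification_matrix)
    if missing:
        print("Requirements without tests:", ", ".join(missing))
    else:
        print("Every requirement has at least one test.")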


Q 2.9.3 Questions 1. Write a note on requirements validation. 2. What is the need for the requirements to be validated? 3. What are the various activities involved in the requirements validation phase?

2.10 REQUIREMENTS METRICS


This section presents a simple set of requirements-related metrics that projects can adopt when adding requirements management practices to their existing development process, or as part of a broader effort to improve the process of eliciting, documenting, and managing requirements throughout the software lifecycle to support organizational goals or attain various certifications. The identified metrics are not meant to represent a comprehensive set. Rather, they represent a simple set of measurements that projects new to requirements management can choose from, depending on their needs. Regardless of the software process being followed, every project documents requirements in some form using a variety of artifacts. However, many projects lack even the simplest requirements-related measurements to help manage the project to successful completion, avoid rework, control scope, or manage change during the project. The identified metrics can be applied to virtually any software development effort. The identified metrics fall into three categories. First, there are metrics that help assess the goodness of the requirements process itself. Second, there are metrics that provide the project manager and project leaders with objective information to help guide the project to successful completion. Finally, there are metrics to help assess the impact requirements management is having on overall project costs and product quality. Some of these metrics can be gathered from requirements management tools; others need to be gathered from the tools used for managing change requests or tracking defects.
2.10.1 Measuring the requirements process
Changes to the requirements of a system should be expected and encouraged early in the lifecycle as the stakeholders and development team reach a common understanding of what the system should do. However, excessive changes to the requirements, especially later in the lifecycle, can lead to project failure. Sometimes the

failure is spectacular; millions of dollars are spent on a project that is ultimately cancelled. Other cases are less extreme, and perhaps result only in schedule slippage, reduced functionality, customer dissatisfaction, or lost business opportunities. The following metrics measure the amount of change on a project and whether those changes are related to the requirements. Excessive requirements-related change will require corrective action and may be an indicator of a broken requirements process.
1. Frequency of change in the total requirements set
2. Rate of introduction of new requirements
3. Number of requirements changes to a requirements baseline
4. Percentage of defects with requirement errors as the root cause
5. Number of requirements-related change requests (as opposed to defects found in testing or inspections)
Measures for the project manager
There are several metrics the project manager may use to get an objective measure of the state of a project and, if necessary, take corrective action. Alternatively, the metrics may indicate the project is ahead of plan and may be able to deliver more business value than originally anticipated. While the metrics below are readily extracted if the members of the project team (analysts, developers, testers, project manager, etc.) are updating information about the requirements using a requirements management tool, the metrics need to be interpreted within the context of the project and where it is in its lifecycle.
1. Number of requirements by owner/responsible person
These metrics indicate the workload of various people on the project. The project manager can use the information to determine whether the project could benefit by shifting some of the workload. It can also be used to determine whether the right people are assigned to specifying or implementing the most important requirements.
2. Number of requirements by status/total number of requirements
The set of requirements for a project is constantly in flux, especially during the earlier phases of development. Some requirements may have been approved and will
be incorporated into the product being developed. One or more stakeholders may have proposed others, but there is not yet agreement about whether they will be included in the product. Other requirements will be in various stages of development (e.g., being worked on, coding is complete, validated by testing, completed). Still others may be on hold pending clarification of certain issues. Having a clear understanding of exactly what the state of each requirement is and where it is in the development process enables the project manager to effectively manage the project, avoid requirements and scope creep, and take corrective actions to deliver the project on time and within budget while assuring that all the critical business needs are satisfied.
3. Functional requirements allocated to a project release or iteration
Understanding exactly how many requirements, and which specific ones, are allocated to a release or iteration allows the project manager to successfully deliver the project on time with the most critical functionality. Making this information available to the team keeps everyone focused. It can also shorten development cycles by allowing the QA team to get an early start with test planning, test development, and establishing the appropriate test environment, while the code is being developed.
4. Requirements growth over time
Early in the lifecycle, this metric can help the project manager determine whether adequate progress is being made gathering and specifying the requirements. As the project progresses, unusual growth can be an indicator of scope creep. It may also be an indicator that there are opportunities to improve the way in which requirements are elicited and documented.
5. Number of requirements completed
This is an objective indicator of the number of requirements implemented, tested, and validated to date. Trends of requirements completed over time can also help measure how quickly the project is moving toward completion (e.g., the velocity) and whether the project team can include more functionality or is potentially over committed.
6. Number of requirements traced or not traced
Many projects adopt various levels of formal requirements traceability to help ensure the completeness of the system and understand the impact on other requirements,

designs, code, and tests should the requirements change. Understanding this impact can help the project better understand the cost of proposed changes and control the scope of the project so it can be successful within its cost and schedule constraints. If traceability is being adopted on a project, understanding which requirements are or are not traced can be a useful indicator of progress and completeness.
2.10.2 Measuring the benefits of requirements management
Several independent studies confirm that requirement errors are the most frequent project errors. These errors precipitate defects in architecture, design, and implementation. If the resulting software errors are not detected during testing, they most certainly will be detected post-launch, and their business impact could be severe. In either case, they lead to costly changes for the project and can result in scrapping or reworking significant parts of the application. Good requirements management, as part of an overall requirements process, can reduce the number of defects, reduce project costs pre- and post-launch, and improve the overall quality of the product. The following metrics can be indicators of the benefits of requirements management. The first two metrics, when combined with other measures, can be used to calculate a monetary return.
1. Trend of post-launch defects over time
2. Trend of the number of change requests for rework, both pre-launch and post-launch
3. Customer satisfaction surveys
Having studied requirements engineering in detail, the next step of the SDLC is the estimation phase. The topics to be discussed there include software cost, effort and schedule estimation, the techniques used for this estimation, and, in brief, software configuration management and software quality assurance.
Q 2.10.2 Questions
1. What are requirement metrics?
2. What is the need to measure the requirements?
3. List the various requirement metric measures.
4. Explain the requirements metrics briefly.
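To make the simpler measures in this section concrete, the sketch below computes requirements by status, the traced count and the number of requirements-related change requests from a small, hypothetical export of requirement and change-request records; the record format is an assumption made purely for illustration.

from collections import Counter

# Hypothetical requirement records, e.g. exported from a requirements tool.
requirements = [
    {"id": "R1", "status": "approved",  "traced": True},
    {"id": "R2", "status": "proposed",  "traced": False},
    {"id": "R3", "status": "validated", "traced": True},
    {"id": "R4", "status": "approved",  "traced": False},
]

# Hypothetical change requests; those rooted in requirement errors are flagged.
change_requests = [
    {"id": "CR1", "requirements_related": True},
    {"id": "CR2", "requirements_related": False},
    {"id": "CR3", "requirements_related": True},
]

status_counts = Counter(r["status"] for r in requirements)
traced = sum(1 for r in requirements if r["traced"])
req_related_crs = sum(1 for cr in change_requests if cr["requirements_related"])

print("Requirements by status:", dict(status_counts))
print(f"Traced requirements: {traced}/{len(requirements)}")
print(f"Requirements-related change requests: {req_related_crs}/{len(change_requests)}")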

REFERENCES
1. Software Engineering: A Practitioner's Approach, by Roger S. Pressman, McGraw-Hill International, 6th edition, 2005.
2. Requirements Engineering: A Good Practice Guide, by Ian Sommerville and Peter Sawyer.
3. http://www.cs.wustl.edu/~schmidt/PDF/design-principles4.pdf
4. http://www.oodesign.com/
5. http://scitec.uwichill.edu.bb/cmp/online/cs22l/design_concepts_and_principles.htm
6. http://www.cs.umd.edu/~vibha/vibha-thesis.pdf


UNIT III
3 INTRODUCTION
Software development is a highly labor-intensive activity. A large software project may involve hundreds of people and span many years. A project of this dimension can easily turn into chaos if proper management controls are not imposed. To complete the project successfully, the large workforce has to be properly organized so that the entire workforce is contributing effectively and efficiently to the project. Project management controls and checkpoints are required for effective project monitoring. Controlling the development, ensuring quality, and satisfying the constraints of the selected process model all require careful management of the project.
3.1 LEARNING OBJECTIVES
1. What is meant by planning of a software project?
2. Why is it important to plan a software project?
3. What are the various methods of planning a software project?
4. What role does cost estimation play in the planning of the software project?
5. What are the various types of project scheduling?
6. How important is staffing and personnel planning in the software project estimation?
7. What is the need for software configuration management plans?
8. What are Quality Assurance plans? How important are they?
9. What is the need for Risk Management in the initial planning phase of the project?
3.2 PLANNING A SOFTWARE PROJECT
For a successful project, competent management and technical staff are both essential. Lack of any one can cause a project to fail. Traditionally, computer professionals have attached little importance to management and have placed greater emphasis on technical skills. This is one of the reasons there is a shortage of competent

project managers for software projects. Although the actual management skills can only be acquired by actual experience, some of the principles that have proven to be effective can be taught. We have seen that project management activities can be viewed as having three major phases: project planning, project monitoring and control, and project termination. Broadly speaking, planning entails all activities that must be performed before starting the development work. Once the project is started, project control begins. In other words, during planning all the activities that management needs to perform are planned, while during project control the plan is executed and updated. Planning may be the most important management activity. Without a proper plan, no real monitoring or controlling of the project is possible. Planning may also be perhaps the weakest activity in many software projects, and many failures caused by mismanagement can be attributed to lack of proper planning. One of the reasons for improper planning is the old thinking that the major activity in a software project is designing and writing code. Consequently, people who make software tend to rush toward implementation and do not spend time and effort planning. No amount of technical effort later can compensate for lack of careful planning. Lack of proper planning is a sure ticket to failure for a large software project. For this reason, we treat project planning as an independent chapter. The basic goal of planning is to look into the future, identify the activities that need to be done to complete the project successfully, and plan the scheduling and resource allocation for these activities. Ideally, all future activities should be planned. A good plan is flexible enough to handle the unforeseen events that inevitably occur in a large project. Economic, political and personal factors should be taken into account for a realistic plan and thus for a successful project. The input to the planning activity is the requirements specification. A very detailed requirements document is not essential for planning, but for a good plan all the important requirements must be known. The output of this phase is the project plan, which is a document describing the different aspects of the plan. The project plan is instrumental in driving the development process through the remaining phases. The major issues the project plan addresses are:
1. Cost estimation
2. Schedule and milestones


3. Personnel plan
4. Software quality assurance plans
5. Configuration management plans
6. Project monitoring plans
7. Risk management


Q3.2 Questions
1. What is the need for project planning?
2. What are the basic goals of project planning?
3. What are the major issues that the project plan addresses?
3.3 COST ESTIMATION
For a given set of requirements, it is desirable to know how much it will cost to develop the software to satisfy the given requirements, and how much time the development will take. These estimates are needed before development is initiated. The primary reason for cost and schedule estimation is to enable the client or developer to perform a cost-benefit analysis and to support project monitoring and control. A more practical use of these estimates is in bidding for software projects, where the developers must give cost estimates to a potential client for the development contract. For a software development project, detailed and accurate cost and schedule estimates are essential prerequisites for managing the project. Otherwise, even simple questions, such as whether the project is late, whether there are cost overruns, and when the project is likely to be completed, cannot be answered. Cost and schedule estimates are also required to determine the staffing level for a project during different phases. It can be safely said that cost and schedule estimates are fundamental to any form of project management and are generally always required for a project. Cost in a project is due to the requirements for software, hardware and human resources. Hardware resources are such things as the computer time, terminal time and memory required for the project, whereas software resources include the tools and compilers needed during development. The bulk of the cost of software development is due to the human resources needed, and most cost estimation procedures focus on this aspect. Estimates can be based on the subjective opinion of some person or determined through the use of models. Though there are approaches to structure the opinions of persons for achieving a consensus on the cost estimate, it is generally accepted that it is important to have a more scientific approach to estimation through the use of models.

3.3.1 Uncertainties in Cost Estimation
One can perform cost estimation at any point in the software life cycle. As the cost of the project depends on the nature and characteristics of the project, at any point the accuracy of the estimate will depend on the amount of reliable information we have about the final product. Clearly, when the product is delivered, the cost can be accurately determined, as all the data about the project and the resources spent can be fully known by then. This is cost estimation with complete knowledge about the project. At the other extreme is the point when the project is being initiated or during the feasibility study. At this time, we have only some idea of the classes of data the system will get and produce and of the major functionality of the system. There is a great deal of uncertainty about the actual specification of the system. Specifications with uncertainty represent a range of possible final products, not one precisely defined product. Hence, cost estimation based on this type of information cannot be accurate. Estimates at this phase of the project can be off by as much as a factor of four from the actual final cost. Despite the limitations, cost estimation models have matured considerably and generally give fairly accurate estimates. For example, when the COCOMO model was checked with data from some projects, it was found that the estimates were within 20% of the actual cost most of the time. It should also be mentioned that achieving a cost estimate within 20% after the requirements have been specified is actually quite good. With such an estimate, there is generally enough slack available that it can be used to meet the targets set for the project based on the estimates. In other words, if the estimate is within 20%, the effect of this inaccuracy will not even be reflected in the final cost and schedule.
3.3.2 Building Cost Estimation Models
Let us turn our attention to the nature of cost estimation models and how these models are built. Any cost estimation model can be viewed as a function that outputs the cost estimate. As the cost of a project depends on the nature of the project, clearly this cost estimation function will need inputs about the project, from which it can produce the estimate. The basic idea of having a model or procedure for cost estimation is that it reduces the problem of estimation to estimating or determining the value of the key parameters that characterize the project, based on which the cost can be estimated. The problem of estimation, not yet fully solved, is determining which key parameters have values that can be easily determined, and how to get the cost estimate from the values of these parameters.


Though the cost for a project is a function of many parameters, it is generally agreed that the primary factor controlling the cost is the size of the project: the larger the project, the greater the cost and resource requirements, with other factors then acting as modifiers. Software engineering cost (and schedule) models and estimation techniques are used for a number of purposes. These include:
1. Budgeting: the primary but not the only important use. Accuracy of the overall estimate is the most desired capability.
2. Tradeoff and risk analysis: an important additional capability is to illuminate the cost and schedule sensitivities of software project decisions (scoping, staffing, tools, reuse, etc.).
3. Project planning and control: an important additional capability is to provide cost and schedule breakdowns by component, stage and activity.
4. Software improvement investment analysis: an important additional capability is to estimate the costs as well as the benefits of such strategies as tools, reuse, and process maturity.
Beyond regression, several papers [Briand et al. 1992; Khoshgoftaar et al. 1995] discuss the pros and cons of one software cost estimation technique versus another and present analysis results. In contrast, this section focuses on the classification of existing techniques into six major categories, as shown in figure 3.1, providing an overview with examples of each category. The following subsection examines in more depth the first category, comparing some of the more popular cost models that fall under model-based cost estimation techniques.
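As a concrete illustration of the size-driven view described above, the following minimal sketch applies a power-law model of the form effort = a * size^b and schedule = c * effort^d, using the widely published basic COCOMO organic-mode constants (a = 2.4, b = 1.05, c = 2.5, d = 0.38). The constants and the input size are quoted purely as an example of a model-based technique, not as values recommended by this text.

def basic_effort_estimate(size_kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Effort in person-months from size, using a power-law model (effort = a * size^b)."""
    return a * size_kloc ** b

def basic_schedule_estimate(effort_pm: float, c: float = 2.5, d: float = 0.38) -> float:
    """Nominal development time in months from effort (schedule = c * effort^d)."""
    return c * effort_pm ** d

if __name__ == "__main__":
    size = 32.0  # hypothetical project size in KLOC
    effort = basic_effort_estimate(size)
    schedule = basic_schedule_estimate(effort)
    print(f"Size: {size:.0f} KLOC -> effort ~ {effort:.1f} person-months, "
          f"schedule ~ {schedule:.1f} months")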


Figure 3.1: Software Estimation Techniques


3.3.3 Model-Based Techniques

As discussed above, quite a few software estimation models have been developed in the last couple of decades. Many of them are proprietary and hence cannot be compared and contrasted in terms of their model structure. Theory or experimentation determines the functional form of these models. This section discusses some of the more popular models, and Table 3.2 (presented at the end of this section) compares and contrasts these cost models based on the life-cycle activities covered and their input and output parameters.

Putnam's Software Life-cycle Model (SLIM)

Larry Putnam of Quantitative Software Management developed the Software Life-cycle Model (SLIM) in the late 1970s [Putnam and Myers 1992]. SLIM is based on Putnam's analysis of the life cycle in terms of a so-called Rayleigh distribution of project personnel level versus time.


Figure 3.2: The Rayleigh Model

SLIM supports most of the popular size estimating methods, including ballpark techniques, source instructions, function points, etc. It makes use of the Rayleigh curve to estimate project effort, schedule and defect rate. A Manpower Buildup Index (MBI) and a Technology Constant or Productivity Factor (PF) are used to influence the shape of the curve. SLIM can record and analyze data from previously completed projects, which are then used to calibrate the model; if such data are not available, a set of questions can be answered to obtain values of MBI and PF from the existing database.
In SLIM, productivity is used to link the basic Rayleigh manpower distribution model to the software development characteristics of size and technology factors. Productivity, P, is the ratio of software product size, S, to development effort, E. That is,

P = S / E

The Rayleigh curve used to define the distribution of effort is modeled by the differential equation (in its commonly quoted form)

dy/dt = 2Kat exp(-at^2)

where K is the total life-cycle effort, a is a shape parameter, and t is elapsed time. An example is given in Figure 3.2, with K = 1.0, a = 0.02 and td = 0.18, where Putnam assumes that the peak staffing level in the Rayleigh curve corresponds to the development time td. Different values of K, a and td give different sizes and shapes of the Rayleigh curve. Some of the Rayleigh curve assumptions do not always hold in practice (e.g., flat staffing curves for incremental development; less than t^4 effort savings for long schedule stretch-outs). To alleviate this problem, Putnam has developed several model adjustments for these situations. More recently, Quantitative Software Management has developed a set of three tools based on Putnam's SLIM: SLIM-Estimate, a project planning tool; SLIM-Control, a project tracking and oversight tool; and SLIM-Metrics, a software metrics repository and benchmarking tool.
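A minimal sketch, assuming the standard relationship a = 1/(2·td²) between the shape parameter and the peak (development) time, shows how the Rayleigh staffing profile can be tabulated. The project values K = 400 person-months and td = 20 months are invented for illustration only.

    import math

    def rayleigh_staffing(t, K, td):
        # Staffing rate at time t for a Rayleigh curve with total life-cycle
        # effort K and peak (development) time td, using
        # dy/dt = 2*K*a*t*exp(-a*t**2) with a = 1 / (2 * td**2).
        a = 1.0 / (2.0 * td ** 2)
        return 2.0 * K * a * t * math.exp(-a * t ** 2)

    # Tabulate the profile for an assumed project.
    for month in range(0, 41, 5):
        print(month, round(rayleigh_staffing(month, K=400, td=20), 1))

The staffing level rises, peaks at td, and then tails off, which is the behaviour the SLIM model exploits when trading schedule against effort.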

Checkpoint

Checkpoint is a knowledge-based software project estimating tool from Software Productivity Research (SPR), developed from Capers Jones' studies [Jones 1997]. It has a proprietary database of about 8,000 software projects, and it focuses on four areas that need to be managed to improve software quality and productivity. It uses Function Points (or Feature Points) [Albrecht 1979; Symons 1991] as its primary size input.

Estimation: Checkpoint predicts effort at four levels of granularity: project, phase, activity, and task. Estimates also include resources, deliverables, defects, costs, and schedules.

Measurement: Checkpoint enables users to capture project metrics to perform benchmark analysis, identify best practices, and develop internal estimation knowledge bases (known as Templates).

Assessment: Checkpoint facilitates the comparison of actual and estimated performance against various industry standards included in the knowledge base. Checkpoint also evaluates the strengths and weaknesses of the software environment. Process improvement recommendations can be modeled to assess the costs and benefits of implementation.

3.3.4 Functionality-Based Estimation Models

As described above, Checkpoint uses function points as its main input parameter. There is a lot of other activity going on in the area of functionality-based estimation that deserves to be mentioned in this chapter. One of the more recent projects is the COSMIC (Common Software Measurement International Consortium) project. Since the launch of the COSMIC initiative in November 1999, an international team of software metrics experts has been working to establish the principles of the new method, which is expected to draw on the best features of existing models. Since function points are believed to be more useful in the MIS domain and problematic in the real-time software domain, another recent effort in functionality-based estimation is Full Function Points (FFP), a measure specifically adapted to real-time and embedded software. The latest COSMIC-FFP version 2.0 uses a generic software model adapted for the purpose of functional size measurement, a two-phase approach to functional size measurement (mapping and measurement), a simplified set of base functional components (BFC) and a scalable aggregation function.

PRICE-S

The PRICE-S model, from PRICE Systems, consists of three sub models: acquisition, sizing and life-cycle cost.

The Acquisition Sub model: This sub model forecasts software costs and schedules. It covers all types of software development, including business systems, communications, command and control, avionics, and space systems. PRICE-S addresses current software issues such as reengineering, code generation, spiral development, rapid development, rapid prototyping, object-oriented development, and software productivity measurement.

The Sizing Sub model: This sub model facilitates estimating the size of the software to be developed. Sizing can be in SLOC, Function Points and/or Predictive Object Points (POPs). POPs is a new way of sizing object-oriented development projects and was introduced in [Minkiewicz 1998], based on previous work done in object-oriented (OO) metrics by Chidamber et al. and others [Chidamber and Kemerer 1994; Henderson-Sellers 1996].

The Life-cycle Cost Sub model: This sub model is used for rapid and early costing of the maintenance and support phase of the software. It is used in conjunction with the Acquisition Sub model, which provides the development costs and design parameters. PRICE Systems continues to update the model to meet new challenges. Recently, they have added Foresight 2.0, the newest version of their software solution for forecasting time, effort and costs for commercial and non-military government software projects.

ESTIMACS

Originally developed by Howard Rubin in the late 1970s as Quest (Quick Estimation System), it was subsequently integrated into the Management and Computer Services (MACS) line of products as ESTIMACS [Rubin 1983]. It focuses on the development phase of the system life cycle, maintenance being deferred to later extensions of the tool. ESTIMACS stresses approaching the estimating task in business terms. It also stresses the need to be able to do sensitivity and trade-off analyses early on, not only for the project at hand, but also for how the current project will fold into the long-term mix, or portfolio, of projects on the developer's plate for up to the next ten years, in terms of staffing/cost estimates and associated risks. Rubin has identified six important dimensions of estimation and a map showing their relationships, all the way from what he calls the gross business specifications through to their impact on the developer's long-term projected portfolio mix. The critical estimation dimensions are:

1. Effort hours
2. Staff size and deployment
3. Cost
4. Hardware resource requirements
5. Risk
6. Portfolio impact

Figure 3.3: Rubin's Map of the Relationships among Estimation Dimensions

The basic premise of ESTIMACS is that the gross business specifications, or project factors, drive the estimate dimensions. Rubin defines project factors as aspects of the business functionality of the target system that are well defined early on, in a business sense, and are strongly linked to the estimate dimensions. The important project factors that inform each estimation dimension are shown in Table 3.1.

Table 3.1: Estimation Dimensions and corresponding Project Factors


The items in Table 3.1 form the basis of the five sub models that comprise ESTIMACS. The sub models are designed to be used sequentially, with outputs from one often serving as inputs to the next. Overall, the models support an iterative approach to final estimate development, illustrated by the following list:

1. Data input/estimate evolution
2. Estimate summarization
3. Sensitivity analysis
4. Revision of step 1 inputs based upon the results of step 3

The ESTIMACS sub models, in order of intended use, are:

System Development Effort Estimation: this model estimates development effort as total effort hours. It uses as inputs the answers to twenty-five questions, eight related to the project organization and seventeen related to the system structure itself. The broad project factors covered by these questions include developer knowledge of
the application area, the complexity of the customer organization, and the size, sophistication and complexity of the new system being developed. Applying the answers to the twenty-five input questions to a customizable database of life-cycle phases and work distribution, the model provides outputs of project effort in hours distributed by phase, and an estimation bandwidth as a function of project complexity. It also outputs project size in function points, to be used as a basis for comparing the relative sizes of systems.

Staffing and Cost Estimation: this model takes as input the effort hours distributed by phase derived in the previous model. Other inputs include employee productivity and salaries by grade. It applies these inputs to an again customizable work distribution life-cycle database. Outputs of the model include team size, staff distribution and cost, all distributed by phase, peak load values and costs, and cumulative cost.

Hardware Configuration Estimates: this model sizes the operational resource requirements of the hardware in the system being developed. Inputs include application type, operating windows, and expected transaction volumes. Outputs are the estimates of required processor power by hour plus peak channel and storage requirements, based on a customizable database of standard processors and device characteristics.

Risk Estimator: based mainly on a case study done by the Harvard Business School [Cash 1979], this model estimates the risk of successfully completing the planned project. Inputs to this model include the answers to some sixty questions, half of which are derived from use of the three previous sub models. These questions cover the project factors of project size, structure and associated technology. Outputs include elements of project risk with associated sensitivity analysis identifying the major contributors to those risks.

COCOMO II

The COCOMO (Constructive Cost Model) cost and schedule estimation model was originally published in [Boehm 1981]. It became one of the most popular parametric cost estimation models of the 1980s. But COCOMO 81, along with its 1987 Ada update, experienced difficulties in estimating the costs of software developed to new life-cycle processes and capabilities. The COCOMO II research effort was started in 1994 at USC to address issues such as non-sequential and rapid development process models, reengineering, reuse-driven approaches and object-oriented approaches. COCOMO II was initially published in the Annals of Software Engineering in 1995 [Boehm et al. 1995]. The model has three sub models, Application Composition,

Early Design and Post-Architecture, which can be combined in various ways to deal with the current and likely future software practices marketplace.

The Application Composition model is used to estimate effort and schedule on projects that use integrated Computer Aided Software Engineering tools for rapid application development.

The Early Design model involves the exploration of alternative system architectures and concepts of operation. Typically, not enough is known at this point to make a detailed fine-grain estimate. This model is based on function points (or lines of code, when available) and a set of five scale factors and seven effort multipliers.

The Post-Architecture model is used when top-level design is complete and detailed information about the project is available; as the name suggests, the software architecture is well defined and established. It estimates for the entire development life cycle and is a detailed extension of the Early Design model.

A primary attraction of the COCOMO models is their fully available internal equations and parameter values. Over a dozen commercial COCOMO 81 implementations are available; one (Costar) also supports COCOMO II.

3.3.5 Summary of Model-Based Techniques

Model-based techniques are good for budgeting, tradeoff analysis, planning and control, and investment analysis. As they are calibrated to past experience, their primary difficulty is with unprecedented situations.

Table 3.2: Activities Covered/ Factors explicitly considered by various models


Group / Factor | SLIM | Checkpoint | PRICE-S | ESTIMACS | SEER-SEM | SELECT Estimator | COCOMO II

Size Attributes
Source Instructions | YES | YES | YES | NO | YES | NO | YES
Function Points | YES | YES | YES | YES | YES | NO | YES
OO-related metrics | YES | YES | YES | ? | YES | YES | YES

Program Attributes
Type/Domain | YES | YES | YES | YES | YES | YES | NO
Complexity | YES | YES | YES | YES | YES | YES | YES
Language | YES | YES | YES | ? | YES | YES | YES
Reuse | YES | YES | YES | ? | YES | YES | YES
Required Reliability | ? | ? | YES | YES | YES | NO | YES

Computer Attributes
Resource Constraints | YES | ? | YES | YES | YES | NO | YES
Platform Volatility | ? | ? | ? | ? | YES | NO | YES

Personnel Attributes
Personnel Capability | YES | YES | YES | YES | YES | YES | YES
Personnel Continuity | ? | ? | ? | ? | ? | NO | YES
Personnel Experience | YES | YES | YES | YES | YES | NO | YES

Project Attributes
Tools and Techniques | YES | YES | YES | YES | YES | YES | YES
Breakage | YES | YES | YES | ? | YES | YES | YES
Schedule Constraints | YES | YES | YES | YES | YES | YES | YES
Process Maturity | YES | YES | ? | ? | YES | NO | YES
Team Cohesion | ? | YES | YES | ? | YES | YES | YES
Security Issues | ? | ? | ? | ? | YES | NO | NO
Multisite Development | ? | YES | YES | YES | YES | NO | YES

Activities Covered
Inception | YES | YES | YES | YES | YES | YES | YES
Elaboration | YES | YES | YES | YES | YES | YES | YES
Construction | YES | YES | YES | YES | YES | YES | YES
Transition and Maintenance | YES | YES | YES | NO | YES | NO | YES

(YES = factor explicitly considered or activity covered; NO = not considered; ? = unclear from the available model documentation.)

3.3.6 Expertise-Based Techniques

Expertise-based techniques are useful in the absence of quantified, empirical data. They capture the knowledge and experience of practitioners seasoned within a domain of interest, providing estimates based upon a synthesis of the known outcomes of all the past projects to which the expert is privy or in which he or she participated. The obvious drawback to this method is that an estimate is only as good as the expert's opinion, and there is usually no way to test that opinion until it is too late to correct the damage if that opinion proves wrong. Years of experience do not necessarily translate into high levels of competency. Moreover, even the most highly competent of individuals will sometimes simply guess wrong. Two techniques have been developed which capture expert judgment but also take steps to mitigate the possibility that the judgment of any one expert will be off: the Delphi technique and the Work Breakdown Structure.

Delphi Technique

The Delphi technique [Helmer 1966] was developed at The Rand Corporation in the late 1940s, originally as a way of making predictions about future events - thus its name, recalling the divinations of the Greek oracle of antiquity, located on the southern flank of Parnassos at Delphi. More recently, the technique has been used as a means of guiding a group of informed individuals to a consensus of opinion on some issue. Participants are asked to make an assessment regarding an issue, individually in a preliminary round, without consulting the other participants in the exercise. The first-round results are then collected, tabulated, and returned to each participant for a second round, during which the participants are again asked to make an assessment regarding the same issue, but this time with knowledge of what the other participants did in the first round. The second round usually results in a narrowing of the range of assessments by the group, pointing to some reasonable middle ground regarding the issue of concern. The original Delphi technique avoided group discussion; the Wideband Delphi technique [Boehm 1981] accommodates group discussion between assessment rounds.

Work Breakdown Structure (WBS)

The WBS is a way of organizing project elements into a hierarchy that simplifies the tasks of budget estimation and control. It helps determine just exactly what costs are being estimated. Moreover, if probabilities are assigned to the costs associated
with each individual element of the hierarchy, an overall expected value can be determined from the bottom up for total project development cost [Baird 1989]. Expertise comes into play with this method in determining the most useful specification of the components within the structure and the probabilities associated with each component. Expertise-based methods are good for unprecedented projects and for participatory estimation, but they encounter the expertise-calibration problems discussed above and scalability problems for extensive sensitivity analyses. WBS-based techniques are good for planning and control.

A software WBS actually consists of two hierarchies, one representing the software product itself, and the other representing the activities needed to build that product [Boehm 1981]. The product hierarchy (Figure 3.5) describes the fundamental structure of the software, showing how the various software components fit into the overall system. The activity hierarchy (Figure 3.6) indicates the activities that may be associated with a given software component. Aside from helping with estimation, the other major use of the WBS is cost accounting and reporting. Each element of the WBS can be assigned its own budget and cost control number, allowing staff to report the amount of time they have spent working on any given project task or component, information that can then be summarized for management budget control purposes. Finally, if an organization consistently uses a standard WBS for all of its projects, over time it will accrue a very valuable database reflecting its software cost distributions. This data can be used to develop a software cost estimation model tailored to the organization's own experience and practices.

Figure 3.5: A Product Work Breakdown Structure
Software Application
    Component A
    Component B
        Subcomponent B1
        Subcomponent B2
    ...
    Component N

Figure 3.6: An Activity Work Breakdown Structure


Development Activities
    System Engineering
    Programming
        Detailed Design
        Code and Unit Test
    Maintenance
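To make the bottom-up expected-value idea concrete, the following sketch rolls up cost figures through a small WBS. Each leaf carries a list of (probability, cost) scenarios; the component names and the numbers are invented purely for illustration.

    def expected_cost(node):
        # Roll up expected cost bottom-up through a WBS.  A node is either a
        # dict of child nodes or, at a leaf, a list of (probability, cost) pairs.
        if isinstance(node, dict):
            return sum(expected_cost(child) for child in node.values())
        return sum(p * cost for p, cost in node)

    # Hypothetical product WBS with cost scenarios (in person-months) per leaf.
    wbs = {
        "Component A": [(0.2, 80), (0.6, 100), (0.2, 150)],
        "Component B": {
            "Subcomponent B1": [(0.5, 40), (0.5, 60)],
            "Subcomponent B2": [(0.3, 30), (0.7, 45)],
        },
    }
    print("Expected project cost:", expected_cost(wbs))

If every WBS element is also given a cost control number, the same structure can later be used to compare actuals against these estimates, which is the accounting use of the WBS described above.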

Q3.3 Questions

1. What are cost estimation models?
2. Explain the COCOMO II model used to estimate project cost.
3. Explain the ESTIMACS model in brief.
4. List the purposes of the estimation models and techniques.
5. Explain the SLIM model for cost estimation.
6. What is Checkpoint?
7. How is Checkpoint useful in estimating the cost of a given project?
8. Explain in detail the model-based techniques for software cost estimation.
9. Explain in detail the ESTIMACS model for cost estimation. Also, state its sub models.
10. Explain in detail the various cost estimation models.
11. Explain the Work Breakdown Structure in detail with an example.
12. How is the Delphi technique used in estimating the cost of a project?
13. Illustrate how expertise-based techniques are used in estimating the cost of a project.

3.4 PROJECT SCHEDULING

Schedule estimation and staff requirement estimation are perhaps the most important planning activities after cost estimation; the two are related, particularly if phase-wise cost is available. Here we discuss schedule estimation. The goal of schedule estimation is to determine the total duration of the project and the duration of its different phases.

First let us see why the schedule cannot be obtained simply from the person-months estimate. A schedule cannot be obtained from the overall effort estimate by deciding on an average staff size and then determining the total time requirement by dividing the total
effort by the average staff size. According to Brooks, man and months are interchangeable only for activities that require no communication among people, like sowing wheat or reaping cotton; this is not even approximately true of software. Obviously there is some relationship between the project duration and the staff time required for completing the project, but this relationship is not linear; to cut the project duration in half, doubling the staff-months will not work. The basic reason is that if the staff need to communicate to complete a task, then communication time must be accounted for, and communication time increases with the square of the number of staff. Hence, by increasing the staff on a project we may actually increase the time spent in communication. This is often restated as Brooks's law: adding manpower to a late project may make it later.

Average Duration Estimation

Single-variable models can be used to determine the overall duration of the project, with the constants a and b again determined from historical data. The IBM Federal Systems Division found that the total duration M, in calendar months, can be estimated by

M = 4.1 E^0.36

In COCOMO, the schedule is determined by using a single-variable model, just as is done for the initial effort estimate. However, instead of size, the factor used here is the effort estimate for the project. The equation for an organic type of software is

M = 2.5 E^0.38

For the other project types the constants vary only slightly. The duration, or schedule, of the different phases is obtained in the same manner as the effort distribution. The percentages of the schedule allocated to the different phases are shown in Table 3.3 below.

Table 3.3: Percentage of schedule allocated to the different phases, by product size

Phase | Small (2 KDSI) | Intermediate (8 KDSI) | Medium (32 KDSI) | Large (128 KDSI)
Product design | 19 | 19 | 19 | 19
Programming | 63 | 59 | 55 | 51
Integration | 18 | 22 | 26 | 30

In this COCOMO table, the detailed design, coding and unit testing phases are combined into a single programming phase. This is perhaps done because all of these activities are usually carried out by the same people, the programmers, whereas the other phases may also involve people who are not part of the programming activities of the project.
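As a small illustration of these duration equations, the sketch below computes the overall schedule from an effort estimate and splits it across phases using percentages of the kind shown in Table 3.3 (the 32 KDSI column is used here). The constants are the organic-mode values quoted above, and the effort figure is the one obtained in the worked example that follows.

    def schedule_months(effort_pm, c=2.5, d=0.38):
        # COCOMO-style duration equation: M = c * E**d, with E in person-months.
        return c * effort_pm ** d

    def phase_schedule(total_months, phase_percent):
        # Split the overall schedule across phases by percentage.
        return {phase: total_months * pct / 100.0 for phase, pct in phase_percent.items()}

    effort = 388                        # person-months (see the example below)
    total = schedule_months(effort)     # roughly 24 calendar months
    split = phase_schedule(total, {"Product design": 19, "Programming": 55, "Integration": 26})
    print(round(total, 1), {k: round(v, 1) for k, v in split.items()})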


An Illustrative Example of COCOMO


The example shown below uses function points and the COCOMO method to estimate the size (in lines of C code) and the effort (in person-months) of a software project described by the following counts.

Measurement Parameter | Count | Weight Factor (simple / average / complex)
Number of user inputs | 10 | 3 or 4 or 6
Number of user outputs | 15 | 4 or 5 or 7
Number of user inquiries | 8 | 3 or 4 or 6
Number of files | 25 | 7 or 10 or 15
Number of external interfaces | 6 | 5 or 7 or 10

Solution:

Step 1. Compute the UFC (unadjusted function-point count) using the weights chosen for this project.

Measurement Parameter | Count | Chosen Weight | Weighted Count
Number of user inputs | 10 | 6 | 60
Number of user outputs | 15 | 5 | 75
Number of user inquiries | 8 | 6 | 48
Number of files | 25 | 15 | 375
Number of external interfaces | 6 | 7 | 42

UFC = 60 + 75 + 48 + 375 + 42 = 600

Step 2. Compute the FP (function point) count, given that the total complexity adjustment value is 35.

FP = UFC * (0.65 + 0.01 * 35) = 600 * (0.65 + 0.35) = 600

Step 3. Compute the code size in C, assuming an average of 128 lines of code per FP.

Code size = 128 * 600 = 76,800 LOC

Step 4. Estimate the effort, given that the project's complexity is moderate (the semi-detached COCOMO mode).

E = 3.0 * (76.8)^1.12 = 3 * 129.3 = 388 person-months

It can be noticed that, in this example, the function-point count works out to 600, the size of the code is estimated as 76,800 LOC, and the effort needed is estimated as about 388 person-months.
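The arithmetic of Steps 1-4 can be reproduced with a few lines of code; the sketch below simply restates the calculation using the counts, weights and constants chosen in the example.

    # Step 1: unadjusted function-point count (counts and chosen weights from the example).
    counts_and_weights = [(10, 6), (15, 5), (8, 6), (25, 15), (6, 7)]
    ufc = sum(count * weight for count, weight in counts_and_weights)   # 600

    # Step 2: function points, with a total complexity adjustment value of 35.
    fp = ufc * (0.65 + 0.01 * 35)                                       # 600

    # Step 3: code size, assuming 128 lines of C per function point.
    loc = 128 * fp                                                      # 76800

    # Step 4: effort for a moderately complex (semi-detached) project.
    effort_pm = 3.0 * (loc / 1000) ** 1.12                              # about 388 person-months
    print(ufc, fp, loc, round(effort_pm))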
Project Scheduling and Milestones

Once we have the estimates of the effort and time requirements for the different phases, a schedule for the project can be prepared. This schedule will then be used later for monitoring the progress of the project.

A conceptually simple and effective scheduling technique is the Gantt chart, which uses a calendar-oriented chart to represent the project schedule. Each activity is represented as a bar in the calendar, starting from the start date of the activity and ending at its end date. The start and end of each activity become milestones for the project. Progress can be represented easily in a Gantt chart by ticking off each milestone as it is completed. Alternatively, for each activity another bar can be drawn specifying when the activity actually started and ended, i.e., when these two milestones were actually achieved.

The main drawback of the Gantt chart is that it does not depict the dependency relations among the different activities. Hence the effect of slippage in one activity on other activities, or on the overall project schedule, cannot be determined. However, it is conceptually simple, easy to understand, and heavily used; it is sufficient for small and medium-sized projects. For large projects, the dependencies among activities are important in order to determine which activities are critical, whose completion should not be delayed, and which activities are not critical. To represent the dependencies, PERT charts are often used. A PERT chart is a graph-based chart that can be used to determine the activities that form the critical path, which, if delayed, will cause the overall project to be delayed. The PERT chart is not conceptually as simple as the Gantt chart, and its representation is graphically not as clear, but its use is well justified in large projects. We will use Gantt charts for schedule planning.

PERT Chart

PERT is the abbreviation of Program Evaluation and Review Technique. Through PERT, complex projects can be blueprinted as a network of activities and events (an activity network diagram). PERT charts are used for project scheduling. They allow software planners and individuals to:

1. Determine the critical path a project must follow.
2. Establish most-likely time estimates for individual tasks by applying statistical models.
3. Calculate boundary times that define a time window for a particular task.

How to create a PERT chart:

1. Make a list of the project tasks.
2. Assign a task identification letter to each task.
3. Determine the duration time for each task.
4. Draw the PERT network, number each node, label each task with its task identification letter, connect each node from start to finish, and put each task's duration on the network.
5. Determine the need for any dummy tasks.
6. Determine the earliest completion time for each task node.
7. Determine the latest completion time for each task node.
8. Verify the PERT network for correctness.

Slack Time Calculation

Slack time is calculated for each node by subtracting its earliest completion time (ECT) from its latest completion time (LCT). The critical path runs through the nodes that have zero slack time.

Optimistic time - generally the shortest time in which the activity can be completed. It is common practice to specify optimistic times to be three standard deviations from the mean, so that there is approximately a 1% chance that the activity will be completed within the optimistic time.

Most likely time - the completion time having the highest probability. Note that this time is different from the expected time.

Pessimistic time - the longest time that an activity might require. Three standard deviations from the mean is commonly used for the pessimistic time.
Formulas

Expected time = (Optimistic + 4 x Most likely + Pessimistic) / 6
Variance = [(Pessimistic - Optimistic) / 6]^2

Determine the Critical Path

The critical path is determined by adding the times for the activities in each sequence and finding the longest path through the project. The critical path determines the total calendar time required for the project. If activities outside the critical path speed up or slow down (within limits), the total project time does not change. The amount of time that a non-critical-path activity can be delayed without delaying the project is referred to as slack time.

If the critical path is not immediately obvious, it may be helpful to determine the following four quantities for each activity:

ES - Earliest Start time
EF - Earliest Finish time
LS - Latest Start time
LF - Latest Finish time

These times are calculated using the expected time for the relevant activities. The earliest start and finish times of each activity are determined by working forward through the network, finding the earliest time at which an activity can start and finish given its predecessor activities. The latest start and finish times are the latest times at which an activity can start and finish without delaying the project; LS and LF are found by working backward through the network. The difference between the latest and earliest finish of an activity is that activity's slack. The critical path, then, is the path through the network in which none of the activities have slack.

The variance in the project completion time can be calculated by summing the variances of the completion times of the activities on the critical path. Given this variance, one can calculate the probability that the project will be completed by a certain date, assuming a normal probability distribution for the critical path. The normal distribution assumption holds if the number of activities in the path is large enough for the central limit theorem to be applied.
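The two formulas translate directly into code. The sketch below computes the expected time and variance for a set of critical-path activities and then, under the normal assumption described above, approximates the probability of finishing by a target date. The activity figures and the target are made up for illustration.

    import math

    def pert_expected(optimistic, most_likely, pessimistic):
        return (optimistic + 4 * most_likely + pessimistic) / 6.0

    def pert_variance(optimistic, pessimistic):
        return ((pessimistic - optimistic) / 6.0) ** 2

    # Hypothetical critical-path activities: (optimistic, most likely, pessimistic) in weeks.
    critical_path = [(2, 4, 8), (3, 5, 9), (1, 2, 3)]
    mean = sum(pert_expected(o, m, p) for o, m, p in critical_path)
    variance = sum(pert_variance(o, p) for o, _, p in critical_path)

    target = 14.0                                          # target completion, in weeks
    z = (target - mean) / math.sqrt(variance)
    probability = 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF
    print(round(mean, 2), round(variance, 2), round(probability, 2))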

Since the critical path determines the completion date of the project, adding the resources required to decrease the time for the activities on the critical path can accelerate the project. Such a shortening of the project is sometimes referred to as project crashing.

Update as the Project Progresses

Make adjustments in the PERT chart as the project progresses. As the project unfolds, the estimated times can be replaced with actual times. In cases where there are delays, additional resources may be needed to stay on schedule, and the PERT chart may be modified to reflect the new situation.

PERT strengths

The PERT network is continuously useful to project managers prior to and during a project. It is straightforward in its concept and is supported by software. Its graphical representation of the project's tasks helps to show the task interrelationships. Its ability to highlight the project's critical path and task slack time allows the project manager to focus attention on the critical aspects of the project: time, costs and people. The project management software that creates the PERT network usually provides excellent project tracking documentation. The PERT network is applicable to a wide variety of projects.

PERT weaknesses

For the PERT network to be useful, project tasks have to be clearly defined, as do their relationships to each other. The PERT network does not deal very well with task overlap; it assumes that a following task begins after its preceding tasks end. The PERT network is only as good as the time estimates entered by the project manager. By design, the project manager will normally focus more attention on the critical-path tasks than on other tasks, which could be problematic for near-critical-path tasks if they are overlooked.

An Illustrative Example: PERT Chart showing Dependency Information

A PERT chart of this kind displays the type of dependency directly on the dependency line itself. Notice also that Start-to-Start and Finish-to-Finish dependencies connect to the left and right edges of the PERT boxes. This means that a Start-to-Start (SS) dependency comes from the left edge of one box into the left edge of the other box.

Critical Path Method (CPM)

The Critical Path Method (CPM) is one of several related techniques for doing project planning. CPM is for projects that are made up of a number of individual activities. If some of the activities require other activities to finish before they can start, then the project becomes a complex web of activities. CPM can help you figure out:

- How long your complex project will take to complete
- Which activities are critical, meaning that they have to be done on time or else the whole project will take longer

If you put in information about the cost of each activity, and how much it costs to speed up each activity, CPM can help you figure out:

- Whether you should try to speed up the project, and, if so,
- What is the least costly way to speed up the project

Activities

An activity is a specific task. It gets something done. An activity can have these properties:

- Names of any other activities that have to be completed before this one can start
- A projected normal time duration


If you want to do a speedup cost analysis, you also have to know these things about each activity:

- A cost to complete
- A shorter time to complete on a crash basis
- The higher cost of completing it on a crash basis

CPM analysis starts after you have figured out all the individual activities in your project.

CPM Analysis Steps, By Example

This section describes the steps for doing CPM analysis using an example. It is recommended that you work through the example so that you can follow the steps.

Activities, precedence, and times


This example involves activities, their precedence (which activities come before other activities), and the times the activities take. The objective is to identify the critical path and figure out how much time the whole project will take.

Step 1: List the activities

CPM analysis starts with a table showing each activity in your project. For each activity, you need to know which other activities must be done before it starts, and how long the activity takes. Here is the example:

Activity | Description | Required Predecessor | Duration (months)
A | Product design | (None) | 5
B | Market research | (None) | 1
C | Production analysis | A | 2
D | Product model | A | 3
E | Sales brochure | A | 2
F | Cost analysis | C | 3
G | Product testing | D | 4
H | Sales training | B, E | 2
I | Pricing | H | 1
J | Project report | F, G, I | 1

Step 2: Draw the diagram

Draw by hand a network diagram of the project that shows which activities follow which other ones. This can be tricky. The analysis method we'll be using requires an activity-on-arc (AOA) diagram. An AOA diagram has numbered nodes that represent stages of project completion. You make up the node numbers as you construct the diagram. You connect the nodes with arrows, or arcs, that represent the activities listed in the above table.

Some conventions about how to draw these diagrams:


- All activities with no predecessor come off of node 1.
- All activities with no successor point to the last node, which has to have the highest node number.

In this example, A and B are the two activities that have no predecessor. They are represented as arrows leading away from node 1. J is the one activity that has no successor in this example; it therefore points to the last node, which is node 8. If there were more than one activity with no successor, all of those activities' arrows would point to the highest-numbered node. The trickiest part of building the above diagram is figuring out what to do with activity H. Suppose an arrow for activity B is drawn coming off node 1 and going to node 3, and an arrow for activity E is later drawn coming off node 2 and going to node 6. Since H requires both B and E, the E arrow has to be erased and redrawn so that it points to the same node 3 that B does. H then comes off node 3 and goes to node 6.
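As a cross-check on the manual table calculation described next, the earliest and latest times and the slack for this example can also be computed with a short script. The sketch below works directly on the activity table in activity-on-node form (so no dummy activities are needed) rather than on the AOA diagram.

    activities = {   # activity: (duration in months, predecessors)
        "A": (5, []), "B": (1, []), "C": (2, ["A"]), "D": (3, ["A"]),
        "E": (2, ["A"]), "F": (3, ["C"]), "G": (4, ["D"]), "H": (2, ["B", "E"]),
        "I": (1, ["H"]), "J": (1, ["F", "G", "I"]),
    }

    # Forward pass: earliest start (ES) and earliest finish (EF).
    es, ef = {}, {}
    for act, (dur, preds) in activities.items():      # table is already in dependency order
        es[act] = max((ef[p] for p in preds), default=0)
        ef[act] = es[act] + dur

    project_length = max(ef.values())

    # Backward pass: latest finish (LF), latest start (LS) and slack.
    lf = {act: project_length for act in activities}
    for act in reversed(list(activities)):
        for p in activities[act][1]:
            lf[p] = min(lf[p], lf[act] - activities[act][0])
    ls = {act: lf[act] - activities[act][0] for act in activities}
    slack = {act: ls[act] - es[act] for act in activities}

    critical = [act for act in activities if slack[act] == 0]
    print("Project length:", project_length, "months; critical activities:", critical)

Run on this table, the script reports a 13-month project with A, D, G and J on the critical path, which is the answer the manual calculation should reproduce.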
Having completed the network, it is now straightforward to draw up the table and calculate the earliest start time, latest start time, earliest finish time, latest finish time and slack for each activity. The activities whose slack is zero are on the critical path, meaning that any delay in completing them would make the project slip from its schedule. Those activities are therefore the most important and need to be managed carefully so that they do not cause any delay.

Allocate Resources to the Tasks

The first step in building the project schedule is to identify the resources required to perform each of the tasks needed to complete the project. (Generating project tasks is discussed in more detail under the Wideband Delphi estimation process.) A resource is any person, item, tool, or service that is needed by the project and that is either scarce or has limited availability. Many project managers use the terms resource and person interchangeably, but people are only one kind of resource. The project could include computer resources (like shared computer room, mainframe, or server time), locations (training rooms, temporary office space), services (like time from contractors, trainers, or a support team), and special equipment that will be temporarily acquired for the project. Most project schedules only plan for human resources; the other kinds of resources are listed in the resource list, which is part of the project plan.

One or more resources must be allocated to each task. To do this, the project manager must first assign the task to people who will perform it. For each task, the project manager must identify one or more people on the resource list capable of doing that task and assign it to them. Once a task is assigned, the team member who is performing it is not available for other tasks until the assigned task is completed. While some tasks can be assigned to any team member, most can be performed only by certain people. If those people are not available, the task must wait.

Identify Dependencies

Once resources are allocated, the next step in creating a project schedule is to identify dependencies between tasks. A task has a dependency if it involves an activity, resource, or work product that is subsequently required by another task. Dependencies come in many forms: a test plan can't be executed until a build of the software is delivered; code might depend on classes or modules built in earlier stages; a user interface can't be built until the design is reviewed. If Wideband Delphi is used to generate estimates, many of these dependencies will already be represented in the assumptions. It is the project manager's responsibility to work with everyone on the engineering
team to identify these dependencies. The project manager should start by taking the WBS and adding dependency information to it: each task in the WBS is given a number, and the number of any task that it depends on is listed next to it as a predecessor. The figure below shows the four ways in which one task can be dependent on another.

Figure 3.6: Task Dependencies

Create the Schedule

Once the resources and dependencies are assigned, the scheduling software will arrange the tasks to reflect the dependencies. The software also allows the project manager to enter effort and duration information for each task; with this, it can calculate a final date and build the schedule. The most common form for the schedule to take is a Gantt chart. The following Figure 3.7 shows an example.

Figure 3.7: Gantt chart showing the dependencies among the various tasks
Each task is represented by a bar, and the dependencies between tasks are represented by arrows. Each arrow points either to the start or to the end of a task, depending on the type of predecessor. The black diamond between tasks D and E is a milestone, or a task with no duration. Milestones are used to show important events in the schedule. The black bar above tasks D and E is a summary task, which shows that these tasks are two subtasks of the same parent task. Summary tasks can contain other summary tasks as subtasks. For example, if the team used an extra Wideband Delphi session to decompose a task in the original WBS into subtasks, the original task should be shown as a summary task with the results of the second estimation session as its subtasks.

Q3.4 Questions

1. What is project scheduling?
2. What are the milestones in scheduling a project? Bring out their importance with an illustration.
3. State the dependencies in project scheduling.
4. Explain the Gantt chart in detail with an example.
5. Consider the development project for a travel agency and try to draw up its project schedule as an exercise.


3.5 STAFFING AND PERSONNEL PLANNING


Once the project schedule is determined and the effort and schedule of the different phases and tasks are known, staff requirements can be obtained. From the cost and the overall duration of the project, the average staff size for the project can be determined by dividing the total effort by the overall project duration (in months). This average staff size is not detailed enough for proper personnel planning, especially if the variation between the actual staff requirements at different phases is large. Typically, the staff requirement is small during the requirements and design phases, is at its maximum during implementation and testing, and then drops again during the final phases of integration and testing. Using the COCOMO model, the average staff requirement for the different phases can be determined, as the effort and schedule for each phase are known. This represents staffing as a step function over time.
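A small sketch of this step-function view of staffing follows: given phase-wise effort and schedule figures of the kind COCOMO produces, the average staff level in each phase is simply the phase effort divided by the phase duration. The figures used here are assumed values for illustration only.

    # Assumed phase-wise effort (person-months) and schedule (months) for a project.
    phases = {
        "Product design": (16, 4),
        "Programming":    (55, 10),
        "Integration":    (21, 5),
    }

    for phase, (effort_pm, months) in phases.items():
        avg_staff = effort_pm / months
        print(phase + ":", round(avg_staff, 1), "people for", months, "months")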
For personnel planning and scheduling, it is useful to have effort and schedule estimates for the subsystems and basic modules in the system. At planning time, when the system design has not been done, the planner can only expect to know about the major subsystems in the system, and perhaps the major modules within these subsystems. COCOMO can be used to determine the total effort estimate for the different subsystems or modules.

Detailed cost estimates: An approximate method, suitable for small systems, is to divide the total effort in the ratio of the sizes of the different components. A more accurate method, used in COCOMO, is to start with the sizes of the different components (and of the total system). The initial effort for the total system is determined. From this, the nominal productivity of the project is calculated by dividing the overall size by the initial effort. Using this productivity, the effort required for each module is determined by dividing its size by the nominal productivity. This gives an initial effort estimate for each module. For each module, the ratings of the different cost driver attributes are determined, and from these ratings the effort adjustment factor (EAF) for each module is obtained. Using the initial estimates and the EAFs, the final effort estimate of each module is determined. The final effort estimate for the overall system is obtained by adding the final estimates for the different modules. It should be kept in mind that these effort estimates for a module are obtained by treating the module like an independent system, thus including the effort required for design, integration and testing of the module. When used for personnel planning, this should be kept in mind if the effort for the design and integration phases is obtained separately.

Personnel plan: Once the schedule for the different activities and the average staff level for each activity are known, the overall personnel allocation for the project can be planned. This plan specifies how many people will be needed for the different activities at different times for the duration of the project. A method for producing the personnel plan is to make it a calendar-based representation, containing all the months in the duration of the project, by listing the months from the starting date to the ending date. Then, for each of the different tasks that have been identified, and for which cost and schedule estimates have been prepared, list the number of

people needed in each of the months. The total effort for each month and the total effort for each activity can easily be computed from this plan. The total for each activity should be the same as the overall person-months estimate.

Drawing up a personnel plan usually requires a few iterations to ensure that the effort requirement for the different phases and activities is consistent with the estimates obtained earlier. Ensuring consistency is made more difficult by the fact that the effort estimates for individual modules include the design and integration effort for those modules, and this effort is also included in the effort for those phases. It is usually not desirable to state staff requirements in units of less than half a person when making the plan consistent with the estimates, so some difference between the estimates and the totals in the personnel plan is acceptable.

Team structure

Often a team of people is assigned to a project. For the team to work as a cohesive group and contribute the most to the project, the people in the team have to be organized in some manner. The structure of the team has a direct impact on product quality and project productivity. One commonly discussed structure is the ego-less team. In an ego-less team, the goals of the group are set by consensus, and input from every member is taken for major decisions. Group leadership rotates among the group members. Due to this nature, ego-less teams are sometimes called democratic teams. The structure allows input from all members, which can lead to better decisions on difficult problems; this suggests that the structure is well suited for long-term, research-type projects that do not have time constraints. On the other hand, it is not suitable for regular tasks, where the extra communication of a democratic structure is unnecessary and results in inefficiency.

A chief programmer team, in contrast to an ego-less team, has a hierarchy. It consists of a chief programmer, who has a backup programmer, a program librarian, and some programmers. The chief programmer is responsible for all major technical decisions of the project. He or she does most of the design and assigns coding of the different parts of the design to the programmers.

Chief programmer
    Backup programmer
    Librarian
    Programmers

Figure 3.8: Chief Programmer Team Structure

A third team structure, called the controlled decentralized team, tries to combine the strengths of the democratic and chief programmer teams. It consists of a project leader who has a group of senior programmers under him or her, while under each senior programmer is a group of junior programmers.

Q3.5 Questions

1. Bring out the importance of the staffing and personnel plan.
2. Explain how personnel planning is done during the planning phase of a project.
3. Explain the team structures in detail.
4. As an exercise, try to find out the other team structures followed in industry these days.

3.6 SOFTWARE CONFIGURATION MANAGEMENT


Software engineers usually find coding to be the most satisfying aspect of their job. This is easy to understand because programming is a challenging, creative activity requiring extensive technical skills. It can mean getting to play with state of the art tools, and it provides almost instant gratification in the form of immediate feedback. Programming is the development task that most readily comes to mind when the profession of software engineering is mentioned.

That said, seasoned engineers and project managers realize that programmers are part of a larger team. All of the integral tasks, such as quality assurance and verification and validation, are behind-the-scenes activities necessary to turn standalone software into a useful and usable commodity. Software configuration management (SCM) falls into this category: it can't achieve star status like the latest killer app, but it is essential to project success. The smart software project manager highly values the individuals and tools that provide this service. This chapter will answer the following questions about software configuration management.

What is Software Configuration Management?

Software Configuration Management (SCM) is the organization of the components of a software system so that they fit together in working order, never out of sync with each other. Those who have studied the best way to manage the configuration of software parts have more elegant definitions. Roger Pressman says that SCM is a set of activities designed to control change by identifying the work products that are likely to change, establishing relationships among them, defining mechanisms for managing different versions of these work products, controlling the changes imposed, and auditing and reporting on the changes made. The Software Engineering Institute says that it is necessary to establish and maintain the integrity of the products of the software project throughout the software life cycle. Activities necessary to accomplish this include identifying configuration items/units, systematically controlling changes, and maintaining the integrity and traceability of the configuration throughout the software life cycle. Military standards view configuration as the functional and/or physical characteristics of hardware/software as set forth in technical documentation and achieved in a product. In identifying the items that need to be configured, we must remember that all project artifacts are candidates: documents, graphical models, prototypes, code and any internal or external deliverable that can undergo change.

Why is SCM important?

Software project managers pay attention to the planning and execution of configuration management, an integral task, because it facilitates the ability to communicate the status of documents and code as well as the changes that have been made to them.
113

NOTES

Anna University Chennai

DSE 112

SOFTWARE ENGINEERING

NOTES

High-quality released software has been tested and used, making it a reusable asset and saving development costs. Reused components aren't free, though; they require integration into new products, a difficult task without knowing exactly what they are and where they are. CM enhances the ability to provide the maintenance support necessary once the software is deployed. If software didn't change, maintenance wouldn't exist. Of course, changes do occur. The National Institute of Standards and Technology (NIST) says that software will be changed to adapt, perfect, or correct it. Pressman points out that new business, new customer needs, reorganizations, and budgetary or scheduling constraints may lead to software revision.

CM works for the project and the organization in other ways as well. It helps to eliminate confusion, chaos, double maintenance, the shared data problem, and the simultaneous update problem, to name but a few of the issues discussed in this chapter.

Who is involved in SCM?

Virtually everyone on a software project is affected by SCM. From the framers of the project plan to the final tester, we rely on it to tell us how to find the object with the latest changes. During development, when iterations are informal and frequent, little needs to be known about a change except what it is, who did it, and where it is. In deployment and baselining, changes must be prioritized, and the impact of a change upon all customers must be considered. A change control board (CCB) is the governing body for modifications after implementation.

How can Software Configuration Management be implemented in an Organization?

Because SCM is such a key tool in improving the quality of delivered products, understanding it and how to implement it in your organization and on your project is a critical success factor. This chapter will review SCM plan templates and provide you with a composite SCM plan template for use in your projects. We will cover the issues and basics for a sound software project CM system, including:

1. SCM principles
2. The four basic requirements for an SCM system
3. Planning and organizing for SCM
4. SCM tools
5. Benefits of SCM
6. The path to SCM implementation
Configuration management occurs throughout the product development life cycle. SCM is an integral task, beginning early in the life cycle: required from the beginning of the system exploration phase, the project's software configuration management system must remain available for the remainder of the project.

SCM Principles

Understanding of SCM: An understanding of SCM is critical to any organization attempting to institute a system of product control. Understanding through training is a key initial goal, as shown in the pyramid. Executives and management must understand both the benefits and the cost of SCM to provide the needed support for its implementation. Software developers must understand the basics of SCM because they are required to use the tool in building their software products. Without a total understanding, a partial implementation of SCM with workarounds and back doors will result in disaster for an SCM system.

SCM Plans and Policies: Development of an SCM policy for an organization, and of the subsequent plans for each product developed, is crucial to successful SCM implementation. Putting SCM into an organization is a project like any other, requiring resources of time and money. There will be specific deliverables and a timeline against which to perform. The policy for the organization lays out, in a clear and concise fashion, the expectations that the organizational leadership has for its system. It must lay out the anticipated benefits and the method for measuring performance against those benefits.

SCM Process: The specific processes of SCM are documented for all users to recognize. Not all SCM processes need to be used within an organization or on a product, yet it is important to have available, in plain sight, those processes that are used specifically in your organization. This also maps those processes to how they are implemented.

Metrics: The measures used to show conformance to policy and product plans are important details. These measures show where the organization is along the path to reaping the benefits of SCM.
Tools for SCM: The tools used to implement SCM are the next-to-last item on the pyramid. For too many managers, this is often the first instead of the fifth step in SCM: many organizations and projects simply buy a tool, plop it in place, and expect magic. Actually, it makes little sense to pick the tools to use for SCM without having done all the previous work. Putting a tool in place without training, policy or metrics is an exercise in automated chaos: you will simply have an automated way to turn out the wrong product faster.

SCM is an SEI CMM Level 2 Key Process Area

The goals for SCM at Maturity Level 2 are:

- Software configuration management activities are planned.
- Selected software work products are identified, controlled and available.
- Changes to identified software work products are controlled.
- Affected groups and individuals are informed of the status and content of software baselines.

Questions that assessors might ask include:
- Is a mechanism used for controlling changes to the software requirements?
- Is a mechanism used for controlling changes to the software design?
- Is a mechanism used for controlling changes to the code?
- Is a mechanism used for configuration management of the software tools used in the development process?

The Four Basic Requirements for an SCM System
1. Identification
2. Control
3. Audit
4. Status accounting

Configuration Identification: The basic goal of SCM is to manage the configuration of the software as it evolves during development. The configuration of the software is essentially the arrangement or organization of its different functional units or components. Effective management of the software configuration requires careful definition of the different baselines, and controlling the changes to these baselines. Since the baseline consists of
the SCIs, SCM starts with the identification of configuration items. One common practice is to have only coded modules as configuration items, since a large number of people are usually involved in coding and the code of one person often depends on the code of another.
Configuration Control: The engineering change proposal is the basic document used for defining and requesting a change to an SCI. This proposal describes the proposed change, the rationale for it, the baselines and SCIs that are affected, and the cost and schedule impacts. Engineering change proposals are sent to a Configuration Control Board (CCB). The important factor in configuration control is the procedure for controlling the changes. Once an engineering change proposal has been approved by the CCB, the actual change in the SCI will occur. The procedures for making these changes must be specified, and tools can be used to enforce them. One method for controlling changes during the coding stages is the use of program support libraries.
Status Accounting and Auditing: Configuration auditing is concerned with determining how accurately the current software system implements the system defined in the baseline and the requirements document, and with increasing the visibility and traceability of the software. Auditing procedures are also responsible for establishing a new baseline, and may be different for different baselines.
Configuration Management Plans: The SCM plan needs to specify the types of SCIs that will be selected and the stages during the project where baselines should be established. Note that the plan can specify only the type of objects that should be selected; it may not be possible to identify the exact items, as they may not exist at planning time. For example, we can specify that the code of any module that is independently unit tested will be considered an SCI; however, we cannot identify the particular modules that will eventually become the SCIs. (A small illustrative sketch of change-control bookkeeping follows the review questions below.)
Q3.6 Questions
1. What is Software Configuration Management?
2. What is the importance of SCM in any project?
3. Who are the personnel involved in SCM?
4. Explain in detail the principles of SCM.
5. What are the goals of SCM at Maturity Level 2?
6. Explain in detail the basic requirements of the SCM system.
7. Consider a project of your choice and try to incorporate the SCM activities.
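The bookkeeping behind configuration control can be pictured with a small sketch. The snippet below is illustrative only - the class, field and status names are hypothetical and not taken from the text - and simply models an engineering change proposal moving through CCB review before the affected SCI is changed.

from dataclasses import dataclass
from enum import Enum

class Status(Enum):                  # hypothetical life-cycle states for a change proposal
    SUBMITTED = "submitted"
    APPROVED = "approved"
    REJECTED = "rejected"
    IMPLEMENTED = "implemented"

@dataclass
class ChangeProposal:                # an engineering change proposal against one SCI
    sci: str                         # configuration item affected, e.g. a module name
    rationale: str
    impact: str                      # summary of cost and schedule impact
    status: Status = Status.SUBMITTED

def ccb_review(proposal: ChangeProposal, approve: bool) -> ChangeProposal:
    # The CCB either approves or rejects the proposal; only approved
    # proposals may lead to an actual change in the baseline.
    proposal.status = Status.APPROVED if approve else Status.REJECTED
    return proposal

# Usage: a proposal is raised, reviewed and, if approved, implemented.
ecp = ChangeProposal("payroll_calc module", "fix tax rounding", "2 days, low risk")
ccb_review(ecp, approve=True)
if ecp.status is Status.APPROVED:
    ecp.status = Status.IMPLEMENTED  # the SCI is changed and the baseline updated

A tool-supported SCM system would additionally record who raised the proposal, when the CCB met and which baseline was affected; the point of the sketch is only that every change to an SCI passes through an explicit, recorded decision.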

3.7 QUALITY ASSURANCE PLAN


Basic Requirements and Scope
This section defines the requirements of a quality program that the Consultant shall establish, implement and execute before and during the performance of the design contract to furnish the design, specified materials, baseline survey, design processes and studies in conformance with the Design Agreement requirements.
1. The Consultant shall be responsible for providing a quality product to the Department under this Agreement. To this end, the Consultant shall have planned and established a QAP that shall be maintained throughout the term of the Agreement. The elements of the Consultant's QAP shall be imposed on all entities within the Consultant's organization.
2. All surveys, design calculations and studies shall be in accordance with standard specifications for bridge and highway design. Failure of the Consultant to follow standard design practice, unless deviations are specifically described in the Agreement, shall constitute justification for rejection of the work.
3. During the term of the Agreement, the Consultant's designated Quality Assurance Manager shall perform quality assurance functions. These functions shall include random checks of the QAP.
Definitions
Customer - Any internal unit that receives a product or service from the Consultant whose Quality System is being considered. Customers could also include supervisors, coworkers or management. External customers could include other agencies, political officials, communities or permitting agencies.
Non-conforming Product - Any product produced by the Consultant that does not meet the established specifications or requirements for quality as outlined in the Consultant's procedures and Quality Assurance Plan. Products could include items
produced, reports, designs, studies, calculations, letters, memos or services performed for the customer. Product - The result of a Consultants activities or processes. It may include a service provided to a customer. Quality Assurance (QA) - The process of checking or reviewing work tasks or processes to ensure quality. Personnel independent of the organizational unit responsible for the task or process typically conduct this. All those planned and systematic actions are necessary to provide adequate confidence that a product or service will satisfy the requirements for quality. QA includes the development of project requirements that meet the needs of all relevant internal and external agencies, planning the processes needed to achieve quality, providing equipment and personnel capable of performing tasks related to project quality, documenting the quality control efforts, and most importantly, performing checks necessary to verify that an adequate product is furnished as specified in the Agreement. Quality Assurance Program - The coordinated execution of applicable Quality Control Plans and activities for a project. Quality Assurance Program Plan - A written description of intended actions to achieve quality for the Consultants organization. Quality Control (QC) - The measuring, testing or inspection of a task or process by the personnel who perform the work. The Consultants operational techniques and activities that are used to fulfill requirements for quality. These techniques are used to provide a product or service that meets requirements. QC is carried out by the operating forces of the Consultant. Their goal is to do the work and meet the design goals. Generally, QC refers to the act of taking measurements and surveys and checking design calculations to meet contract specifications. Products may be design drawings, calculations, studies or surveys. QC also refers to the process of documenting such actions. Quality Management - That aspect of the overall management function that determines and implements the quality policy. Quality Oversight (QO) - The administration and review of a Quality Assurance Plan to ensure its success.
Activities conducted by the Department to verify the satisfactory implementation of approved Quality Assurance and Quality Control by organizations authorized to do so. QO can range from an informal process of keeping in touch with the QA organization to a second layer of QA activities, depending upon the circumstances. QO verifies the execution of the quality program.
Quality Policy - The overall quality intentions and direction of the Consultant's organization regarding quality, as formally expressed by the Consultant's management.
Quality Procedures - The written instructions for implementing various components of the organization's total Quality System.
Management Responsibility
Quality Control Policy
The Consultant's management with executive responsibilities shall define and document its policy for quality, including objectives for quality and its commitment to quality. The quality policy shall be relevant to the Consultant's organizational goals and the expectations and needs of the Department. The Consultant shall ensure that this policy is understood, implemented and maintained within the Consultant's organization.
Organization
The Consultant shall include a project organization chart that covers quality assurance and quality control functions. It shall include relationships between project management, key personnel of Subconsultants, design engineering and quality control. Resumes and responsibilities of the Consultant's Quality Control staff and its Quality Assurance staff shall be provided.
Responsibility and Authority
The Consultant shall assign an independent Quality Assurance Manager, not directly responsible for the work on this project, who shall manage quality matters for the project and have the authority to act in all quality matters for the Consultant. The Quality Assurance Manager shall be fully qualified by experience and technical training to perform the quality control activities. The Quality Assurance Manager's responsibilities shall include a method for verifying the implementation of adequate corrective actions for non-conforming work and notifying appropriate project management personnel. A specific description of the duties, responsibilities and methods used by the Consultant's

Quality Assurance staff to identify and correct non-conformities shall be included. The resume of the Quality Assurance Manager must include a description of his duties, responsibilities, and his record of quality control experience. The responsibility, authority and interrelation of all personnel who manage, perform and verify work affecting quality shall be defined and documented. Resource The Consultant shall identify resource requirements and provide adequate resources, including the assignment of trained personnel, for management, performance of work and verification activities including internal quality audits. Quality System General The Consultant shall establish, document and maintain a quality assurance program plan as a means of providing a design product that conforms to specified requirements. The quality assurance program plan shall include or make reference to the work procedures and outline the structure of the documentation used in the quality assurance program. Quality Plan Procedures The Consultant shall prepare documented procedures consistent with the requirements of this section and the Consultants or Subconsultants stated quality policy. Documented procedures may make reference to work instructions that define how an activity is performed. Quality Planning The Consultant shall define and document how the requirements for quality will be met. Quality planning shall be consistent with all other requirements of a Consultants Quality Assurance Program and shall be documented in a format to suit the Consultants methods of operation. Agreement Review The Consultant shall establish and maintain documented procedures for Agreement reviews and for the coordination of all applicable activities, to verify that the services meet the requirements.
Review and Amendment to Agreement
The Consultant shall review and concur with all Agreement commitments prior to the execution of the Agreement. The Consultant shall also establish the responsibilities for coordinating and conducting Agreement reviews, the distribution of documents for review, and the process for identifying and amending discrepancies within the Agreement.
Records
Records of Agreement reviews and amendments shall be maintained and made accessible to personnel directly involved in the review process, in accordance with the terms of the Agreement.
Design Control
General
The Consultant shall establish and maintain documented procedures to control and verify that the design meets the specified requirements.
Design Input
A framework for initial design planning activities shall be established. The designer shall compile, record and verify information on field surveys and inspections. All relevant design criteria, including codes and standards, shall be established and made available to design personnel. Design schedules and design cost estimates shall be monitored and adhered to, with documentation of any deviations. A documented procedure for responding to all comments from the units, as coordinated by the Project Manager, shall be established.
Design Output
The designer shall establish methods and implement reviews to determine that completed designs are constructable, functional, meet the requirements and conform to established regulatory standards. Furthermore, the Consultant shall establish and implement procedures to ensure that only the most recent revisions to written procedures, codes, standards and relevant documents are used.
Design Changes
Before their implementation, all design changes and modifications shall be identified, documented, reviewed and reported for approval.

Organizational and Technical Interfaces Organizational and technical communication interfaces between different groups that input into the design process shall be defined and the necessary information documented, transmitted and regularly reviewed. These groups shall include the Consultant, outside agencies and any Sub consultants.


Document Control
General
The Consultant shall establish and maintain documented procedures to control all documents and data that relate to the requirements of this section, including, to the extent applicable, documents of external origin such as studies, reports, calculations, standards and record drawings. These procedures shall control the generation, distribution and confidentiality of all documents, as well as establish a system to identify, collect, index, file, maintain and dispose of all records. Documents and data can be in the form of any media, such as hard copy or electronic media.
Document and Data Approval and Issue
The documents and data shall be reviewed and approved for adequacy by authorized personnel prior to issue. A master list or equivalent document control procedure identifying the current revision status of documents shall be established and be readily available to preclude the use of invalid and/or obsolete documents.
Document and Data Changes
Changes to documents and data shall be reviewed and approved by the same functions or organizations that performed the original review and approval, unless specifically designated otherwise. The designated functions or organizations shall have access to pertinent background information upon which to base their review and approval. Where practical, the nature of the change shall be identified in the document or the appropriate attachments.

Control of Subconsultants


General The Consultant shall establish and maintain documented procedures to provide subcontracted or purchased services that conform to specified requirements.
Evaluation of Subconsultants
The Consultant shall:
a. Select Subconsultants on the basis of their ability to meet Agreement requirements and any specific quality control requirements. The Subconsultant shall be required to accept and implement the Consultant's QAP or to submit their own for review and approval by the Consultant.
b. Define the type and extent of control exercised by the Consultant over Subconsultants, including a description of the system used to review and monitor the activities and submissions of the Subconsultant. This control shall be dependent upon the type of service, the impact of a subcontracted service on the quality of the final design and, where applicable, on the quality audit reports and/or quality records of the Subconsultants.
c. Review quality records of Subconsultants consisting of quality control and quality assurance data for the project.
Design Product Identification and Traceability
Where appropriate, the Consultant shall establish and maintain documented procedures for identifying its design product by suitable means from its inception and during all stages of development, design and delivery. Where and to the extent that traceability is a specified requirement, the Consultant shall establish and maintain documented procedures for unique identification of individual design products. This identification shall be recorded.
Control of Department Supplied Product
The Consultant shall establish and maintain documented procedures for the control, verification, storage and maintenance of supplied products, such as record drawings or special equipment, provided for incorporation into the contract or for related activities. Any such product that is lost, damaged, or is otherwise unsuitable for use shall be recorded and reported.
Process Control
The Consultant shall identify and plan the design, survey, research or servicing processes which directly affect quality and shall carry out these processes under controlled conditions. Controlled conditions shall include the following:

a. Documented procedures defining the manner of design, survey, research or servicing, where the absence of such procedures could adversely affect quality; b. Use of suitable design, survey, research or servicing equipment, and a suitable working environment; c. Compliance with referenced standards/codes, quality plans and/or documented procedures; d. Monitoring and control of suitable process parameters and end product characteristics; e. The approval of special processes and equipment, if applicable; f. Criteria for workmanship, which shall be stipulated in the clearest practical manner (e.g., written standards, representative samples or illustrations);


g. Suitable maintenance of equipment, if applicable, to provide continuing process capability; h. A detailed description of unique procedures. The requirements for any qualification of special survey or research work, including the associated equipment and personnel.

Corrective and Preventive Action


General The Consultant shall document procedures to be utilized to implement corrective and preventive action. Corrective or preventive action taken to eliminate actual or minimize potential design non-conformities shall be to a degree appropriate to the magnitude of problems and commensurate with the risks encountered. The Consultant shall implement and record any changes to the documented procedures resulting from corrective and preventive action. Corrective Action The corrective action procedures to eliminate actual non-conforming design products shall include:

a. The effective handling of observations and reports of design product nonconformities, including developing interim measures, if warranted, to correct the actual non-conformity; b. Conducting an investigation into the root cause of non-conformities relating to the design product, process and quality system, and recording the results of the investigation c. Determination of the corrective action needed to eliminate the cause of the design non-conformities; d. Application of measures to determine that corrective action has been taken and that it is effective. Preventive Action The procedures for preventive action to minimize nonconformities shall include: a. The use of appropriate sources of information relating to the quality of the design product (such as concessions, audit results, quality records, service reports and complaints) to detect, analyze, and eliminate potential causes of nonconformities; b. Determination of the steps needed to deal with any problems requiring preventive action; c. Initiation of preventive action and appropriate follow-up reviews to determine that it is effective; d. Confirmation that relevant information on actions taken is submitted for the consultant management review. Control of Quality Records The Consultant shall establish and maintain documented procedures for identification, collection, indexing, access, filing, storage, maintenance, and disposition of quality records. Records may be in the form of any type of media, such as hard copy or electronic media. Quality records shall be maintained to demonstrate conformance to specified requirements and the effective operation of the quality system. Pertinent quality records from the Sub consultant shall be an element of these data.

All quality records shall be legible and shall be retained in such a way that they are readily retrievable in files that provide a suitable environment to prevent damage, deterioration or loss. Where agreed contractually, quality records shall be made available for evaluation for an agreed period. Internal Quality Audits The Consultant shall establish and maintain documented procedures for planning and implementing internal quality audits to verify whether quality activities and related results comply with planned arrangements and to determine the effectiveness of the quality system. Internal quality audits shall be scheduled on the basis of the status and importance of the activity to be audited and shall be carried out by personnel independent of those having direct responsibility for the activity being audited. The results of the audits shall be recorded and brought to the attention of the personnel having responsibility in the area audited. The management personnel responsible for the area shall take timely corrective action on deficiencies found during the audit. Follow-up audit activities shall verify and record the implementation and effectiveness of the corrective action taken. Training The Consultant shall establish and maintain documented procedures for identifying training needs and provide for the training of all personnel performing activities affecting quality. Personnel performing specific assigned tasks shall be qualified on the basis of appropriate education, training and/or experience, as required. Appropriate records of training shall be maintained Servicing of the Design Product Where servicing of the Consultants design product is a specified requirement, the Consultant shall establish and maintain documented procedures for performing, verifying, and reporting that the servicing meets the specified requirements. Servicing of a design product, for example, may include providing for field visits to investigate construction problems or providing related engineering support until the project is complete.
Statistical Techniques
Identification of Need The Consultant shall identify the need for statistical techniques required for special survey or research projects, if applicable. Procedures The Consultant shall establish and maintain documented procedures to implement and control the application of the statistical techniques. Handling, Storage, Packaging, Preservation and Delivery General The Consultant shall establish and maintain documented procedures for handling, storage, packaging, and delivery of the final design, survey or research product. Handling The Consultant shall provide methods of handling its final design, survey or research product to minimize damage, deterioration, loss or incorrect identification. Storage The Consultant shall use designated areas or files to minimize damage or deterioration to documents, plans, studies or reports prior to use or delivery. Appropriate methods for authorizing receipt to and dispatch from such areas shall be stipulated. Packaging The Consultant shall control packaging and labeling processes to the extent necessary to conform to specified requirements. Preservation The Consultant shall apply appropriate methods for preservation and segregation of the documents, plans, studies or reports when they are under its control. Delivery The Consultant shall arrange for the protection of the documents, plans, studies or reports after final checking. Where contractually specified, this protection shall be extended to include delivery to the destination.

Contractor's Quality Assurance and Management System
Contractor's Quality Assurance and Management System (hereinafter referred to as the QA System) shall comply with the requirements of ISO 9001 for work associated with design and ISO 9002 for manufacturing and construction work. Contractor shall maintain effective control of the quality of the Work, provide test facilities and perform all examinations and tests necessary to demonstrate conformance of the Work to the requirements of the Contract, and shall offer for acceptance only those aspects of the Work that so conform. Contractor shall be responsible for the provision of Objective Evidence that Contractor's controls and inspections are effective. For this purpose, Objective Evidence means any statement of fact, quantitative or qualitative, pertaining to the quality of the Work based on observations, measurements or tests which can be verified.
Quality System Documentation
At a minimum, the following documents shall be provided for surveillance of the Quality System during execution of the Contract:
1. Quality Manual
2. Quality Plan
3. Schedule of Quality Records
The Quality Plan shall include:
a) A policy statement identifying the quality system to be implemented for the Contract;
b) Management responsibilities specific to the Contract, including the responsibility and authority for quality;
c) The organization proposed for the Contract;
d) An outline of procedures for reviewing, updating and controlling the Quality Plan and referenced documentation;
e) The Quality System implementation plan;
f) Reference to technical/quality features peculiar to the Contract;
g) The method by which Contractor intends to control quality and complete the Work;
h) Contractor's method of control of sub-contract work;
i) Details of special processes and control procedures;
j) Details of design verification activities to be performed, including the methods to be employed to control design and the Design and Documentation Plan; and
k) Details of the quality records to be taken and maintained by Contractor.
Quality Verification
Contractor is responsible for ensuring that work (including subcontracts) delivered as part of the Contract meets all the technical and quality requirements.
a) Contractor shall provide the work as specified in the contract, together with documented evidence that the work conforms to the requirements.
b) Contractor shall provide the work, as specified in the contract, together with inspection reports and/or certificates of adequacy and compliance from a suitably qualified person, certifying the sufficiency, serviceability and integrity of the work.
Q3.7 Questions
1. What are the basic requirements of the Quality Assurance Plans?
2. Explain the terms customer, product and non-conforming product in the context of QA.
3. Explain the control of subconsultants.
4. What is the importance of corrective and preventive actions?
5. Explain the term Quality Control Policy.
6. Explain Quality Assurance in detail.
7. Explain in detail the Quality System and Quality Plan Procedures.
8. Explain in detail design control, document control and process control.
9. Write short notes on Responsibility and Authority, Reviews and Internal Quality Audits.
10. Explain the term Contractor's Quality Assurance and Management System in detail.
11. Explain Quality System Documentation in detail.
12. Write the Quality Document for any real-time application of your choice.

3.8 RISK MANAGEMENT


In this chapter we are concerned with the risk of development projects not proceeding according to plan. We are primarily concerned with the risks of projects running late or over budget, and with the identification of the steps that can be taken to avoid or minimize those risks. Some risks are more important than others. Whether or not a particular risk is important depends on the nature of the risk, its likely effects on a particular activity and
the criticality of the activity. High-risk activities on a project's critical path are a cause for concern. To reduce these dangers, we must ensure that risks are minimized or, at least, distributed over the project and ideally removed from critical-path activities. The risk of an activity running over time is likely to depend, at least in part, on who is doing or managing it. Evaluation of risk and the allocation of staff and other resources are therefore closely connected.
The nature of risk
For the purpose of identifying and managing those risks that may cause a project to overrun its time-scale or budget, it is convenient to identify three types of risk:
- Those caused by the inherent difficulties of estimation
- Those due to assumptions made during the planning process
- Those of unforeseen events occurring


Estimation Errors: Some tasks are harder to estimate than others because of a lack of experience of similar tasks or because of the nature of the task. Producing a set of user manuals is reasonably straightforward and, given that we have carried out similar tasks previously, we should be able to estimate with some degree of accuracy how long it will take and how much it will cost. On the other hand, the time required for program testing and debugging might be difficult to predict with a similar degree of accuracy, even if we have written similar programs in the past.
Planning Assumptions: At every stage during planning, assumptions are made which, if not valid, may put the plan at risk. Our activity network, for example, is likely to be built on the assumption of using a particular design methodology, which may subsequently be changed. We generally assume that, following coding, a module will be tested and then integrated with others. We might not plan for module testing showing up the need for changes in the original design, but in the event it might happen. At each stage in the planning process, it is important to list explicitly all of the assumptions that have been made and identify what effects they might have on the plan if they turn out to be inappropriate.
Eventualities: Some eventualities might never be foreseen and we can only resign ourselves to the fact that unimaginable things do sometimes happen. They are, however, very rare. The majority of unexpected events can, in fact, be identified - the requirements specification might be altered after some of the modules have been coded, the senior programmer might take maternity leave, the required hardware might not be delivered on time. Such events do happen from time to time and, although the likelihood of any one of them happening during a particular project may be relatively low, they must be considered and planned for.
Managing risk
The objectives of risk management are to avoid or minimize the adverse effects of unforeseen events by avoiding the risks or drawing up contingency plans for dealing with them. There are a number of models for risk management, but most are similar in that they identify two main components - risk identification and risk management.
Risk identification consists of listing all of the risks that can adversely affect the successful execution of the project.
Risk estimation consists of assessing the likelihood and impact of each hazard.
Risk evaluation consists of ranking the risks and determining risk aversion strategies.
Risk planning consists of drawing up contingency plans and, where appropriate, adding these to the project's task structure. With small projects, risk planning is likely to be the responsibility of the project manager, but medium or large projects will benefit from the appointment of a full-time risk manager.
Risk control concerns the main functions of the risk manager in minimizing and reacting to problems throughout the project. This function will include aspects of quality control in addition to dealing with problems as they occur.
Risk monitoring must be an ongoing activity, as the importance and likelihood of particular risks can change as the project proceeds.

Risk directing and risk staffing are concerned with the day-to-day management of risk. Risk aversion and problem-solving strategies frequently involve the use of additional staff, and this must be planned for and directed.
Risk identification
The first stage in any risk assessment exercise is to identify the hazards that might affect the duration or resource costs of the project. A hazard is an event that might occur and that will, if it does occur, create a problem for the successful completion of the project. In identifying and analyzing risks, we can usefully distinguish between the cause, its immediate effect and the risk that it will pose to the project. For example, the illness of a team member is a hazard that might result in the problem of late delivery of a component. The late delivery of that component is likely to have an effect on other activities and might, particularly if it is on the critical path, put the project completion date at risk.
A common way of identifying hazards is to use a checklist listing all the possible hazards and the factors that influence them. Typical checklists run to tens, even hundreds, of factors, and there are today a number of knowledge-based software products available to assist in this analysis. Some hazards are generic risks, that is, they are relevant to all software projects, and standard checklists can be used and augmented from an analysis of past projects to identify them. The categories of factors that will need to be considered include the following.
Application factors: the nature of the application - whether it is a simple data processing application, a safety-critical system or a large distributed system with real-time elements - is likely to be a critical factor. The expected size of the application is also important - the larger the system, the greater the likelihood of errors and of communication and management problems.
Staff factors: the experience and skills of the staff involved are clearly major factors - an experienced programmer is, one would hope, less likely to make errors than one with little experience. However, experience in coding small data processing modules in COBOL may be of little value if we are developing a complex real-time control system using C++.
Project factors: it is important that the project and its objectives are well defined and that they are absolutely clear to all members of the project team and all key stakeholders. Any possibility that this is not the case will pose a risk to success. Similarly, the quality plan must be agreed and adhered to by all participants, and any possibility that the quality plan is inadequate or not adhered to will jeopardize the project.
Project methods: using well-specified and structured methods for project management and system development will decrease the risk of delivering a system that is unsatisfactory or late. Using such methods for the first time, though, may cause problems and delays; it is only with experience that the benefits accrue.
Hardware/software factors: a project that requires new hardware for development is likely to pose a higher risk than one where the software can be developed on existing hardware. Where a system is developed on one type of hardware or software platform to be used on another, there might be additional risks at installation.
Changeover factors: the need for an all-in-one changeover to the new system poses particular risks. Incremental or gradual changeover minimizes the risks involved but is not always practical. Parallel running can provide a safety net but might be impossible or too costly.
Supplier factors: the extent to which a project relies on external organizations that cannot be directly controlled often influences the project's success. Delays in, for example, the installation of telephone lines or the delivery of equipment may be difficult to avoid, particularly if the project is of little consequence to the external supplier.
Environment factors: changes in the environment can affect a project's success. A significant change in the taxation regulations could, for example, have serious consequences for the development of a payroll application.
Health and safety factors: while not generally a major issue for software projects, the possible effects of project activities on the health and safety of the participants and the environment should be considered.
Risk analysis
Having identified the risks that might affect our project, we need some way of assessing their importance. Some risks will be relatively unimportant whereas some will be of major significance. Some are quite likely to occur.

The probability of a hazard occurring is known as the risk likelihood; the effect that the resulting problem will have on the project, if it occurs, is known as the risk impact; and the importance of the risk is known as the risk value or risk exposure. The risk value is calculated as:

risk exposure = risk likelihood × risk impact

Ideally the risk impact is estimated in monetary terms and the likelihood assessed as a probability. In that case the risk exposure will represent an expected cost, in the same sense that we calculate expected costs and benefits when discussing cost-benefit analysis. The risk exposures for the various risks can then be compared with each other to assess the relative importance of each risk, and they can be directly compared with the costs and likelihoods of success of various contingency plans.
Many risk managers use a simple scoring method to provide a quantitative measure for assessing each risk. Some just categorize likelihoods and impacts as high, medium or low, but this form of ranking does not allow the calculation of a risk exposure. A better and popular approach is to score the likelihood and impact on a scale of, say, 1 to 10, where the hazard that is most likely to occur receives a score of 10 and the least likely a score of 1. Ranking likelihoods and impacts on a scale of 1 to 10 is relatively easy, but most risk managers will attempt to assign scores in a more meaningful way. Impact measures, scored on a similar scale, must take into account the total risk to the project. This must include the following potential costs:
- The cost of delays to scheduled dates for deliverables
- Cost overruns caused by using additional and more expensive resources
- The costs incurred or implicit in any compromise to the system's quality or functionality
Prioritizing the risks
Managing risk involves the use of two strategies:
- Reducing the risk exposure by reducing the likelihood or impact
- Drawing up contingency plans to deal with the risk should it occur
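To make the scoring concrete, the short sketch below (illustrative only; the hazard names and scores are invented, not taken from the text) computes risk exposure as likelihood × impact on a 1-10 scale and ranks the hazards so that the highest exposures receive attention first.

# Minimal sketch of risk exposure scoring (hypothetical hazards and scores).
hazards = [
    # (hazard, likelihood 1-10, impact 1-10)
    ("Key designer leaves mid-project", 3, 9),
    ("Requirements change late",        7, 6),
    ("Test hardware delivered late",    5, 4),
]

def risk_exposure(likelihood, impact):
    # risk exposure = risk likelihood x risk impact
    return likelihood * impact

# Rank hazards by exposure, highest first, so they are prioritized.
ranked = sorted(hazards, key=lambda h: risk_exposure(h[1], h[2]), reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: exposure = {risk_exposure(likelihood, impact)}")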
Any attempt to reduce a risk exposure or put a contingency plan in place will have a cost associated with it. It is therefore important to ensure that this effort is applied in the most effective way, and we need a way of prioritizing the risks so that the more important ones can receive the greatest attention. Estimate values for the likelihood and impact of each of the risks, calculate their risk exposures, rank each risk according to its risk exposure and try to categorize each of them as high, medium or low priority. In practice there are generally other factors, in addition to the risk exposure value, that must also be taken into account when prioritizing risks.
Confidence of the risk assessment: Some of our risk exposure assessments will be relatively poor. Where this is the case, there is a need for further investigation before action can be planned.
Compound risks: Some risks will be dependent on others. Where this is the case, they should be treated together as a single risk.
The number of risks: There is a limit to the number of risks that can be effectively considered and acted on by a project manager. We might therefore wish to limit the size of the prioritized list.
Cost of action: Some risks, once recognized, can be reduced or avoided immediately with very little cost or effort, and it is sensible to take action on these regardless of their risk value. For other risks we need to compare the costs of taking action with the benefits of reducing the risk. One method for doing this is to calculate the Risk Reduction Leverage (RRL) using the equation

RRL = (REbefore - REafter) / (risk reduction cost)

where REbefore is the original risk exposure value, REafter is the expected risk exposure value after taking action, and the risk reduction cost is the cost of implementing the risk reduction action. Risk reduction costs must be expressed in the same units as risk values - that is, expected monetary values. An RRL greater than one indicates that we can expect to gain from implementing the risk reduction plan, because the expected reduction in risk exposure is greater than the cost of the plan.
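As a small worked illustration (the figures are invented for the example, not drawn from the text): suppose a hazard has a risk exposure of Rs. 200,000, and spending Rs. 20,000 on prototyping is expected to cut that exposure to Rs. 50,000. Then RRL = (200,000 - 50,000) / 20,000 = 7.5. Since this is well above 1, the risk reduction action is expected to pay for itself; an RRL below 1 would suggest the action costs more than the exposure it removes.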

Reducing the risks
Broadly, there are five strategies for risk reduction:
Hazard prevention - Some hazards can be prevented from occurring, or their likelihood reduced to insignificant levels. The risk of key staff being unavailable for meetings can be minimized by early scheduling.
Likelihood reduction - Some risks, while they cannot be prevented, can have their likelihoods reduced by prior planning. The risk of late changes to a requirements specification can, for example, be reduced by prototyping.
Risk avoidance - A project can, for example, be protected from the risk of overrunning the schedule by increasing duration estimates or reducing functionality.
Risk transfer - The impact of some risks can be transferred away from the project by, for example, contracting out or taking out insurance.
Contingency planning - Some risks are not preventable, and contingency plans will need to be drawn up to reduce the impact should the hazard occur. A project manager should draw up contingency plans for using agency programmers to minimize the impact of any unplanned absence of programming staff.

Table 3.4: Software project risks and strategies for risk reduction

Risk: Personnel shortfalls
Risk reduction techniques: Staffing with top talent; job matching; team building; training and career development; early scheduling of key personnel

Risk: Unrealistic time and cost estimates
Risk reduction techniques: Multiple estimation techniques; design to cost; incremental development; recording and analysis of past projects; standardization of methods

Risk: Developing the wrong software functions
Risk reduction techniques: Improved project evaluation; formal specification methods; user surveys; prototyping; early user manuals

Risk: Developing the wrong user interface
Risk reduction techniques: Prototyping; task analysis; user involvement

Risk: Gold plating
Risk reduction techniques: Requirements scrubbing; prototyping; cost-benefit analysis; design to cost

Risk: Late changes to requirements
Risk reduction techniques: Stringent change control procedures; high change threshold; incremental prototyping; incremental development (defer changes)

Risk: Shortfalls in externally supplied components
Risk reduction techniques: Benchmarking; inspections; formal specifications; contractual agreements; quality assurance procedures and certification

Risk: Shortfalls in externally performed tasks
Risk reduction techniques: Quality assurance procedures; competitive design or prototyping; team building; contract incentives
Evaluating risks to the schedule
We have seen that not all risks can be eliminated - even those that are classified as avoidable or manageable can, in the event, still cause problems affecting activity durations. By identifying and categorizing those risks and, in particular, their likely effects on the duration of planned activities, we can assess what impact they are likely to have on our activity plan.
Using PERT to evaluate the effects of uncertainty
PERT was developed to take account of the uncertainty surrounding estimates of task durations. It was developed in an environment of expensive, high-risk and state-of-the-art projects not that dissimilar to many of today's large software projects. The method is very similar to the CPM technique but, instead of using a single estimate for the duration of each task, PERT requires three estimates:
- Most likely time: the time we would expect the task to take under normal circumstances; we shall denote this by the letter m
- Optimistic time: the shortest time in which we could expect to complete the activity, barring outright miracles; we shall use the letter a to denote this
- Pessimistic time: the worst possible time, allowing for all reasonable eventualities but excluding acts of God and warfare; we shall use the letter b to denote this

PERT then combines these three estimates to form a single expected duration, te, using the formula

te = (a + 4m + b) / 6

(A short worked example is given after the question list below.)
Using expected durations: The expected durations are used to carry out a forward pass through the network, using the same method as the CPM technique. In this case, however, the calculated event dates are not the earliest possible dates but the dates by which we expect to achieve those events.
Having studied software cost, effort and schedule estimation, the techniques used for this estimation, and SCM and Software Quality Assurance in brief, the next step of the SDLC is the design phase. The various topics on software design, such as the software design principles, software design methodologies, and design validation and metrics, are discussed in the next unit.
Q3.9 Questions
1. Explain the following terms:
   a. Most likely time
   b. Optimistic time
   c. Pessimistic time
2. What are the various ways of reducing the risks?
3. Explain the term RRL.
4. How does one prioritize the risks?
5. What is risk impact?
6. What is risk exposure?
7. Write a note on risk analysis.
8. What are the various factors that cause risks?
9. Explain in detail the Risk Management.
10. How are the risks identified? Explain in detail.
11. Explain PERT in detail with an example.
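As the worked example promised above (the figures are invented purely for illustration): for an activity with optimistic estimate a = 4 days, most likely estimate m = 6 days and pessimistic estimate b = 11 days, the expected duration is te = (4 + 4 × 6 + 11) / 6 = 39 / 6 = 6.5 days. It is this expected duration, rather than the single most likely value, that is carried forward through the network in the PERT forward pass.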

REFERENCES
1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, 6th edition, McGraw-Hill International, 2005.
2. Bob Hughes and Mike Cotterell, Software Project Management, 4th edition.
3. http://www.netmba.com/operations/project/cpm/
4. http://www.sce.carleton.ca/faculty/chinneck/po/Chapter11.pdf
5. http://www.netmba.com/operations/project/pert/
6. http://www.cs.utsa.edu/~niu/teaching/cs3773Spr07/SoftPlan.pdf

UNIT IV
4 INTRODUCTION
The design of any software system is one of the most demanding activities in development, as several strategies and concepts go into it. There are many paradigms for software design. Here we discuss the various design strategies and approaches, together with verification and the metrics for measuring their effectiveness.

4.1 LEARNING OBJECTIVES


1. What is function-oriented design?
2. What are the design principles?
3. Module-level concepts - coupling and cohesion
4. Structured design methodology
5. Module specifications
6. Detailed design
7. Design verification and design metrics

4.2 FUNCTION-ORIENTED DESIGN

Function-oriented design is design in terms of functional units which transform inputs to outputs.
Objectives
1. To explain how a software design may be represented as a set of functions which share state
2. To introduce notations for function-oriented design
3. To illustrate the function-oriented design process by example
This approach has been practiced informally since programming began, and thousands of systems have been developed using it. It is supported directly by most programming languages, and most design methods are functional in their approach.
CASE tools are available for design support. The function-oriented view of software design is given in Figure 4.1 below.

Figure 4.1: A function-oriented view of design (functions F1 to F5 operating on a shared memory)

Structured System Analysis and Design and Object-Oriented Analysis and Design
SSAD and OOAD are the two main approaches followed in the development of a system. Comparing Structured System Analysis and Design with Object-Oriented Analysis and Design can be done as an exercise.
Functional and Object-Oriented Design
1. For many types of application, object-oriented design is likely to lead to a more reliable and maintainable system
2. Some applications maintain little state - function-oriented design is appropriate
3. Standards, methods and CASE tools for functional design are well-established
4. Existing systems must be maintained - function-oriented design will continue to be practiced
Functional design process
The functional design process consists of the following:

1. Data-flow design - model the data processing in the system using data-flow diagrams
2. Structural decomposition - model how functions are decomposed into sub-functions using graphical structure charts
Data-flow design
Figure 4.2 below gives the notations used in Data Flow Diagrams (DFDs).


Figure 4.2: DFD Notations

Data Flow Diagrams (DFDs) are a graphical representation of systems and system components. They show the functional relationships of the values computed by a system, including input values, output values and internal data stores. A DFD is a graph showing the flow of data values from their sources in objects, through the processes/functions that transform them, to their destinations in other objects. Some use a DFD to show control information; others do not.
Steps for Developing DFDs
1. Requirements determination
2. Divide activities
3. Model separate activities
4. Construct preliminary context diagram
5. Construct preliminary system diagram / level 0 diagram
6. Deepen into preliminary level n diagrams (primitive diagrams)
7. Combine and adjust separate level-0 to level-n diagrams
8. Combine level-0 diagrams into a definitive diagram
9. Complete the diagrams

Step 1: Requirements determination
This is the result of the preceding phases. Through different techniques, the analyst has obtained all kinds of specifications in natural language. This phase never stops until the construction of the DFD is completed, so it is also a recursive phase. At this point, the analyst should filter out the information valuable for the construction of the data flow diagram.
Step 2: Divide activities
The analyst should separate the different activities, their entities and their required data. Completeness per activity can be achieved by asking the informant for the components missing from the textual specification of the activity.
Step 3: Model separate activities
The activities have to be combined with the necessary entities and data stores into a model in which the input and output of an activity, as well as the sequence of data flows, can be distinguished. This phase should give a preliminary view of what data is wanted from, and given to, whom.
Step 4: Construct preliminary context diagram
The organization-level context diagram is very useful for identifying the different entities. It gives a steady basis for entity distinction and naming for the rest of the construction. From here on, the analyst can apply a top-down approach and start a structured decomposition.
Step 5: Construct preliminary level 0 diagrams
The overview, or parent, data flow diagram shows only the main processes. It is the level 0 diagram. This diagram should give a readable overview of the essential entities, activities and data flows. An over-detailed level 0 diagram should generalize appropriate processes into a single process.
Step 6: Deepen into preliminary level n diagrams
This step decomposes the level 0 diagrams. Each parent process is composed of more detailed processes, called child processes. The most detailed processes, which cannot be subdivided any further, are known as functional primitives. Process specifications are written for each of the functional primitives in a process.

Step 7: Combine and adjust level 0-n diagrams During the structured decomposition, the creation of the different processes and data flows most often generate an overlap in names, data stores and others. Within this phase, the analyst should attune the separate parent and child diagrams to each other into a standardized decomposition. The external sources and destinations for a parent should also be included for the child processes. Step 8: Combine level 0 diagrams into a definitive diagram The decomposition and adjustment of the leveled diagrams will most often affect the quantity and name giving of the entities. Step 9: Completion The final stage consists of forming a structured decomposition as a whole. The input and output shown should be consistent from one level to the next. The result of these steps, the global model, should therefore obey all the decomposition rules. Table 4.1: Some DFD rules Overall: 1. Know the purpose of the DFD. It determines the level of detail to be included in the diagram. 2. Organize the DFD so that the main sequence of actions reads left to right and top to bottom. Processes: 3. Very complex or detailed DFDs should be levelled. 4. Identify all manual and computer processes (internal to the system) with rounded rectangles or circles. 5. Label each process symbol with an active verb and the data involved. 6. A process is required for all data transformations and transfers. Therefore, never connect a data store to a data source or destination or another data store with just a data flow arrow. 7. Do not indicate hardware or whether a process is manual or computerized. 8. Ignore control information (ifs, ands, ors). 9. Identify all data flows for each process step, except simple record retrievals.
Data flows:

10. Label data flows on each arrow.
11. Use data flow arrows to indicate data movement, not non-data physical transfers.
Data stores:
12. Do not indicate file types for data stores.
13. Draw data flows into data stores only if the data store will be changed.
External entities:
14. Indicate external sources and destinations of data, when known, with squares.
15. Number each occurrence of repeated external entities.
16. Do not indicate persons or places as entity squares when the process is internal to the system.

Context Diagram
A context diagram very briefly explains the system to be designed: what is the system input, the system process and the system output. It is just a black-box representation of the system to be developed.
Student Administration System: Illustrative Example
The example given below is the context diagram for the student administration system. The system has to get the details of the student and process them, and either confirm or reject the student.
External entity: Student
Process: Student Administration (process application)
Data flows: Application Form, Confirmation/Rejection Letter

Figure 4.2: Context Diagram of the Student Administration System (the Student entity supplies Application Details to the Student Administration System process and receives Confirmation/Rejection Details back)

System/Level 0 DFD
External entity: Student
Processes: Check course available, Enroll student, Confirm registration
Data flows: Application Form, Course Details, Course Enrolment Details, Student Details, Confirmation/Rejection Letter
Data stores: Courses, Students

(The Level 0 diagram shows three numbered processes - 1.0 Check Course Available, 2.0 Enroll Student and 3.0 Confirm Registered Students - exchanging these data flows with the Student entity and the Courses and Students data stores.)
Figure 4.3: Level 0 DFD of Student Administration System
(Process 1.0 Check Course Available, Process 2.0 Enroll Student and Process 3.0 Confirm Registered Students exchange data with the Student entity and the Courses and Students data stores.)

Structural decomposition

A structure chart (module chart, hierarchy chart) is a graphic depiction of the decomposition of a problem. It is a tool to aid in software design. It is particularly helpful on large problems. A structure chart illustrates the partitioning of a problem into sub-problems and shows the hierarchical relationships among the parts. A classic organization chart for a company is an example of a structure chart. The top of the chart is a box representing the entire problem; the bottom of the chart shows a number of boxes representing the less complicated sub-problems. (Left-to-right ordering on the chart is irrelevant.)

A structure chart is NOT a flowchart. It has nothing to do with the logical sequence of tasks. It does NOT show the order in which tasks are performed. It does NOT illustrate an algorithm. Each block represents some function in the system, and thus should contain a verb phrase, e.g. Print report heading.

Figure 4.4: Event-partitioned DFD for the Order-Entry Subsystem

Steps to Create a Structure Chart from a DFD Fragment

1. Determine the primary information flow, which is the main stream of data transformed from some input form to an output form.
2. Find the process that represents the most fundamental change from input to output.
3. Redraw the DFD with inputs to the left and outputs to the right; the central transform process goes in the middle.
4. Generate a first-draft structure chart based on the redrawn data flow.

Figure 4.5: High-level Structure Chart for the Customer Order Program

Figure 4.6: The Create New Order DFD Fragment


Figure 4.7: Exploded View of Create New Order DFD

Figure 4.8: Rearranged Create New Order DFD


Figure 4.9: First Draft of the Structure Chart

Steps to Create a Structure Chart from a DFD Fragment

Add other modules:
o Get input data via user-interface screens
o Read from and write to data storage
o Write output data or reports

Add logic from structured English or decision tables.
Make final refinements to the structure chart based on quality-control concepts.

Figure 4.10: The Structure Chart for the Create New Order Program
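To relate a structure chart to code, the hierarchy of modules can be read as a hierarchy of subprogram calls, with data couples travelling as parameters and return values. The following is a hedged sketch in Python, used purely for illustration; the module names are assumptions loosely inspired by the Create New Order example and are not the exact boxes of Figure 4.10.

# Hedged sketch: a structure chart rendered as a call hierarchy.
# Function names are illustrative assumptions, not the figure's actual modules.

def get_customer_details(customer_id):
    # Afferent (input) branch: obtain and validate input data.
    return {"id": customer_id, "name": "N. Example"}

def create_order_record(customer, items):
    # Central transform: turn validated input into an order.
    return {"customer": customer["id"], "items": items,
            "total": sum(price for _, price in items)}

def print_confirmation(order):
    # Efferent (output) branch: format and emit results.
    print(f"Order for customer {order['customer']}: total = {order['total']}")

def create_new_order(customer_id, items):
    # Boss module: coordinates its subordinates; data couples are passed
    # down as parameters and returned back up as values.
    customer = get_customer_details(customer_id)
    order = create_order_record(customer, items)
    print_confirmation(order)
    return order

create_new_order(42, [("widget", 10.0), ("gadget", 5.5)])

Reading the call tree of such a program top-down reproduces the boxes and arrows of the structure chart.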

Figure 4.11: Combination of Structure Charts

Evaluating the Quality of a Structure Chart

1. Module coupling
o Measure of how a module is connected to other modules in the program
o Goal is to be loosely coupled

2. Module cohesion
o Measure of the internal strength of a module
o Module performs one defined task
o Goal is to be highly cohesive

Decomposition guidelines

1. For business applications, the top-level structure chart may have four functions, namely input, process, master-file-update and output

2. Data validation functions should be subordinate to an input function
3. Coordination and control should be the responsibility of functions near the top of the hierarchy
4. The aim of the design process is to identify loosely coupled, highly cohesive functions. Each function should therefore do one thing and one thing only
5. Each node in the structure chart should have between two and seven subordinates

Summary of Function-Oriented Design

1. Function-oriented design relies on identifying functions which transform inputs to outputs
2. Many business systems are transaction processing systems which are naturally functional
3. The functional design process involves identifying data transformations, decomposing functions into sub-functions and describing these in detail
4. Data-flow diagrams are a means of documenting end-to-end data flow. Structure charts represent the dynamic hierarchy of function calls
5. Data flow diagrams can be implemented directly as cooperating sequential processes

Q4.2 Questions

1. What are natural functional systems?
2. Explain the term functional design process.
3. What are the structural decomposition guidelines?
4. What is abstraction? Explain with an example.
5. What is Information Hiding? Bring out its importance.
6. Explain the term Modularity.
7. What is meant by central transform? Explain with an example.
8. Explain function-oriented design in detail with an example.
9. Explain concurrent system design in detail.
10. Explain the detailed design process in detail.


4.3 DESIGN PRINCIPLES


Producing the design of large systems can be an extremely complex task. Ad hoc methods for design will not be sufficient, especially since the criteria for judging the quality of a design are not quantifiable. Effectively handling the complexity will not only reduce the effort needed for design but can also reduce the scope for introducing errors during design.

Problem Partitioning

When solving a problem, the entire problem cannot be tackled at once. The complexity of large problems and the limitations of human minds do not allow large problems to be treated as huge monoliths. For solving larger problems, the basic principle is the time-tested principle of divide and conquer. Clearly, dividing in such a manner that all the divisions have to be conquered together is not the intent of this wisdom. This principle, if elaborated, would mean: divide into smaller pieces, so that each piece can be conquered separately.

For software design, therefore, the goal is to divide the problem into manageably small pieces that can be solved separately. It is this restriction of being able to solve each part separately that makes dividing the problem into pieces a more complex problem, and which many methodologies for system design aim to address. The basic motivation behind this restriction is the belief that if the pieces of a problem are solvable separately, the cost of solving the entire problem as a whole is more than the sum of the costs of solving all the pieces.

However, the different pieces cannot be entirely independent of each other, as together they form the system. The different pieces have to cooperate and communicate in order to solve the larger problem. This communication adds a complexity that arises due to partitioning and that may not have been there in the original problem. As the number of components increases, the cost of partitioning, together with the cost of this added complexity, may become more than the savings achieved by partitioning. It is at this point that no further partitioning needs to be done. The designer has to make the judgment about when to stop partitioning.

One of the most important quality criteria for software design is simplicity and understandability. It can be argued that maintenance is minimized if each part in the system can be easily related to the application, and if each piece can be modified separately. If a piece can be modified separately, we call it independent of other pieces.


If a module A is independent of module B, then we can modify A without introducing any unanticipated side effects in B. Total independence of the modules of a system is not possible, but the design process should support as much independence between modules as possible. The dependence between modules in a software system is one of the reasons for high maintenance cost. Clearly, proper partitioning will make the system easier to maintain by making the design easier to understand. Problem partitioning also aids design verification.

Abstraction

Abstraction is a very powerful concept that is used in all engineering disciplines. Abstraction is a tool that permits the designer to consider a component at an abstract level, without worrying about the details of the implementation of the component. Any component or system provides some services to its environment. An abstraction of a component describes the external behavior of the component without bothering about the internal details that produce the behavior. Presumably, the abstract definition of a component is much simpler than the component itself.

Abstraction is an indispensable part of the design process, and is essential for problem partitioning. Partitioning essentially is the exercise of determining the components of the system. However, these components are not isolated from each other; they interact with each other, and the designer has to specify how a component interacts with other components. If the designer has to understand the details of the other components to determine their external behavior, then we have defeated the very purpose of partitioning, namely isolating the component from others. In order to allow the designer to concentrate on one component at a time, abstraction of the other components is used.

Abstraction is used for existing components as well as for the components that are being designed. An abstraction of existing components plays an important role in the maintenance phase. For modifying a system, the first step is understanding what the system does and how it does it. The process of comprehending an existing system involves identifying the abstractions of subsystems and components from the details of their implementations. Using these abstractions, the behavior of the entire system can be understood. This also helps in determining how modifying a component affects the system.

During the design process, abstraction is used in the reverse manner than in the process of understanding a system.

During design the components do not exist; the designer specifies only the abstract specifications of the different components. The basic goal of system design is to specify the modules in a system and their abstractions. Once the different modules are specified, during the detailed design the designer can concentrate on one module at a time. The task in detailed design and implementation is essentially to implement the modules so that the abstract specifications of each module are satisfied.

There are two common abstraction mechanisms for software systems: functional abstraction and data abstraction. In functional abstraction, a module is specified by the function it performs. For example, a module to compute the sine of a value can be abstractly represented by the function sine. Similarly, a module to sort an input array can be represented by the specification of sorting.

The second mechanism for abstraction is data abstraction. Any entity in the real world provides some services to the environment to which it belongs. Often the entities provide some fixed, predefined services. The case of data entities is similar. There are certain operations that are required from a data object, depending on the object and the environment in which it is used. Data abstraction supports this view. Data is not treated simply as objects, but as objects with some predefined operations on them. The operations defined on a data object are the only operations that can be performed on those objects. From outside an object, the internals of the object are hidden and only the operations on the object are visible. Functional abstraction forms the basis of the structured design methodology, while data abstraction forms the basis of the object-oriented design methodology.

Top Down and Bottom Up Strategies

A system consists of components, which have components of their own; indeed a system is a hierarchy of components, the highest-level component corresponding to the total system. To design such a hierarchy there are two possible approaches: top-down and bottom-up. The top-down approach starts from the highest-level component of the hierarchy and proceeds through to lower levels. By contrast, a bottom-up approach starts with the lowest-level components of the hierarchy and proceeds progressively through higher levels to the top-level component.

A top-down design approach starts by identifying the major components of the system, decomposing them into their lower-level components and iterating until the desired level of detail is achieved.


A bottom-up design approach starts with designing the most basic or primitive components and proceeds to the higher-level components that use these lower-level components.

Top-down design methods often result in some form of stepwise refinement. Starting from an abstract design, in each step the design is refined to a more concrete level, until we reach a level where no more refinement is needed and the design can be implemented directly. Bottom-up methods work with layers of abstraction. Starting from the very bottom, operations are implemented that provide a layer of abstraction. The operations of this layer are then used to implement more powerful operations and a still higher layer of abstraction, until the stage is reached where the operations supported by the layer are the ones desired by the system.

Pure top-down or pure bottom-up approaches are often not practical. For a bottom-up approach to be successful, we must have a good notion of the top towards which the design should be heading. Without a good idea about the operations needed at the higher layers, it is difficult to determine what operations the current layer should support. Top-down approaches require some idea about the feasibility of the components specified during design. The components that are specified during design should be implementable, which requires some idea about the feasibility of the lower-level parts of a component. However, this is not a very major drawback, particularly in application areas where the existence of solutions is known. The top-down approach has been promulgated by many researchers and has been found to be extremely useful for design. Many design methodologies are based on the top-down approach.

Q4.3 Questions

1. What is problem partitioning?
2. Explain the design principles in detail.
3. What are the design strategies? Explain them in detail and compare the different strategies.


4.4 MODULE LEVEL CONCEPTS


A module is a logically separable part of a program. It is a program unit that is discrete and identifiable with respect to compiling and loading. In terms of common programming language constructs, a module can be a macro, a function, a procedure, a process, or a package. A system is considered modular if it consists of discrete components such that each component supports a well-defined abstraction and a change to one component has minimal impact on other components. Coupling and cohesion are two modularization criteria, which are often used together.

Coupling

Two modules are considered independent if one can function completely without the presence of the other. Obviously, if two modules are independent, they are solvable and modifiable separately. However, all the modules in a system cannot be independent of each other, as they must interact so that together they produce the desired external behavior of the system. The more connections between modules, the more dependent they are, in the sense that more knowledge about one module is required to understand or solve the other module. Hence, the fewer and simpler the connections between modules, the easier it is to understand one without understanding the other. The notion of coupling attempts to capture this concept of how strongly different modules are interconnected with each other.

Coupling between modules is the strength of interconnections between modules, or a measure of interdependence among modules. In general, the more we must know about module A in order to understand module B, the more closely connected A is to B. Highly coupled modules are joined by strong interconnections, while loosely coupled modules have weak interconnections. Independent modules have no interconnections.

Coupling is an abstract concept and is as yet not quantifiable, so no formulas can be given to determine the coupling between two modules. However, some major factors can be identified as influencing coupling between modules. Among them, the most important are the type of connection between modules, the complexity of the interface and the type of information flow between modules.

To keep coupling low, we would like to minimize the number of interfaces per module and minimize the complexity of each interface. An interface of a module is used to pass information to and from other modules. Coupling increases if a module is used by other modules via an indirect and obscure interface, for example by directly using the internals of a module or by utilizing shared variables. Complexity of the interface is another factor affecting coupling: the more complex each interface is, the higher will be the degree of coupling. For example, the complexity of the entry interface of a procedure depends on the number of items being passed as parameters and on the complexity of those items.

The type of information flow along the interfaces is the major factor affecting coupling. There are two kinds of information that can flow along an interface: data or control. Passing or receiving back control information means that the action of the module will depend on this control information, which makes it more difficult to understand the module and provide its abstraction.


Transfer of data information means that a module passes some data as input to another module and gets in return some data as output. This allows a module to be treated as a simple input-output function that performs some transformation on the input data to produce the output data.
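To make the data-versus-control distinction concrete, the following hedged sketch (in Python, used purely for illustration; the function names and rates are invented, not taken from the text) contrasts a data-coupled interface with a control-coupled one that passes a what-to-do flag:

# Hedged sketch: data coupling versus control coupling.

# Data coupling: the caller passes data in and gets data back.
# The called module can be understood as a simple input-output function.
def compute_net_pay(gross_pay, tax_rate):
    return gross_pay * (1.0 - tax_rate)

# Control coupling: the caller passes a flag that tells the module what
# to do, so the caller must know something about the module's internals.
def process_pay(amount, mode):
    if mode == "NET":
        return amount * 0.7
    elif mode == "GROSS":
        return amount
    else:
        raise ValueError("unknown mode")

print(compute_net_pay(1000.0, 0.3))   # data in, data out
print(process_pay(1000.0, "NET"))     # behaviour steered by a control flag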


Cohesion

Cohesion is the concept that tries to capture intramodule strength. With cohesion we can determine how closely the elements of a module are related to each other. Cohesion of a module represents how tightly bound the internal elements of the module are to one another. Cohesion of a module gives the designer an idea about whether the different elements of a module belong together in the same module. Usually, the greater the cohesion of each module in the system, the lower will be the coupling between modules. There are several levels of cohesion:

1. Coincidental
2. Logical
3. Temporal
4. Procedural
5. Communicational
6. Sequential
7. Functional

Coincidental cohesion occurs when there is no meaningful relationship among the elements of a module. Coincidental cohesion can occur if an existing program is modularized by chopping it into pieces and making the different pieces into modules.

A module has logical cohesion if there is some logical relationship between the elements of the module, and the elements perform functions that fall in the same logical class. A typical example of this kind of cohesion is a module that performs all the inputs or performs all the outputs. In such a situation, if we want to input or output a particular record, we have to somehow convey this to the module.

Temporal cohesion is the same as logical cohesion, except that the elements are also related in time and are executed together. Modules that perform activities like initialization, cleanup and termination are usually temporally bound.

A procedurally cohesive module contains elements belonging to a common procedural unit. For example, a loop or a sequence of decision statements in a module may be combined to form a separate module. Procedural cohesion often cuts across functional lines. A module with only procedural cohesion may contain only part of a complete function or parts of several functions.

A module with communicational cohesion has elements that are related by a reference to the same input or output data. That is, in a communicationally bound module the elements are together because they operate on the same input or output data. An example of this could be a module to print and punch a record.

When the elements are together in a module because the output of one forms the input to another, we get sequential cohesion. If we have a sequence of elements in which the output of one forms the input to another, sequential cohesion does not provide any guidelines on how to combine them into modules. Sequentially cohesive modules bear a close resemblance to the problem structure. However, they are considered to be far from the ideal, which is functional cohesion.

Functional cohesion is the strongest cohesion. In a functionally bound module, all elements of the module are related to performing a single function. By function, we do not mean simply mathematical functions; modules accomplishing a single goal are also included. Functions like "compute square root" and "sort the array" are clear examples of functionally cohesive modules.

To find the cohesion level of a module, the following tests can be made:

1. If the sentence describing the module must be a compound sentence, if it contains a comma, or has more than one verb, the module is probably performing more than one function, and probably has sequential or communicational cohesion.
2. If the sentence contains words relating to time, like first, next, when, after, etc., then the module probably has sequential or temporal cohesion.
3. If the predicate of the sentence does not contain a single specific object following the verb (such as "edit all data"), the module probably has logical cohesion.
4. Words like initialize and cleanup imply temporal cohesion.
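The difference between a functionally cohesive module and a temporally cohesive one can be illustrated with a short hedged sketch (Python used purely for illustration; the function names and start-up actions are hypothetical):

# Hedged sketch: contrasting cohesion levels.

# Functional cohesion: every statement contributes to one goal.
def sort_array(values):
    return sorted(values)

# Temporal cohesion: unrelated actions grouped only because they all
# happen at "start-up time".
def initialize():
    counters = {"orders": 0, "errors": 0}   # reset counters
    log = open("run.log", "w")              # open the log file
    print("System starting")                # greet the operator
    return counters, log

print(sort_array([3, 1, 2]))
counters, log = initialize()
log.close()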
Q4.4 Questions

1. Explain coupling and cohesion.
2. What are the various types of coupling and cohesion? Explain them in detail.
3. How do you measure the goodness of a design?


4.5 STRUCTURED DESIGN

Structured design is based on functional decomposition, where the decomposition is centered on the identification of the major system functions and their elaboration and refinement in a top-down manner. It typically follows from the data flow diagrams and associated process descriptions created as part of Structured Analysis. Structured design uses the following strategies:

1. Transformation analysis
2. Transaction analysis


These strategies, along with a few heuristics (like fan-in / fan-out, span of effect vs. scope of control, etc.), are used to transform a DFD into a software architecture (represented using a structure chart).

In structured design we functionally decompose the processes in a large system (as described in the DFD) into components (called modules) and organize these components in a hierarchical fashion (structure chart) based on the following principles:

1. Abstraction (functional)
2. Information Hiding
3. Modularity

Abstraction

"A view of a problem that extracts the essential information relevant to a particular purpose and ignores the remainder of the information." [IEEE, 1981]

"A simplified description, or specification, of a system that emphasizes some of the system's details or properties while suppressing others. A good abstraction is one that emphasizes details that are significant to the reader or user and suppresses details that are, at least for the moment, immaterial or diversionary." [Shaw, 1984]

While decomposing, we consider the top level to be the most abstract, and as we move to lower levels, we give more details about each component. Such levels of abstraction provide flexibility to the code in the event of any future modifications.

Information Hiding

"Every module is characterized by its knowledge of a design decision which it hides from all others. Its interface or definition was chosen to reveal as little as possible about its inner workings." [Parnas, 1972]


Parnas advocates that the details of difficult and likely-to-change decisions be hidden from the rest of the system. Further, the rest of the system will have access to these design decisions only through well-defined and (to a large degree) unchanging interfaces. This gives greater freedom to programmers. As long as the programmer sticks to the interfaces agreed upon, she can have flexibility in altering the component at any given point.

There are degrees of information hiding. For example, at the programming language level, C++ provides for public, private, and protected members, and Ada has both private and limited private types. In the C language, information hiding can be done by declaring a variable static within a source file.

The difference between abstraction and information hiding is that the former (abstraction) is a technique that is used to help identify which information is to be hidden.

The concept of encapsulation as used in an object-oriented context is essentially different from information hiding. Encapsulation refers to building a capsule around some collection of things [Wirfs-Brock et al, 1990]. Programming languages have long supported encapsulation. For example, subprograms (e.g., procedures, functions, and subroutines), arrays, and record structures are common examples of encapsulation mechanisms supported by most programming languages. Newer programming languages support larger encapsulation mechanisms, e.g., classes in Simula, Smalltalk and C++, modules in Modula, and packages in Ada.

Modularity

Modularity leads to components that have clearly defined inputs and outputs, and each component has a clearly stated purpose. Thus, it is easy to examine each component separately from the others to determine whether the component implements its required tasks. Modularity also helps one to design different components in different ways, if needed. For example, the user interface may be designed with object orientation while the security design might use state-transition diagrams.
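As a small illustration of information hiding and modular interfaces, the following hedged sketch (in Python, used only because it is compact; the class and method names are assumptions, not drawn from the text) shows a module that exposes a narrow, stable interface while keeping its internal representation private by convention:

# Hedged sketch: information hiding behind a small, stable interface.

class AccountStore:
    """Clients see only deposit/balance; the storage layout is hidden."""

    def __init__(self):
        self._balances = {}          # leading underscore: internal detail

    def deposit(self, account_id, amount):
        self._balances[account_id] = self._balances.get(account_id, 0) + amount

    def balance(self, account_id):
        return self._balances.get(account_id, 0)

# Client code depends only on the interface, so the internal dictionary
# could later be replaced (e.g. by a file or a database) without changing callers.
store = AccountStore()
store.deposit("A-1", 100)
print(store.balance("A-1"))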
4.6 STRUCTURED DESIGN METHODOLOGY

The two major design methodologies are based on:

1. Functional decomposition
2. Object-oriented approach
Strategies for converting the DFD into a Structure Chart

1. Break the system into suitably tractable units by means of transaction analysis
2. Convert each unit into a good structure chart by means of transform analysis
3. Link back the separate units into the overall system implementation


Transaction Analysis: An Illustrative Example

The transactions are identified by studying the discrete event types that drive the system. For example, with respect to railway reservation, a customer may give the following transaction stimuli:

Figure 4.12: Use Case Diagram of Transaction Analysis

The three transaction types here are: Check Availability (an enquiry), Reserve Ticket (booking) and Cancel Ticket (cancellation). At any given time we will get customers interested in giving any of the above transaction stimuli. In a typical situation, any one stimulus may be entered through a particular terminal. The human user informs the system of her preference by selecting a transaction type from a menu. The first step in our strategy is to identify such transaction types and draw the first-level breakup of modules in the structure chart, by creating a separate module to coordinate the various transaction types. This is shown in Figure 4.13 as follows:

Figure 4.13: First Cut Structure Chart



The Main (), which is an overall coordinating module, gets the information about which transaction the user prefers to do through TransChoice. The TransChoice is returned as a parameter to Main (). Remember, we are following our design principles faithfully in decomposing our modules. The actual details of how GetTransactionType () works are not relevant to Main (). It may, for example, refresh and print a text menu, prompt the user to select a choice and return this choice to Main (). It will not affect any other components in our breakup, even when this module is changed later to return the same input through a graphical interface instead of a textual menu. The modules Transaction1 (), Transaction2 () and Transaction3 () are the coordinators of transactions one, two and three respectively. The details of these transactions are to be exploded in the next levels of abstraction.

We will continue to identify more transaction centers by drawing a navigation chart of all input screens that are needed to get the various transaction stimuli from the user. These are to be factored out in the next levels of the structure chart (in exactly the same way as seen before), for all identified transaction centers.
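The coordination described above can be sketched in code as a simple dispatcher. This is a hedged illustration in Python; the function names mirror the module names used in the text, but the bodies and menu wording are assumptions.

# Hedged sketch: Main() coordinating transaction modules via TransChoice.

def get_transaction_type():
    # Could be a text menu today and a graphical interface later;
    # Main() is unaffected as long as the returned choice is the same.
    return input("Choose 1) Check Availability 2) Reserve 3) Cancel: ").strip()

def transaction1():
    print("Checking availability...")

def transaction2():
    print("Reserving ticket...")

def transaction3():
    print("Cancelling ticket...")

def main():
    trans_choice = get_transaction_type()
    dispatch = {"1": transaction1, "2": transaction2, "3": transaction3}
    handler = dispatch.get(trans_choice)
    if handler is None:
        print("Unknown transaction type")
    else:
        handler()

if __name__ == "__main__":
    main()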

Transform Analysis

Transform analysis is the strategy of converting each piece of the DFD (may be from level 2 or level 1, etc.) for all the identified transaction centers. In case the given system has only one transaction (like a payroll system), then we can start the transformation from the level 1 DFD itself. Transform analysis is composed of the following five steps [Page-Jones, 1988]:

1. Draw a DFD of a transaction type (usually done during the analysis phase)
2. Find the central functions of the DFD
3. Convert the DFD into a first-cut structure chart
4. Refine the structure chart
5. Verify that the final structure chart meets the requirements of the original DFD

Payroll System: An Illustrative Example

A payroll system deals with the management of salary payment for all the employees in the organization. It has to calculate the number of hours each employee has worked and the payment that he has to receive. If he has taken any leave, then the corresponding amount has to be deducted from his salary. It should also calculate the pay for the number of extra hours he has worked.


1. Identifying the central transform


Figure 4.14: Identifying the Central Transform

The central transform is the portion of the DFD that contains the essential functions of the system and is independent of the particular implementation of the input and output. One way of identifying the central transform (Page-Jones, 1988) is to identify the centre of the DFD by pruning off its afferent and efferent branches. An afferent stream is traced from outside of the DFD to a flow point inside, just before the input is being transformed into some form of output (for example, a format or validation process only refines the input, it does not transform it). Similarly, an efferent stream is a flow point from where output is formatted for better presentation. The processes between the afferent and efferent streams represent the central transform (marked within dotted lines above). In the above example, P1 is an input process, and P6 & P7 are output processes. The central transform processes are P2, P3, P4 and P5, which transform the given input into some form of output.


2. First-cut Structure Chart

To produce the first-cut (first draft) structure chart, we first have to establish a boss module. A boss module can be one of the central transform processes. Ideally, such a process has to be more of a coordinating process (encompassing the essence of the transformation). In case we fail to find a boss module within, a dummy coordinating module is created.

Figure 4.15: First Cut Structure Chart of the Payroll System

In the above illustration, we have a dummy boss module Produce Payroll, which is named in a way that indicates what the program is about. Having established the boss module, the afferent stream processes are moved to the leftmost side of the next level of the structure chart, the efferent stream processes to the rightmost side, and the central transform processes in the middle. Here, we moved a module to get a valid timesheet (afferent process) to the left side (indicated in yellow). The two central transform processes are moved to the middle (indicated in orange). By grouping the other two central transform processes with the respective efferent processes, we have created two modules (in blue), essentially to print results, on the right side.

The main advantage of the hierarchical (functional) arrangement of modules is that it leads to flexibility in the software. For instance, if the Calculate Deduction module is to select deduction rates from multiple rates, the module can be split into two in the next level: one to get the selection and another to calculate. Even after this change, the Calculate Deduction module would return the same value.

3. Refine the Structure Chart

Expand the structure chart further by using the different levels of the DFD. Factor down till you reach modules that correspond to processes that access a source / sink or data stores. Once this is ready, other features of the software like error handling, security, etc. have to be added. A module name should not be used for two different modules. If the same module is to be used in more than one place, it will be demoted down such that fan-in can be done from the higher levels. Ideally, the name should sum up the activities done by the module and its subordinates.
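The hierarchy described in the first-cut chart can be mirrored as a call structure. The following is a hedged sketch in Python; the module names follow the modules named in the text, but the timesheet fields, rates and calculations are invented placeholders.

# Hedged sketch: the payroll first-cut structure chart as a call hierarchy.

def get_valid_timesheet(employee_id):
    # Afferent branch: obtain and validate input.
    return {"employee": employee_id, "hours": 160, "overtime": 5, "leave": 1}

def calculate_pay(timesheet, hourly_rate=10.0):
    # Central transform.
    return (timesheet["hours"] + 1.5 * timesheet["overtime"]) * hourly_rate

def calculate_deduction(timesheet, daily_rate=80.0):
    # Central transform; could later be split into "select rate" + "calculate".
    return timesheet["leave"] * daily_rate

def print_payslip(employee_id, net_pay):
    # Efferent branch.
    print(f"Employee {employee_id}: net pay = {net_pay:.2f}")

def produce_payroll(employee_ids):
    # Dummy boss module coordinating afferent, transform and efferent modules.
    for emp in employee_ids:
        ts = get_valid_timesheet(emp)
        net = calculate_pay(ts) - calculate_deduction(ts)
        print_payslip(emp, net)

produce_payroll(["E-001", "E-002"])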


4. Verify the Structure Chart vis-à-vis the DFD

Because of the orientation towards the end product, the software, the finer details of how data gets originated and stored (as they appear in the DFD) are not explicit in the structure chart. Hence the DFD may still be needed along with the structure chart to understand the data flow while creating the low-level design.

5. Constructing the Structure Chart (An Illustration)

Some characteristics of the structure chart as a whole give clues about the quality of the system. Page-Jones (1988) suggests the following guidelines for a good decomposition of a structure chart:

1. Avoid decision splits. Keep span-of-effect within scope-of-control: i.e. a module can affect only those modules which come under its control (all subordinates, immediate ones and modules reporting to them, etc.).
2. Errors should be reported from the module that both detects an error and knows what the error is.
3. Restrict the fan-out (number of subordinates to a module) of a module to seven. Increase fan-in (number of immediate bosses for a module). High fan-ins (in a functional way) improve reusability.

How do we measure the goodness of the design?

To measure design quality, we use coupling (the degree of interdependence between two modules) and cohesion (the measure of the strength of functional relatedness of elements within a module). Page-Jones gives a good metaphor for understanding coupling and cohesion: consider two cities A & B, each having a big soda plant, C & D respectively. The employees of C live predominantly in city B and the employees of D in city A. What will happen to the highway traffic between cities A & B? Placing the employees of a plant in the city where the plant is situated improves the situation (reduces the traffic). This is the basis of cohesion (which also automatically improves coupling).

Coupling

Coupling is the measure of the strength of association established by a connection from one module to another. Minimizing connections between modules also minimizes the paths along which changes and errors can propagate into other parts of the system (the ripple effect). The use of global variables can result in an enormous number of connections between the modules of a program.

The degree of coupling between two modules is a function of several factors:

1. How complicated the connection is.
2. Whether the connection refers to the module itself or something inside it.
3. What is being sent or received.

We aim for loose coupling. We may come across a case of module A calling module B, with no parameters passed between them (neither sent nor received). This strictly should be positioned at the zero point on the scale of coupling (lower than Normal Coupling itself). Two modules A & B are normally coupled if A calls B, B returns to A, and all information passed between them is by means of parameters passed through the call mechanism. The other two types of coupling (Common and Content) are abnormal coupling and not desired. Even in Normal Coupling we should take care of the following issues:

1. Data coupling can become complex if the number of parameters communicated between modules is large.
2. In Stamp coupling there is always a danger of over-exposing irrelevant data to the called module. (Beware of the meaning of composite data: a name represented as an array of characters may not qualify as composite data. The meaning of composite data is the way it is used in the application, NOT the way it is represented in a program.)
3. What-to-do flags are not desirable when they come from a called module (inversion of authority): it is all right for the calling module to know the internals of the called module, but not the other way around.

When data is passed up and down merely to send it to a desired module, the data will have no meaning at various levels. This leads to tramp data. Hybrid coupling results when different parts of flags are used (misused?) to mean different things in different places (usually we may brand it as control coupling, but hybrid coupling complicates the connections between modules). Two modules may be coupled in more than one way. In such cases, their coupling is defined by the worst coupling type they exhibit.
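Two of these pitfalls can be illustrated with a hedged sketch (Python used purely for illustration; the record layout and function names are invented): stamp coupling that over-exposes a composite record, and a what-to-do flag returned by a called module.

# Hedged sketch: stamp coupling and a what-to-do flag.

employee = {"id": "E-1", "name": "A. Person", "salary": 3000, "address": "..."}

# Stamp coupling: the whole record is exposed although only the salary is
# needed; the called module now depends on the record's layout.
def raise_salary_stamp(emp_record, percent):
    emp_record["salary"] *= (1 + percent / 100.0)

# Data coupling: only the needed elementary items are passed.
def raise_salary_data(salary, percent):
    return salary * (1 + percent / 100.0)

# Inversion of authority: the called module returns a what-to-do flag,
# forcing the caller to act on the callee's internal decision.
def validate(record):
    return "REJECT_AND_LOG" if record["salary"] < 0 else "OK"

raise_salary_stamp(employee, 10)
print(raise_salary_data(3000, 10))
print(validate(employee))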

Q4.6 Questions
1. Explain structured design methodology in detail.
2. Mention the strategies for converting the DFD into a structure chart.

4.7 DETAILED DESIGN

Software design is the process of defining the architecture, components, interfaces, and other characteristics of a system or component [Ref 2]. Detailed design is the process of defining the lower-level components, modules and interfaces. Production is the process of:

1. Programming - coding the components;
2. Integrating - assembling the components;
3. Verifying - testing modules, subsystems and the full system.


The physical model outlined in the Architectural Design phase is extended to produce a structured set of component specifications that are consistent, coherent and complete. Each specification defines the functions, inputs, outputs and internal processing of the component. The software components are documented in the Detailed Design Document (DDD). The DDD is a comprehensive specification of the code. It is the primary reference for maintenance staff in the Transfer phase (TR phase) and the Operations and Maintenance phase (OM phase). The main outputs of the DD phase are:

1. Source and object code;
2. Detailed Design Document (DDD);
3. Software User Manual (SUM);
4. Software Project Management Plan for the TR phase (SPMP/TR);
5. Software Configuration Management Plan for the TR phase (SCMP/TR);
6. Software Quality Assurance Plan for the TR phase (SQAP/TR);
7. Acceptance Test specification (SVVP/AT).

Progress reports, configuration status accounts, and audit reports are also outputs of the phase. These should always be archived. The detailed design and production of the code is the responsibility of the developer. Engineers developing systems with which the software interfaces may be consulted during this phase. User representatives and operations personnel may observe system tests. DD phase activities must be carried out according to the plans defined in the AD phase (DD01). Progress against plans should be continuously monitored by project management and documented at regular intervals in progress reports.
Figure 4.17: DD phase activities

Figure 4.17 is an idealized representation of the flow of software products in the DD phase. The reader should be aware that some DD phase activities can occur in parallel, as separate teams build the major components and integrate them. Teams may progress at different rates; some may be engaged in coding and testing while others are designing. The following subsections discuss the activities shown in Figure 4.17.

Detailed design

Design standards must be set at the start of the DD phase by project management to coordinate the collective efforts of the team. This is especially necessary when development team members are working in parallel. The developers must first complete the top-down decomposition of the software started in the AD phase (DD02) and then outline the processing to be carried out by each component. Developers must continue the structured approach and not introduce unnecessary complexity. They must build defenses against likely problems.

Developers should verify detailed designs in design reviews, level by level. Review of the design by walkthrough or inspection before coding is a more efficient way of eliminating design errors than testing. The developer should start the production of the user documentation early in the DD phase.
This is especially important when the HCI component is significantly large: writing the SUM forces the developer to keep the user's view continuously in mind.

Definition of design standards

Wherever possible, standards and conventions used in the AD phase should be carried over into the DD phase. They should be documented in part one of the DDD. Standards and conventions should be defined for:

1. Design methods
2. Documentation
3. Naming components
4. Computer Aided Software Engineering (CASE) tools
5. Error handling


Detailed design methods

Detailed design first extends the architectural design to the bottom-level components. Developers should use the same design method that they employed in the AD phase. The Architectural Design phase discusses:

1. Structured Design
2. Object Oriented Design
3. Jackson System Development
4. Formal Methods

The next stage of design is to define the module processing. This is done by methods such as:

1. Flowcharts
2. Stepwise refinement
3. Structured programming
4. Program design languages (PDLs)
5. Pseudo-coding
6. Jackson Structured Programming (JSP)

Flowcharts

A flowchart is a control flow diagram in which suitably annotated geometrical figures are used to represent operations, data or equipment, and arrows are used to indicate the sequential flow from one to another. It should represent the processing.


Flowcharts are an old software design method. A box is used to represent process steps and diamonds are used to represent decisions. Arrows are used to represent control flow. Flowcharts predate structured programming and they are difficult to combine with a stepwise refinement approach. Flowcharts are not well supported by tools and so their maintenance can be a burden. Although directly related to module internals, they cannot be integrated with the code, unlike PDLs and pseudo-code. For all these reasons, flowcharts are no longer a recommended technique for detailed design.

Stepwise refinement

Stepwise refinement is the most common method of detailed design. The guidelines for stepwise refinement are:

1. Start from functional and interface specifications;
2. Concentrate on the control flow;
3. Defer data declarations until the coding phase;
4. Keep steps of refinement small to ease verification;
5. Review each step as it is made.

Stepwise refinement is closely associated with structured programming.
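A hedged sketch of the idea (Python used purely for illustration; the reporting problem and the names are invented): an abstract step is first written as a stub and then refined into concrete control flow in a later step.

# Hedged sketch: two steps of stepwise refinement.

# Refinement step 1: the design is expressed as abstract steps (stubs).
def produce_report(records):
    valid = validate(records)        # abstract step, refined below
    summary = summarize(valid)       # abstract step, refined below
    return summary

# Refinement step 2: each abstract step is refined into control flow.
def validate(records):
    return [r for r in records if r.get("amount", 0) >= 0]

def summarize(records):
    total = 0
    for r in records:                # concentrate on the control flow first
        total += r["amount"]
    return {"count": len(records), "total": total}

print(produce_report([{"amount": 10}, {"amount": -3}, {"amount": 5}]))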

Structured programming

Structured programming is commonly associated with the name of E.W. Dijkstra. It is the original structured method and proposed:

1. Hierarchical decomposition;
2. The use of only sequence, selection and iteration constructs;
3. Avoiding jumps in the program.

Myers emphasizes the importance of writing code with the intention of communicating with people instead of machines. The Structured Programming method emphasizes that simplicity is the key to achieving correctness, reliability, maintainability and adaptability. Simplicity is achieved through using only three constructs: sequence, selection and iteration. Other constructs are unnecessary. Structured programming and stepwise refinement are inextricably linked. The goal of refinement is to define a procedure that can be encoded in the sequence, selection and iteration constructs of the selected programming language.

Structured programming also lays down the following rules for module construction:

1. Each module should have a single entry and exit point;
2. Control flow should proceed from the beginning to the end;
3. Related code should be blocked together, not dispersed around the module;
4. Branching should only be performed under prescribed conditions (e.g. on error).


The use of control structures other than sequence, selection and iteration introduces unnecessary complexity. The whole point of banning GOTO was to prevent the definition of complex control structures. Jumping out of loops causes control structures to be only partially contained within others and makes the code fragile. Modern block-structured languages, such as Pascal and Ada, implement the principles of structured programming and enforce the three basic control structures. Ada supports branching only at the same logical level and not to arbitrary points in the program.

The basic rules of structured programming can lead to control structures being nested too deeply. It can be quite difficult to follow the logic of a module when the control structures are nested more than three or four levels. Three common ways to minimize this problem are to:

1. Define more lower-level modules;
2. Put the error-handling code in blocks separate from the main code;
3. Branch to the end of the module on detecting an error.
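The sketch below (Python, purely illustrative; the order-processing checks are invented) shows the three basic constructs and how deep nesting can be reduced by branching to the end of the module when an error is detected, as the list above suggests:

# Hedged sketch: sequence, selection and iteration with shallow nesting.

def process_orders(orders):
    # Single entry point; errors cause an early branch to the module's end.
    if not orders:                      # selection
        return "no orders"              # branch to end on error

    total = 0
    for order in orders:                # iteration
        if order < 0:                   # selection: detect bad data
            return "invalid order"      # branch to end instead of nesting deeper
        total += order                  # sequence

    return f"processed {len(orders)} orders, total {total}"

print(process_orders([10, 20, 5]))
print(process_orders([10, -1]))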

Program Design Languages

A Program Design Language (PDL) is used to develop, analyze and document a program design. A PDL is often obtained from the essential features of a high-level programming language. A PDL may contain special constructs and verification protocols. A PDL should provide support for:

1. Abstraction
2. Decomposition
3. Information hiding
4. Stepwise refinement
5. Modularity
6. Algorithm design
7. Data structure design
8. Connectivity
9. Adaptability

Adoption of a standard PDL makes it possible to define interfaces to CASE tools and programming languages. The ability to generate executable statements from a PDL is desirable. Using an entire language as a PDL increases the likelihood of tool support. However, it is important that a PDL be simple. Developers should establish conventions for the features of a language that are to be used in detailed design. PDLs are the preferred detailed design method on larger projects, where the existence of standards and the possibility of tool support make them more attractive than pseudo-code.

Pseudo-code

Pseudo-code is a combination of programming language constructs and natural language used to express a computer program design. Pseudo-code is distinguished from the code proper by the presence of statements that do not compile. Such statements only indicate what needs to be coded. They do not affect the module logic. Pseudo-code is an informal PDL that gives the designer greater freedom of expression than a PDL, at the sacrifice of tool support. Pseudo-code is acceptable for small projects and in prototyping, but on larger projects a PDL is definitely preferable.

Jackson Structured Programming

Jackson Structured Programming (JSP) is a program design technique that derives a program's structure from the structures of its input and output data. The JSP dictum is that the program structure should match the data structure. In JSP, the basic procedure is to:

1. Consider the problem environment and define the structures for the data to be processed;
2. Form a program structure based on these data structures;

3. Define the tasks to be performed in terms of the elementary operations available, and allocate each of those operations to suitable components in the program structure.

The elementary operations (i.e. statements in the programming language) must be grouped into one of the three composite operations: sequence, iteration and selection. These are the standard structured programming constructs, giving the technique its name. JSP is suitable for the detailed design of software that processes sequential streams of data whose structure can be described hierarchically. JSP has been quite successful for information systems applications. Jackson System Development (JSD) is a descendant of JSP. If used, JSD should be started in the SR phase.

Programming languages

Programming languages are best classified by their features and application domains. Classification by generation (e.g. 3GL, 4GL) can be very misleading, because the generation of a language can be completely unrelated to its age (e.g. Ada, LISP). Even so, study of the history of programming languages can give useful insights into the applicability and features of particular languages. The following classes of programming languages are widely recognized:

1. Procedural languages
2. Object-oriented languages
3. Functional languages
4. Logic programming languages


Application-specific languages based on database management systems are not discussed here because of their lack of generality. Control languages, such as those used to command operating systems, are also not discussed, for similar reasons. Procedural languages are sometimes called imperative languages or algorithmic languages. Functional and logic programming languages are often collectively called declarative languages, because they allow programmers to declare what is to be done rather than how.

Procedural languages

A procedural language should support the following features:

1. Sequence (composition)
2. Selection (alternation)
3. Iteration
4. Division into modules

The traditional procedural languages such as COBOL and FORTRAN support these features. The sequence construct, also known as the composition construct, allows programmers to specify the order of execution. This is trivially done by placing one statement after another, but can imply the ability to branch (e.g. GOTO). The sequence construct is used to express the dependencies between operations: statements that come later in the sequence depend on the results of previous statements. The sequence construct is the most important feature of procedural languages, because the program logic is embedded in the sequence of operations, instead of in a data model (e.g. the trees of Prolog, the lists of LISP and the tables of RDBMS languages).

The selection construct, also known as the condition or alternation construct, allows programmers to evaluate a condition and take appropriate action (e.g. IF-THEN and CASE statements). The iteration construct allows programmers to construct loops (e.g. DO...). This saves repetition of instructions. The module construct allows programmers to identify a group of instructions and utilize them elsewhere (e.g. CALL...). It saves repetition of instructions and permits hierarchical decomposition.

Some procedural languages also support:

1. Block structuring
2. Strong typing
3. Recursion

Block structuring enforces the structured programming principle that modules should have only one entry point and one exit point. Pascal, Ada and C support block structuring.

Strong typing requires the data type of each data object to be declared. This stops operators being applied to inappropriate data objects and prevents the interaction of data objects of incompatible data types (e.g. when the data type of a calling argument does not match the data type of a called argument). Ada and Pascal are strongly typed languages. Strong typing helps a compiler to find errors and to compile efficiently. Recursion allows a module to call itself (e.g. module A calls module A), permitting greater economy in programming. Pascal, Ada and C support recursion.

Object-oriented languages

An object-oriented programming language should support all structured programming language features plus:

1. Inheritance
2. Polymorphism
3. Messages

Examples of object-oriented languages are Smalltalk and C++. Inheritance is the technique by which modules can acquire capabilities from higher-level modules, i.e. simply by being declared as members of a class, they have all the attributes and services of that class. Polymorphism is the ability of a process to work on different data types, or for an entity to refer at runtime to instances of specific classes. Polymorphism cuts down the amount of source code required. Ideally, a language should be completely polymorphic, so that the need to formulate sections of code for each data type is removed. Polymorphism implies support for dynamic binding.

Object-oriented programming languages use messages to implement interfaces. A message encapsulates the details of an action to be performed. A message is sent from a sender object to a receiver object to invoke the services of the latter.
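A brief hedged sketch of inheritance, polymorphism and message passing (Python is used purely for illustration; the class names are invented, and the examples do not come from the text):

# Hedged sketch: inheritance, polymorphism and messages.

class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):              # inheritance: Circle acquires Shape's role
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

# Polymorphism: the same "message" (area) is sent to objects of different
# classes, and each receiver decides at runtime how to service it.
for shape in [Circle(1.0), Square(2.0)]:
    print(shape.area())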

Functional languages

Functional languages, such as LISP and ML, support declarative structuring. Declarative structuring allows programmers to specify only what is required, without stating how it is to be done. It is an important feature, because it means standard processing capabilities are built into the language (e.g. information retrieval). With declarative structuring, procedural constructs are unnecessary. In particular, the sequence construct is not used for the program logic. An underlying information model (e.g. a tree or a list) is used to define the logic. If some information is required for an operation, it is automatically obtained from the information model. Although it is possible to make one operation depend on the result of a previous one, this is not the usual style of programming.

Functional languages work by applying operators (functions) to arguments (parameters). The arguments themselves may be functional expressions, so that a functional program can be thought of as a single expression applying one function to another. For example, if DOUBLE is the function defined as DOUBLE(X) = X + X, and APPLY is the function that executes another function on each member of a list, then the expression APPLY(DOUBLE, [1, 2, 3]) returns [2, 4, 6].

Programs written in functional languages appear very different from those written in procedural languages, because assignment statements are absent. Assignment is unnecessary in a functional language, because all relationships are implied by the information model. Functional programs are typically short, clear and specification-like, and are suitable both for specification and for rapid implementation, typically of design prototypes. Modern compilers have reduced the performance problems of functional languages. A special feature of functional languages is their inherent suitability for parallel implementation, but in practice this has been slow to materialize.
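The DOUBLE/APPLY example above can be mirrored in any language with higher-order functions. A hedged rendering in Python (used only to make the idea concrete; it is not a functional language in the strict sense) is:

# Hedged sketch: the DOUBLE/APPLY example with higher-order functions.

def double(x):
    return x + x

def apply_to_each(func, values):
    # APPLY: execute another function on each member of a list.
    return [func(v) for v in values]

print(apply_to_each(double, [1, 2, 3]))   # -> [2, 4, 6]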

Logic programming languages

Prolog is the foremost logic programming language. Logic programming languages implement some form of classical logic. Like functional languages, they have a declarative structure. In addition they support:

1. Backtracking
2. Backward chaining
3. Forward chaining

Backtracking is the ability to return to an earlier point in a chain of reasoning when an earlier conclusion is subsequently found to be false. It is especially useful when traversing a knowledge tree. Backtracking is incompatible with assignment, since assignment cannot be undone because it erases the contents of variables. Languages which support backtracking are, of necessity, non-procedural.

Backward chaining starts from a hypothesis and reasons backwards to the facts that cause the hypothesis to be true. For example, if the fact A and hypothesis B are chained in the expression IF A THEN B, backward chaining enables the truth of A to be deduced from the truth of B (note that A may be only one of a number of reasons for B to be true).

Forward chaining is the opposite of backward chaining. Forward chaining starts from a collection of facts and reasons forward to a conclusion. For example, if the fact X and conclusion Y are chained in the expression IF X THEN Y, forward chaining enables the truth of Y to be deduced from the truth of X. Forward chaining means that a change to a data item is automatically propagated to all the dependent items. It can be used to support data-driven reasoning.

Tools for detailed design

CASE tools

In all but the smallest projects, CASE tools should be used during the DD phase. Like many general purpose tools (such as word processors and drawing packages), CASE tools should provide:

1. A windows, icons, menu and pointer (WIMP) style interface for the easy creation and editing of diagrams;
2. A what-you-see-is-what-you-get (WYSIWYG) style interface that ensures that what is created on the display screen closely resembles what will appear in the document.

NOTES

2.

Method-specific CASE tools offer the following features not offered by general purpose tools:
1. Enforcement of the rules of the methods;
2. Consistency checking;
3. Easy modification;
4. Automatic traceability of components to software requirements;
5. Configuration management of the design information;
6. Support for abstraction and information hiding;
7. Support for simulation.


Configuration managers

Configuration management of the physical model is essential. The model should evolve from baseline to baseline as it develops in the DD phase, and enforcement of procedures for the identification, change control and status accounting of the model is necessary. In large projects, configuration management tools should be used for the management of the model database.

Precompilers

A precompiler generates code from PDL specifications. This is useful in design, but less so in later stages of development unless software faults can be easily traced back to PDL statements.

Production tools

A range of production tools are available to help programmers develop, debug, build and test software. Table 4.2 lists the tools in order of their appearance in the production process.

Table 4.2: Production tools

Q4.7 Questions
1. Explain the detailed design methods in detail.
2. Explain logic programming languages in detail.
3. Write a note on the tools used for detailed design.


4.8 MODULE SPECIFICATIONS

The detailed design module specification

The purpose of a DDD is to describe the detailed solution to the problem stated in the SRD. The DDD must be an output of the DD phase. The DDD must be complete, accounting for all the software requirements in the SRD. The DDD should be sufficiently detailed to allow the code to be implemented and maintained. Components (especially interfaces) should be described in sufficient detail to be fully understood.

A DDD is clear if it is easy to understand. The structure of the DDD must reflect the structure of the software design, in terms of the levels and components of the software. The natural language used in a DDD must be shared by all the development team. The DDD should not introduce ambiguity. Terms should be used accurately.

A diagram is clear if it is constructed from consistently used symbols, icons, or labels, and is well arranged. Diagrams should have a brief title, and be referenced by the text which they illustrate. Diagrams and text should complement one another and be as closely integrated as possible. The purpose of each diagram should be explained in the text, and each diagram should explain aspects that cannot be expressed in a few words. Diagrams can be used to structure the discussion in the text.

The DDD must be consistent. There are several types of inconsistency:
1. Different terms used for the same thing
2. The same term used for different things
3. Incompatible activities happening simultaneously
4. Activities happening in the wrong order


Where a term could have multiple meanings, a single meaning should be defined in a glossary, and only that meaning should be used in the DDD. Duplication and overlap lead to inconsistency. Clues to inconsistency are a single functional requirement tracing to more than one component. Methods and tools help consistency to be achieved. Consistency should be preserved both within diagrams and between diagrams in the same document. Diagrams of different kinds should be immediately distinguishable. A DDD is modifiable if changes to the document can be made easily, completely, and consistently. Good tools make modification easier, although it is always necessary to check for unpredictable side effects of changes. For example a global string search
and replace capability can be very useful, but developers should always guard against unintended changes. Diagrams, tables, spreadsheets, charts and graphs are modifiable if they are held in a form which can readily be changed. Such items should be prepared either within the word processor, or by a tool compatible with the word processor. For example, diagrams may be imported automatically into a document: typically, the print process scans the document for symbolic markers indicating graphics and other files. Where graphics or other data are prepared on the same hardware as the code, it may be necessary to import them by other means. For example, a screen capture utility may create bitmap files ready for printing. These may be numbered and included as an annex. Projects using methods of this kind should define conventions for handling and configuration management of such data.

The software detailed design specification is as follows:

1.0 Introduction
This section provides an overview of the entire design document. This document describes all data, architectural, interface and component-level design for the software.
1.1 Goals and objectives: Overall goals and software objectives are described.
1.2 Statement of scope: A description of the software is presented. Major inputs, processing functionality, and outputs are described without regard to implementation detail.
1.3 Software context: The software is placed in a business or product line context. Strategic issues relevant to context are discussed. The intent is for the reader to understand the big picture.
1.4 Major constraints: Any business or product line constraints that will impact the manner in which the software is to be specified, designed, implemented or tested are noted here.

2.0 Data design
A description of all data structures including internal, global, and temporary data structures.
2.1 Internal software data structure: Data structures that are passed among components of the software are described.


2.2 Global data structure: Data structures that are available to major portions of the architecture are described.
2.3 Temporary data structure: Files created for interim use are described.
2.4 Database description: Database(s) created as part of the application is (are) described.

3.0 Architectural and component-level design
A description of the program architecture is presented.
3.1 Program structure: A detailed description of the program structure chosen for the application is presented.
3.1.1 Architecture diagram: A pictorial representation of the architecture is presented.
3.1.2 Alternatives: A discussion of other architectural styles considered is presented. Reasons for the selection of the style presented in Section 3.1.1 are provided.
3.2 Description for Component n: A detailed description of each software component contained within the architecture is presented. Section 3.2 is repeated for each of n components.
3.2.1 Processing narrative (PSPEC) for component n: A processing narrative for component n is presented.
3.2.2 Component n interface description: A detailed description of the input and output interfaces for the component is presented.
3.2.3 Component n processing detail: A detailed algorithmic description for each component is presented. Section 3.2.3 is repeated for each of n components.
3.2.3.1 Interface description
3.2.3.2 Algorithmic model (e.g., PDL)
3.2.3.3 Restrictions/limitations
3.2.3.4 Local data structure
3.2.3.5 Performance issues
3.2.3.6 Design constraints
3.3 Software interface description: The software's interface(s) to the outside world are described.
3.3.1 External machine interfaces: Interfaces to other machines (computers or devices) are described.

3.3.2 External system interfaces: Interfaces to other systems, products, or networks are described.
3.3.3 Human interface: An overview of any human interfaces to be designed for the software is presented. See Section 4.0 for additional detail.

4.0 User interface design
A description of the user interface design of the software is presented.
4.1 Description of the user interface: A detailed description of the user interface, including screen images or a prototype, is presented.
4.1.1 Screen images: Representation of the interface from the user's point of view.
4.1.2 Objects and actions: All screen objects and actions are identified.
4.2 Interface design rules: Conventions and standards used for designing/implementing the user interface are stated.
4.3 Components available: GUI components available for implementation are noted.
4.4 UIDS description: The user interface development system is described.

5.0 Restrictions, limitations, and constraints
Special design issues which impact the design or implementation of the software are noted here.

6.0 Testing issues
Test strategy and preliminary test case specification are presented in this section.
6.1 Classes of tests: The types of tests to be conducted are specified, including as much detail as is possible at this stage. Emphasis here is on black-box and white-box testing.
6.2 Expected software response: The expected results from testing are specified.
6.3 Performance bounds: Special performance requirements are specified.
6.4 Identification of critical components: Those components that are critical and demand particular attention during testing are identified.

7.0 Appendices
Presents information that supplements the design specification.
7.1 Requirements traceability matrix: A matrix that traces stated components and data structures to software requirements is developed.
7.2 Packaging and installation issues: Special considerations for software packaging and installation are presented.


7.3 Design metrics to be used: A description of all design metrics to be used during the design activity is noted here.
7.4 Supplementary information (as required)

Q4.8 Questions
1. Explain module specification in detail.


4.9 DESIGN VERIFICATION


There are a few techniques available for verifying that the detailed design is consistent with the system design. The focus of verification in the detailed design phase is on showing that the detailed design meets the specification laid out in the system design. Validating that the system as designed is consistent with the requirements of the system is not stressed during the detailed design. There are four such methods:
1. Design reviews
2. Design walkthroughs
3. Critical design review
4. Consistency checkers

Design reviews

Detailed designs should be reviewed top-down, level by level, as they are generated during the DD phase. Reviews may take the form of walkthroughs or inspections. Walkthroughs are useful on all projects for informing and passing on expertise. Inspections are efficient methods for eliminating defects before production begins. Two types of walkthrough are useful:
1. Code reading;
2. What-if? analysis.

In a code reading, reviewers trace the logic of a module from beginning to end. In what-if? analysis, component behavior is examined for specific inputs. Static analysis tools evaluate modules without executing them. Static analysis functions are built in to some compilers. Output from static analysis tools may be input to a code review. When the detailed design of a major component is complete, a critical design review must certify its readiness for implementation (DD10). The project leader should participate in these reviews, with the team leader and team members concerned.
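As an illustration of the what-if? analysis mentioned above, consider the small assumed function below; the reviewer examines its behaviour for a few chosen inputs, paying particular attention to boundary and out-of-range values. The function, its grade boundaries and the chosen inputs are inventions for this example, not part of any project baseline.

program WhatIfDemo;

function Classify(score: integer): char;
{ Returns a grade letter for an exam score expected in the range 0..100. }
begin
  if score >= 75 then
    Classify := 'A'
  else if score >= 50 then
    Classify := 'B'
  else
    Classify := 'C';
end;

begin
  { What-if? walkthrough: the reviewer records the expected behaviour for specific inputs. }
  writeln('score 100 -> ', Classify(100));  { expected 'A' (upper boundary) }
  writeln('score  75 -> ', Classify(75));   { expected 'A' (A/B boundary) }
  writeln('score  74 -> ', Classify(74));   { expected 'B' }
  writeln('score   0 -> ', Classify(0));    { expected 'C' (lower boundary) }
  writeln('score  -5 -> ', Classify(-5))    { out-of-range input: raises a review question }
end.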

The development team should hold walkthroughs and internal reviews of a product before its formal review. After production, the DD Review (DD/R) must consider the results of the verification activities and decide whether to transfer the software. Normally, only the code, DDD, SUM and SVVP/AT undergo the full technical review procedure involving users, developers, management and quality assurance staff. The Software Project Management Plan (SPMP/TR), Software Configuration Management Plan (SCMP/TR), and Software Quality Assurance Plan (SQAP/TR) are usually reviewed by management and quality assurance staff only.

In summary, the objective of the DD/R is to verify that:
1. The DDD describes the detailed design clearly, completely and in sufficient detail to enable maintenance and development of the software by qualified software engineers not involved in the project;
2. Modules have been coded according to the DDD;
3. Modules have been verified according to the unit test specifications in the SVVP/UT;
4. Major components have been integrated according to the ADD;
5. Major components have been verified according to the integration test specifications in the SVVP/IT;
6. The software has been verified against the SRD according to the system test specifications in the SVVP/ST;
7. The SUM explains what the software does and instructs the users how to operate the software correctly;
8. The SVVP/AT specifies the test designs, test cases and test procedures so that all the user requirements can be validated.

The DD/R begins when the DDD, SUM, and SVVP, including the test results, are distributed to participants for review. A problem with a document is described in a Review Item Discrepancy (RID) form. A problem with code is described in a Software Problem Report (SPR). Review meetings are then held that have the documents, RIDs and SPRs as input. A review meeting should discuss all the RIDs and SPRs and decide an action for each. The review meeting may also discuss possible solutions to the problems raised by them. The output of the meeting includes the processed RIDs, SPRs and Software Change Requests (SCR).

The DD/R terminates when a disposition has been agreed for all the RIDs. Each DD/R must decide whether another review cycle is necessary, or whether the TR phase can begin.

Design Walkthroughs

A design walkthrough is a manual method of verification. The definition and the use of walkthroughs change from organization to organization. A design walkthrough is done in an informal meeting called by the designer or the leader of the designer's group. The walkthrough group is usually small and contains, along with the designer, a group leader and/or another designer of the group. In a walkthrough the designer explains the logic step by step and the members of the group ask questions, point out possible errors or seek clarifications. A beneficial side effect of a walkthrough is that in the process of articulating the design in detail, the designer himself can uncover some of the errors. Walkthroughs are essentially a form of peer review. Due to their informal nature, they are usually not as effective as design reviews.

Critical design review

The purpose of the critical design review is to ensure that the detailed design satisfies the specifications laid down during the system design. It is desirable to detect and remove design errors early, as the cost of removing them later can be considerably more than the cost of removing them at design time. Detecting the errors in the detailed design is the aim of the critical design review.

The critical design review process is similar to the other reviews, in that a group of people get together to discuss the design with the aim of revealing design errors or undesirable properties. The review group includes, besides the author of the detailed design, a member of the system design team, the programmer responsible for ultimately coding the modules under review and an independent software quality engineer. The review can be held in the same manner as the requirement review or the system design review. That is, each member studies the design beforehand and, with the aid of a checklist, marks out items that the reviewer feels are incorrect or need clarification. The members ask questions and the designer tries to explain the situation. During the course of the discussion design errors are revealed. As with any review, it should be kept in mind that the aim of the meeting is to uncover the errors, not to fix them.

Fixing is done later. The designer should not be put in a defensive position. The meeting should end with a list of action items, which are later acted upon by the designer.

The use of checklists, as with other reviews, is considered important for the success of the review. The checklist is a means of focusing the discussion or the search for errors. Checklists can be used by each member during private study of the design and also during the review meeting. For best results the checklist should be tailored to the project at hand, to uncover problem-specific errors. Shown below is a sample checklist.

A Sample checklist:
1. Does each of the modules in the system design exist in the detailed design?
2. Are there analyses to demonstrate that the performance requirements can be met?
3. Are all the assumptions explicitly stated, and are they acceptable?
4. Are all relevant aspects of the system design reflected in the detailed design?
5. Have the exceptional conditions been handled?
6. Are all the data formats consistent with the system design?
7. Is the design structured and does it conform to local standards?
8. Are the sizes of data structures estimated? Are provisions made to guard against overflow?
9. Is each statement specified in natural language easily codable?
10. Are the loop termination conditions properly specified?
11. Are the conditions in the loops ok?
12. Is the nesting proper?
13. Is the module logic too complex?
14. Are the modules highly cohesive?

Consistency Checkers

Design reviews and walkthroughs are manual processes. The people involved in the review and walkthrough determine the errors in the design. If the design is specified in PDL or some other formally defined design language, it is possible to detect some design defects by using consistency checkers. Consistency checkers are essentially compilers that take as input the design specified in a design language (PDL). Clearly, they cannot produce executable code, as the inner syntax of PDL allows natural language; however, the module interface specifications
are specified formally. A consistency checker can ensure that any module invoked or used by a given module actually exists in the design and that the interface used by the caller is consistent with the interface definition of the called module. It can also check whether the global data items used are indeed defined globally in the design. Depending on the precision and syntax of the design language, consistency checkers can produce other information as well. In addition, these tools can be used to compute the complexity of the module and other metrics, since these metrics are based on alternate and loop constructs, which have a formal syntax in PDL. The tradeoff here is that the more formal the design language, the more checking can be done during design, but the cost is that the design language becomes less flexible and tends towards a programming language.

Q4.9 Questions
1. Explain in detail how design verification is carried out to check the completeness and consistency of the design.


4.10 DESIGN METRICS


Many design metrics have been proposed to quantify the complexity of the design that has been developed. Some of them are listed below and discussed in turn.
1. McCabe's Cyclomatic Complexity
2. Number of Parameters
3. Number of Modules
4. Data Bindings
5. Module Coupling
6. Cohesion Metric

McCabe's Cyclomatic Complexity

Cyclomatic complexity is the most widely used member of a class of static software metrics. Cyclomatic complexity may be considered a broad measure of soundness and confidence for a program. Introduced by Thomas McCabe in 1976, it measures the number of linearly independent paths through a program module. This measure provides a single ordinal number that can be compared to the complexity of other programs. Cyclomatic complexity is often referred to simply as program complexity, or as McCabe's complexity. It is often used in concert with other software metrics. As one of the more widely accepted software metrics, it is intended to be independent of language and language format.

Cyclomatic complexity has also been extended to encompass the design and structural complexity of a system. The cyclomatic complexity of a software module is calculated from a connected graph of the module (that shows the topology of control flow within the program):

    Cyclomatic complexity (CC) = E - N + p

    where E = the number of edges of the graph
          N = the number of nodes of the graph
          p = the number of connected components

To actually count these elements requires establishing a counting convention. The complexity number is generally considered to provide a stronger measure of a program's structural complexity than is provided by counting lines of code. The figure shown below is a connected graph of a simple program with a cyclomatic complexity of seven. Nodes are the numbered locations, which correspond to logic branch points; edges are the lines between the nodes.

Figure 4.18: Connected Graph of a Simple Program
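As a purely numeric illustration of the formula given above, the minimal Pascal sketch below evaluates CC = E - N + p; the edge, node and component counts are assumed values chosen for the example, not the counts of the graph in Figure 4.18.

program CycloDemo;
const
  E = 9;   { number of edges of the graph (assumed value) }
  N = 7;   { number of nodes of the graph (assumed value) }
  p = 1;   { number of connected components (assumed value) }
var
  cc: integer;
begin
  { Apply the formula as given in the text: CC = E - N + p }
  cc := E - N + p;
  writeln('Cyclomatic complexity = ', cc)   { prints 3 for these assumed counts }
end.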


A large number of programs have been measured, and ranges of complexity have been established that help the software engineer determine a program's inherent risk and stability. The resulting calibrated measure can be used in development, maintenance, and reengineering situations to develop estimates of risk, cost, or program stability. Studies show a correlation between a program's cyclomatic complexity and its error frequency. A low cyclomatic complexity contributes to a program's understandability and indicates it is amenable to modification at lower risk than a more complex program. A module's cyclomatic complexity is also a strong indicator of its testability. A common application of cyclomatic complexity is to compare it against a set of threshold values. One such threshold set is in Table 4.3 below.

Table 4.3: Cyclomatic Complexity

Cyclomatic Complexity      Risk Evaluation
1-10                       A simple program, without much risk
11-20                      More complex, moderate risk
21-50                      Complex, high risk program
greater than 50            Untestable program (very high risk)


Cyclomatic complexity can be calculated manually for small program suites, but automated tools are preferable for most operational environments. For automated graphing and complexity calculation, the technology is language-sensitive; there must be a front-end source parser for each language, with variants for dialectic differences.

Cyclomatic complexity is usually only moderately sensitive to program change. Other measures may be very sensitive. It is common to use several metrics together, either as checks against each other or as part of a calculation set. Other metrics bring out other facets of complexity, including both structural and computational complexity, as shown in Table 4.4 below.

Table 4.4: Other Facets of Complexity

Complexity Measurement          Primary Measure of
Halstead Complexity Measures    Algorithmic complexity, measured by counting operators and operands
Henry and Kafura metrics        Coupling between modules (parameters, global variables, calls)
Bowles metrics                  Module and system complexity; coupling via parameters and global variables
Troy and Zweben metrics         Modularity or coupling; complexity of structure (maximum depth of structure chart); calls-to and called-by
Ligier metrics                  Modularity of the structure chart

Marciniak offers a more complete description of complexity measures and the complexity factors they measure.

Number of Parameters

This metric tries to capture the coupling between modules. The assumption is that understanding modules with a large number of parameters requires more time and effort, and that modifying modules with a large number of parameters is likely to have side effects on other modules.

Number of Modules

Here the complexity of the design is measured by the number of modules called (estimating the complexity of maintenance). Two terms are used in this context: fan-in and fan-out. Fan-in is the number of modules that call a particular module; fan-out is how many other modules it calls. High fan-in means many modules depend on this module. High fan-out means the module depends on many other modules, which makes understanding harder and maintenance more time-consuming.

Data Bindings

Data binding is also one of the design metrics used to measure complexity. A data binding is a triplet (p, X, q) where p and q are modules and X is a variable within the scope of both p and q. There are three types of data binding metric, as listed below.
1. Potential data binding
2. Used data binding
3. Actual data binding


Potential data binding: X is declared in both p and q, but the metric does not check whether it is accessed. It reflects the possibility that p and q might communicate through the shared variable.

Used data binding: a potential data binding where p and q both use X. It is harder to compute than potential data binding and requires more information about the internal logic of a module.

Actual data binding: a used data binding where p assigns a value to X and q references it. It is the hardest to compute but indicates an information flow from p to q.

Cohesion Metric

Construct the flow graph for the module; each vertex is an executable statement. For each node, record the variables referenced in that statement. Then determine how many independent paths of the module go through the different statements. If a module has high cohesion, most of the variables will be used by statements in most paths. Cohesion is highest when all the independent paths use all the variables in the module.

Module Coupling

Module coupling and cohesion are commonly accepted criteria for measuring the maintenance quality of a software design. Coupling describes the inter-module connections while cohesion represents the intra-module relationship of components. As a basic idea of system theory, reducing coupling and increasing cohesion has been recognized as one of the core concepts of the structural design of software. It is therefore natural to expect design metrics based on the criteria of minimizing coupling and maximizing cohesion of modules. However, several surveys of existing design metrics show that the most widely used design metrics for the inter-module relation are based on information flow rather than on the coupling-cohesion criteria.

Many experienced people believe that the existing concepts of module coupling and cohesion are abstract and cannot be quantified. It is not surprising that, with the current understanding of the coupling concept, it is hard to measure, in practice, the quality of software in terms of module coupling. Despite the difficulty, people still try to measure inter-module dependence using variants of coupling such as the number of inflows and outflows, or data binding.

Some researchers have tried to refine coupling levels while others have attempted to simplify them. There has also been an effort to quantify coupling based on its levelling, which can be considered a sort of pseudo measurement. What is certain is that the concept of coupling is important, whereas its measurement is difficult.

Having studied the design phase of the SDLC, the next unit covers the next stage of the software development process: coding and testing. The coding standards and guidelines, the coding metrics and code verification are discussed. The testing phase is also discussed in detail, covering the different types of testing, the testing principles and guidelines, and the testing metrics.

Q4.10 Questions
1. Explain the various design metrics in detail.
2. What is the usefulness of metrics in the design of software?
3. Write the code for the simulation of the traffic signal system and also verify the same.

REFERENCES
1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, McGraw Hill International, 6th edition, 2005.
2. Pankaj Jalote, An Integrated Approach to Software Engineering, Second edition, Springer Verlag, 1997.


UNIT V
5 INTRODUCTION

The goal of the coding or programming phase is to translate the design of the system produced during the design phase into code in a given programming language, which can be executed by a computer and which performs the computation specified by the design. For a given design, the aim is to implement the design in the best possible manner.

5.1 LEARNING OBJECTIVES


1. The coding practices
2. The coding strategies
3. Code Verification
4. Coding Metrics
5. Unit and Integration Testing
6. Testing Strategies
7. Types of testing
8. Functional vs Structural Testing
9. Reliability Estimation

5.2 CODING
The coding phase profoundly affects both testing and maintenance. The time spent in coding is a small percentage of the total software cost, while testing and maintenance consume the major percentage. Thus, it should be clear that the goal during coding should not be to reduce the implementation cost, but to reduce the cost of the later phases, even if it means that the cost of this phase has to increase. In other words, the goal during this phase is not to simplify the job of the programmer. Rather, the goal should be to simplify the job of the tester and the maintainer.


During implementation, it should be kept in mind that programs should not be constructed so that they are easy to write, but so that they are easy to read and understand. There are many different criteria for judging a program, including readability, size of the program, execution time, and required memory. For our purposes, ease of understanding and modification should be the basic goal of the programming activity. This means that simplicity is desirable, while cleverness and complexity are undesirable.

Q5.2 Questions
1. What are the goals of coding software?
2. What are the points to be borne in mind while starting the coding of the software?

5.3 PROGRAMMING PRACTICES


The primary goal of the coding phase is to translate the given detailed design into source code in a given programming language, such that the code is simple, easy to test, and easy to understand and modify. Simplicity and clarity are the properties a programmer should strive for in programs. Good programming is a skill that comes only by practice. However, much can be learned from the experience of others. Good programming is a practice largely independent of the target programming language, although some well-structured languages like Pascal, Ada, and Modula make the job of programming simpler. Some of the good programming practices which help in producing good quality software are discussed in detail below.

5.4 TOP-DOWN AND BOTTOM-UP

The design of a software system consists of a hierarchy of modules. The main program invokes its subordinate modules, which in turn invoke their subordinate modules and so on. Given a design of a system, there are two ways in which the design can be implemented: top-down and bottom-up. In a top-down implementation, the implementation starts from the top of the hierarchy and then proceeds to the lower levels. First the main module is implemented, then its subordinates are implemented, and then their subordinates, and so on. In a bottom-up implementation, the process is the reverse. The development starts with
implementing the modules that are at the bottom of the hierarchy. The implementation proceeds through the higher levels, until it reaches the top.

Top-down and bottom-up implementation should not be confused with top-down and bottom-up design. Here the design is being implemented, and if the design is fairly detailed and complete, its implementation can proceed in either the top-down or the bottom-up manner, even if the design was produced in a top-down manner. Which of the two is used mostly affects testing. For any large system, implementation and testing are done in parts: system components are separately built and tested before they are integrated to form the complete system. Testing can also proceed in a bottom-up or a top-down manner. It is most reasonable to have implementation proceed in a top-down manner if testing is being done in a top-down manner. On the other hand, if bottom-up testing is planned, then bottom-up implementation should be preferred.

For systems where the design is not detailed enough, some of the design decisions have to be made during development. This may be true, for example, when building a prototype. In such cases top-down development may be preferable to aid the design while the implementation is progressing. Many complex systems like operating systems or networking software systems are organized as layers. In a layered architecture, a layer provides some services to the layers above, which use these services to implement the services that they provide. For a layered architecture, it is generally best for the implementation to proceed in a bottom-up manner.
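A minimal sketch of top-down implementation is shown below; the module names ReadInput, ComputeResults and PrintReport are invented for the illustration. The top-level module is written first and its subordinates are provided as stubs, so the top level can be compiled and exercised before the lower levels exist.

program TopDownSketch;

{ Stubs for the subordinate modules; their real bodies are written later. }
procedure ReadInput;
begin
  writeln('ReadInput: stub called')
end;

procedure ComputeResults;
begin
  writeln('ComputeResults: stub called')
end;

procedure PrintReport;
begin
  writeln('PrintReport: stub called')
end;

{ The top-level module is implemented and tested first. }
begin
  ReadInput;
  ComputeResults;
  PrintReport
end.

Bottom-up implementation would reverse this: the leaf modules would be completed and tested first, with simple driver programs standing in for the missing top level.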


5.5 STRUCTURED PROGRAMMING


A program has a static structure as well as dynamic structure. The static structure is the structure of the text or the program, which is usually just a linear organization of statements of the program. The dynamic structure of the program is the sequence in which the statements are executed during the program execution. The goal of structured programming is to write a program such that its dynamic structure is the same as its static structure. In other words, the program should be written in a manner such that during execution its control flow is linearized and follows the linear organization of the program text. Programs, in which the statements are executed linearly, as they are organized in the program text, are easier to understand, test and modify. Since the program text is

organized as a sequence of statements, the close correspondence between execution and text structure makes a program more understandable. However, the main reason why structured programming was promulgated was the formal verification of programs. During verification, a program is considered to be a sequence of executable statements and verification proceeds step by step, considering one statement in the statement list at a time. Implied in these verification methods is the assumption that during execution, the statements will be executed in the sequence in which they are organized in the program text. If this assumption is satisfied, the task of verification becomes easier.

Clearly, no meaningful program can be written as a simple sequence of statements without any branching or repetition. For structured programming, a statement is not just a simple assignment statement, but could be a structured statement. The key property is that the statement should have a single entry and a single exit. That is, during execution, the execution of the statement should start from one defined point and terminate at a single defined point. The most commonly used single-entry, single-exit statements are:

Selection:   if B then S1 else S2
             if B then S1
Iteration:   while B do S
             repeat S until B
Sequencing:  S1; S2; S3;

It can be shown that these three basic constructs are sufficient to program any conceivable algorithm. Modern languages have other such constructs, like the CASE statement. Often the use of constructs other than the ones that constitute the theoretically minimal set can simplify the logic of a program. Hence, from a practical point of view, programs should be written such that, as far as possible, single-entry, single-exit control constructs are used.

The basic goal, as we have tried to emphasize, is to make the logic of the program simple to understand. No hard and fast rule can be formulated that will be applicable under all circumstances. Structured programming practice forms a good basis and guideline for writing programs clearly.
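As a small illustration of these constructs, the following program uses only sequencing, a while loop and an if statement, so its control flow follows the textual order of the statements. The summation task and the constant N are assumptions made purely for this example.

program StructuredSum;
const
  N = 10;
var
  i, sum: integer;
begin
  { Sequencing: statements execute in textual order }
  sum := 0;
  i := 1;
  { Iteration: while B do S, with one entry and one exit }
  while i <= N do
  begin
    sum := sum + i;
    i := i + 1
  end;
  { Selection: if B then S1 else S2 }
  if sum > 0 then
    writeln('sum of 1..', N, ' = ', sum)
  else
    writeln('nothing to sum')
end.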


5.6 INFORMATION HIDING


To reduce the coupling between modules of a system, it is best that different modules be allowed to access and modify only those data items that are needed by them. The other data items should be hidden from such modules, and the modules should not be allowed to access these data items. Language and operating system mechanisms should preferably enforce this restriction. Thus modules are given access to data items on a need-to-know basis. In principle, every module should be allowed to access only some specified data that it requires. This level of information hiding is usually not practical, and most languages do not support this level of access restriction.

One form of information hiding that is supported by many modern programming languages is data abstraction. With support for data abstraction, a package or a module is defined which encapsulates the data. Some operations are defined by the module on the encapsulated data. Other modules that are outside this module can only invoke these predefined operations on the encapsulated data. The advantage of this form of data abstraction is that the data is entirely in the control of the module in which the data is encapsulated. Other modules cannot access or modify the data, and the operations that can access and modify it are also a part of this module.

Many of the older languages, like Pascal, C, and FORTRAN, do not provide mechanisms to support data abstraction. With such languages data abstraction can be supported only by a disciplined use of the language. That is, the access restrictions have to be imposed by the programmers; the language does not provide them. For example, to implement a data abstraction of a STACK in Pascal, one method is to define a record containing all the data items needed to implement the STACK, and then define functions and procedures on variables of this type. A possible definition of the record and the interface of the push operation are given below.

type
  stk = record
    elts: array [1..100] of integer;
    Top: 1..100;
  end;

procedure push (var s: stk; i: integer);

Note that in implementing information hiding in languages like Pascal, the language does not impose any access restrictions. In the example of the stack above,
the structure of a variable s, declared of the type stk, could be accessed from procedures other than the ones that have been defined for the stack. That is why discipline on the part of the programmers is needed to emulate data abstraction. Regardless of whether the language provides constructs for data abstraction or not, it is desirable to support data abstraction in cases where the data and operations on the data are well defined. Data abstraction is one way to increase the clarity of the program, and it helps in clean partitioning of the program into pieces that can be separately implemented and understood.
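A possible completion of the STACK abstraction sketched above is shown below, with push and pop bodies added. The init procedure, and the widening of Top to 0..100 so that an empty stack can be represented, are assumptions of this sketch and not part of the original definition. Note that programmer discipline is still required: nothing in the language stops other modules from touching s.elts directly.

program StackDemo;
type
  stk = record
    elts: array [1..100] of integer;
    top: 0..100     { 0 means the stack is empty (sketch assumption) }
  end;

procedure init(var s: stk);
begin
  s.top := 0
end;

procedure push(var s: stk; i: integer);
begin
  if s.top < 100 then
  begin
    s.top := s.top + 1;
    s.elts[s.top] := i
  end
  { a real module would report overflow here }
end;

procedure pop(var s: stk; var i: integer);
begin
  if s.top > 0 then
  begin
    i := s.elts[s.top];
    s.top := s.top - 1
  end
  { a real module would report underflow here }
end;

var
  s: stk;
  v: integer;
begin
  init(s);
  push(s, 10);
  push(s, 20);
  pop(s, v);
  writeln('popped ', v)   { prints 20 }
end.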

5.7 PROGRAMMING STYLE


It is impossible to provide an exhaustive list of what to do and what not to do in order to produce simple and readable code. Here we list some general rules which are usually applicable.

Names: Selecting module and variable names is often not considered important by novice programmers. Only when one starts reading programs written by others, where the variable names are cryptic and not representative, does one realize the importance of selecting proper names. Most variables in a program reflect some entity in the problem domain, and the modules reflect some process. Variable names should be closely related to the entity they represent, and module names should reflect their activity. It is bad practice to choose cryptic names or totally unrelated names. It is also bad practice to use the same name for multiple purposes.

Control Constructs: It is desirable that, as much as possible, single-entry, single-exit constructs be used. It is also desirable to use a few standard control constructs rather than using a wide variety of constructs, just because they are available in the language.

Gotos: Gotos should be used sparingly and in a disciplined manner. Only when the alternative to using gotos is more complex should gotos be used. In any case, alternatives must be thought of before finally using a goto. If a goto must be used, a forward transfer is more acceptable than a backward jump. Use of gotos for exiting a loop or for invoking error handlers is quite acceptable.

Information Hiding: Information hiding should be supported where possible. Only the access functions for the data structures should be made visible, while hiding the data structure behind these functions.

User Defined Types: Modern languages allow the users to define types like
the enumerated type. When such facilities are available, they should be exploited where applicable. For example, when working with dates, a type can be defined for the day of the week. In Pascal this is done as follows:

type days = (Mon, Tue, Wed, Thur, Fri, Sat, Sun);

Variables can then be declared of this type. Using such types makes the program much clearer than defining codes for each of the days and then working with the codes.

Nests: The different control constructs, particularly the if-then-else, can be nested. If the nesting becomes too deep, the programs become harder to understand. In the case of deeply nested if-then-else, it is often difficult to determine the if statement to which a particular else clause is associated. Where possible, deep nesting should be avoided, even if it means a little inefficiency. For example, consider the following construct of nested if-then-elses:

if C1 then S1
else if C2 then S2
else if C3 then S3
else if C4 then S4;

If the different conditions are disjoint, then this structure can be converted into the following structure:

if C1 then S1;
if C2 then S2;
if C3 then S3;
if C4 then S4;

This sequence of statements will produce the same result as the earlier sequence, but it is much easier to understand. The price is a little inefficiency, in that the latter conditions will be evaluated even if an earlier condition evaluates to true, while in the previous case the condition evaluation stops when one evaluates to true. Other such situations can be constructed where alternative program segments can be constructed to avoid a deep level of nesting. In general, if the price is only a little inefficiency, it is more desirable to avoid deep nesting.

Module Size: A programmer should carefully examine any routine with very few statements or with too many statements. Large modules often will not be functionally cohesive, and too small modules might be incurring unnecessary overhead. There can
be no hard and fast rule about module size; the guiding principle should be cohesion and coupling.

Module Interface: A module having a complex interface should be carefully examined. Such modules might not be functionally cohesive and might be implementing multiple functions. As a rule of thumb, any module whose interface has more than five parameters should be carefully examined and, if possible, broken into multiple modules with a simpler interface.

Program Layout: How the program is organized and presented can have a great effect on the readability of programs. Proper indentation, blank spaces and parentheses should be employed to enhance the readability of programs. Automated tools are available to pretty print a program, but it is good practice to maintain a clear layout of programs.

Side Effects: When a module is invoked, it sometimes has side effects of modifying the program state beyond the modification of parameters listed in the module interface definition, for example, modifying global variables. Such side effects should be avoided where possible, and if a module has side effects, they should be properly documented.

Robustness: A program is robust if it does something planned even for exceptional conditions. A program might encounter exceptional conditions in such forms as incorrect input, the incorrect value of some variable, and overflow. A program should try to handle such situations. In general, a program should check the validity of inputs, where possible, and should check for possible overflow of the data structures. If such situations do arise, the program should not just crash or core dump, but should produce some meaningful message and exit gracefully.
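A small sketch of the robustness guideline follows, assuming a program that expects a percentage in the range 0..100; the variable name and limits are illustrative assumptions, and the {$I-}/IOResult idiom used to trap bad input is specific to Turbo and Free Pascal.

program RobustInput;
var
  value: integer;
begin
  write('Enter a percentage (0..100): ');
  {$I-} readln(value); {$I+}     { suppress the run-time abort on bad input }
  if IOResult <> 0 then
  begin
    writeln('Error: input was not a number. Exiting.');
    halt(1)
  end;
  if (value < 0) or (value > 100) then
  begin
    writeln('Error: ', value, ' is outside the range 0..100. Exiting.');
    halt(1)
  end;
  writeln('Accepted value: ', value)
end.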

5.8 INTERNAL DOCUMENTATION


In the coding phase, the output document is the code itself. However, some amount of internal documentation in the code can be extremely useful in enhancing the understandability of programs. Internal documentation of programs is done by the use of comments. All languages provide means for writing comments in programs. Comments are textual statements that are meant for the program reader and are not executed. Comments, if properly written and kept consistent with the code, can be invaluable during maintenance.


The purpose of comments is not to explain in English the logic of the program; the program itself is the best documentation for the details of the logic. The comments should explain what the code is doing, not how it is doing it. This means that a comment is not needed for every line of the code, as is often done by novice programmers who are taught the virtues of comments. Comments should be provided for blocks of code, particularly those parts of code which are hard to follow. In most cases only comments for the modules need be provided.

Providing comments for modules is most useful, as modules form the unit of testing, compiling, verification and modification. Comments for a module are often called the prologue for the module. It is best to standardize the structure of the prologue of the module. It is desirable that the prologue contain the following information:
1. Module functionality, or what the module is doing.
2. Parameters and their purpose.
3. Assumptions about the inputs, if any.
4. Global variables accessed and/or modified in the module.
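A possible prologue written to this standard is sketched below; the module FindMax, its parameters and the surrounding program are invented purely for illustration.

program PrologueDemo;
type
  intarray = array [1..100] of integer;

{ Module     : FindMax                                             }
{ Function   : returns the largest element of a[1..n].             }
{ Parameters : a (input)    - array of integers to be searched     }
{              n (input)    - number of valid elements in a        }
{              max (output) - largest value found in a[1..n]       }
{ Assumptions: n is at least 1; elements beyond a[n] are ignored.  }
{ Globals    : none accessed or modified.                          }
procedure FindMax(var a: intarray; n: integer; var max: integer);
var
  k: integer;
begin
  max := a[1];
  for k := 2 to n do
    if a[k] > max then
      max := a[k]
end;

var
  data: intarray;
  m: integer;
begin
  data[1] := 4; data[2] := 9; data[3] := 2;
  FindMax(data, 3, m);
  writeln('maximum = ', m)   { prints 9 }
end.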


An explanation of the parameters (whether they are input only, output only, or both input and output; why they are needed by the module; how the parameters are modified) can be quite useful during maintenance. Stating how the global data is affected and what the side effects of a module are is also very useful during maintenance. In addition to the information given above, other information can often be included, depending on the local coding standards. Examples include the name of the author, the date of compilation, and the last date of modification.

It should be pointed out that prologues are useful only if they are kept consistent with the logic of the module. If the module is modified, then the prologue should also be modified, if necessary. A prologue that is inconsistent with the internal logic of the module is probably worse than having no prologue at all.

Q5.8 Questions
1. What is information hiding? State its importance.
2. Write a note on structured programming.
3. Explain the top-down and bottom-up strategies.
4. Explain in detail the programming style with relevant examples.
5. Write in detail on internal documentation.

5.9 CODE VERIFICATION


Verification of the output of the coding phase is primarily intended for detecting errors introduced during this phase. That is, the goal of verification of the code produced is to show that the code is consistent with the design it is supposed to implement. It should be pointed out that by verification we do not mean proving correctness of programs, which for our purposes is only one method of program verification.

Program verification methods fall into two categories: static and dynamic methods. In dynamic methods the program is executed on some test data, and the outputs of the program are examined to determine if there are any errors present. Hence, dynamic techniques follow the traditional pattern of testing, and the common notion of testing refers to this technique. Static techniques, on the other hand, do not involve actual program execution on actual numeric data, though they may involve some form of conceptual execution. In static techniques, the program is not compiled and then executed, as is the case in testing. Common forms of static techniques are program verification, code reading, code reviews and walkthroughs, and symbolic execution. In static techniques the errors are often detected directly, unlike dynamic techniques where only the presence of an error is detected. This aspect of static testing makes it quite attractive and economical.

It has been found that the types of errors detected by the two categories of verification techniques are different. The types of errors detected by static techniques are either often not found by testing, or it is more cost effective to detect these errors by static methods. Consequently, testing and static methods are complementary in nature and both should be employed for reliable software.

5.10 CODE READING


Code reading involves careful reading of the code by the programmer to detect any discrepancies between the design specifications and the actual implementation. It involves determining the abstraction of a module and then comparing it with its specifications. The process is just the reverse of design. In design, we start from an abstraction and move towards more details. In code reading we start from the details of a program and move towards an abstract description. The process of code reading is best done by reading the code inside-out, starting with the inner-most structure of the module. First determine its abstract behavior and
specify the abstraction. Then the higher level structure is considered, with the inner structure replaced by its abstraction. This process is continued until we reach the module or the program being read. At that time the abstract behavior of the program/module will be known, which can then be compared to the specifications to determine any discrepancies. Code reading is very useful and can detect errors often not revealed by testing. Reading in the manner of stepwise-abstraction also forces the programmer to code in a manner conducive to this process, which will lead to well-structured programs. Code reading is sometimes called desk review.
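A small, assumed example of reading code inside-out is sketched below: the specification, the function count and the abstractions recorded as comments are all invented for illustration. The reader abstracts the inner loop first, then the whole function, and compares the result with the stated specification.

program CodeReadingDemo;

{ Specification: count(a, n) returns how many of a[1..n] are negative. }
type
  intarray = array [1..100] of integer;

function count(var a: intarray; n: integer): integer;
var
  k, c: integer;
begin
  c := 0;
  { Inner structure: the loop below can be abstracted as        }
  {   "c = number of indices k in 1..n with a[k] < 0"           }
  for k := 1 to n do
    if a[k] < 0 then
      c := c + 1;
  { Outer structure: the function returns c, so its abstraction }
  { matches the specification above; no discrepancy is found.   }
  count := c
end;

var
  a: intarray;
begin
  a[1] := -3; a[2] := 5; a[3] := -1;
  writeln(count(a, 3))   { prints 2 }
end.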


5.11 STATIC ANALYSIS


Analysis of programs by methodically analyzing the program text is called static analysis. Static analysis is usually performed mechanically with the aid of software tools. During static analysis the program itself is not executed, but the program text is the input to the tools. The aim of the static analysis tools is to detect errors or potential errors, or to generate information about the structure of the program that can be useful for documentation or understanding of the program. Different kinds of static analysis tools can be designed to perform different types of analyses.

Many compilers perform some limited static analysis. More often, tools built explicitly for static analysis are used. Static analysis can be very useful for exposing errors that may escape other techniques. As the analysis is performed with the aid of software tools, static analysis is a very cost-effective way of discovering errors. An advantage is that static analysis sometimes detects the errors themselves, not just the presence of errors as in testing. This saves the effort of tracing the error from the data that reveals the presence of errors. Furthermore, static analysis can also provide warnings against potential errors, and can provide insight into the structure of the program. It is also useful for determining violations of local programming standards, which the standard compilers will be unable to detect. Extensive static analysis can considerably reduce the effort later needed during testing.

Data flow analysis is one form of static analysis that concentrates on the use of data by programs and detects some data flow anomalies. Data flow anomalies are suspicious uses of data in a program. In general, data flow anomalies are technically not errors and can go undetected by the compiler. However, they are often a symptom of an error, caused by carelessness in typing or an error in coding. At the very
least, the presence of data flow anomalies is a cause for concern which should be properly addressed.

An example of a data flow anomaly is the live variable problem, in which a variable is assigned some value but the variable is then not used in any later computation. Such a live variable and the assignment to it are clearly redundant. Another simple example is having two assignments to a variable without using the value of the variable between the two assignments. In this case the first assignment is redundant. For example, consider the simple case of the code segment shown below.

x := a;
:        { x does not appear on the right-hand side of any assignment }
x := b;

Clearly, the first assignment statement is useless. The question is, why is that statement in the program? Perhaps the programmer meant to write y := b in the second statement and mistyped y as x. In that case, detecting this anomaly and directing the programmer's attention to it can save a considerable amount of effort in testing and debugging.

In addition to revealing anomalies, data flow analysis can provide valuable information for documentation of programs. For example, data flow analysis can provide information about which variables are modified on invoking a procedure in the caller program, and about the values of the variables used in the called procedure. This analysis can identify aliasing, which occurs when different variables represent the same data object. This information can be useful during maintenance to ensure that there are no undesirable side effects of some modifications being made to a procedure.

Other examples of data flow anomalies are unreachable code, unused variables and unreferenced labels. Unreachable code is that part of the code to which there is no feasible path; there is no possible execution in which it can be executed. Technically this is not an error, and a compiler will at most generate a warning. The program behavior during execution may also be consistent with its specifications. However, the presence of unreachable code is often a sign of a lack of proper understanding of the program by the programmer, which suggests that the presence of errors may be likely. Often unreachable code comes into existence when an existing program is modified. In that situation unreachable code may signify undesired or unexpected side effects of the modifications. Unreferenced labels and unused variables are like unreachable code in
that they are technically not errors, but they are often symptoms of errors; thus their presence often implies the presence of errors.

Data flow analysis is usually performed by representing a program as a graph, sometimes called the flow graph. The nodes in a flow graph represent statements of a program, while the edges represent control paths from one statement to another. Correspondence between the nodes and statements is maintained, and the graph is analyzed to determine different relationships between the statements. By the use of different algorithms, different kinds of anomalies can be detected. Many of the algorithms to detect anomalies can be quite complex and require a lot of processing time. For example, the execution time of algorithms to detect unreachable code increases as the square of the number of nodes in the graph. Consequently, this analysis is often limited to modules or to a collection of some modules, and is rarely performed on complete systems.

To reduce the processing times of the algorithms, the search of a flow graph has to be carefully organized. Another way to reduce the time for executing the algorithms is to reduce the size of the flow graph. Flow graphs can get extremely large for large programs, and transformations are often performed on the flow graph to reduce its size. The most common transformation is to have each node represent a sequence of contiguous statements that have no branches in them, thus representing a block of code that will be executed together. Another transformation often done is to have each node represent a procedure or function. In that case the resulting graph is often called the call graph, in which an edge from one node n to another node m represents the fact that the execution of the module represented by n directly invokes the module m.

Other uses of static analysis

Data flow analysis is a technique for statically analyzing a program to reveal some types of anomalies. Other forms of static analysis to detect different errors or anomalies can also be performed. Here we list some of the common uses of static analysis tools.

An error often made, especially when different teams are developing different parts of the software, is mismatched parameter lists, where the argument list of a module invocation is different in number or type from the parameters of the invoked module. This can be detected by a compiler if no separate compilation is allowed and the entire program text is available to the compiler. However, if the programs are separately developed and compiled, which is almost always the case with large software developments, this error will not be detected. A static analyzer with access to the different
Such errors can also be detected during code reviews, but it is more economical to detect them mechanically. An extension of this is detecting calls to nonexistent program modules. Essentially, the interfaces of different modules that are developed and compiled separately can easily be checked for mutual consistency through static analysis. In some limited cases static analysis can also detect infinite loops, or potentially infinite loops, and illegal recursion.

Static analyzers can also produce various documents that are useful for maintenance or for improved understanding of the program. The first is a cross-reference of where different variables and constants are used. Looking at a cross-reference one can often detect subtle errors, such as several constants defined to represent the same entity; for example, the value of pi could be defined as a constant in different routines with slightly different values. A report with cross-references is useful for detecting such errors. To keep such reports manageable, it is often more useful to limit them to constants and global variables.

Information about the frequency of use of different constructs of the programming language can also be obtained by static analysis. Such information is useful for statistical analyses of programs, such as determining which types of modules are more prone to defects. Another use is in evaluating complexity: some complexity measures are a function of the frequency of occurrence of different types of statements, and this information is needed to compute them. Static analysis can also produce the structure chart of a program. The actual structure chart of a system is a useful documentation aid, and it can be compared with the structure chart produced during system design to determine what changes were made to the design during the coding phase. A static nesting hierarchy of procedures can also easily be produced by static analysis.

A programming language imposes some coding restrictions of its own, but an organization may impose further restrictions on the use of particular features for reliability, portability or efficiency reasons. Examples include mixed-type arithmetic, type conversion, the use of machine-dependent features, and excessive use of gotos. Such restrictions cannot be checked by the compiler, but static analysis can be used to enforce these standards. These violations can also be caught in code reviews, but it is more efficient and economical to let a program do the checking.
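To give a rough feel for how mechanical such checks can be, the sketch below scans a toy list of statement records and flags a variable that is assigned twice without an intervening use (the redundant-assignment anomaly discussed earlier). It is only a sketch under stated assumptions: the Stmt record, the variable names and the single rule it enforces are invented for illustration and are not part of any real analyzer.

import java.util.*;

// Toy data-flow check: flag a variable that is assigned twice in a row
// without being used in between (a redundant-assignment anomaly).
public class RedundantAssignmentCheck {

    // A statement is modelled as one defined variable plus the variables it uses.
    record Stmt(int line, String defines, List<String> uses) {}

    public static void main(String[] args) {
        List<Stmt> program = List.of(
            new Stmt(1, "x", List.of("a")),      // x := a
            new Stmt(2, "y", List.of("z")),      // y := z   (x not used here)
            new Stmt(3, "x", List.of("b")));     // x := b   -> line 1 is redundant

        Map<String, Integer> pendingDef = new HashMap<>(); // var -> line of unused definition
        for (Stmt s : program) {
            for (String u : s.uses()) pendingDef.remove(u);  // a use "consumes" the definition
            Integer earlier = pendingDef.put(s.defines(), s.line());
            if (earlier != null)
                System.out.println("Anomaly: value assigned to '" + s.defines()
                        + "' at line " + earlier + " is never used before line " + s.line());
        }
    }
}

Running this on the three-statement program prints a warning about the assignment on line 1, which is exactly the kind of message that would direct the programmer's attention to the mistyped variable.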

5.12 SYMBOLIC EXECUTION


This is another approach in which the program is not executed with actual data. Instead, the program is executed symbolically, with symbolic data: the inputs to the program are not numbers but symbols representing the input data, which can take different values. The execution of the program proceeds like normal execution, except that it deals with values that are not numbers but formulas over the symbolic input values. The outputs are symbolic formulas over these input values, and these formulas can be checked to see whether the program will behave as expected. This approach goes by different names, such as symbolic execution, symbolic evaluation and symbolic testing.

Although the concept is simple and promising for verifying programs, we will see that performing symbolic execution of even modest-size programs is very difficult. The problem arises mainly from the conditional execution of statements in programs. Because a symbolic condition cannot usually be evaluated to true or false without substituting actual values for the symbols, a case analysis becomes necessary and all possible cases of a condition have to be considered. In programs with loops this results in an unmanageably large number of cases.

To introduce the basic concepts of symbolic execution, let us consider a simple program without any conditional statements. A simple function to compute the product of three positive integers is shown below.

Function product (x, y, z : integer) : integer;
Var tmp1, tmp2 : integer;
Begin
    tmp1 := x * y;
    tmp2 := y * z;
    product := tmp1 * tmp2 / y;
end;

Let the symbolic inputs to the function be xi, yi and zi. We start executing the function with these inputs. The aim is to determine the symbolic values of the different variables in the program after executing each statement, so that eventually we can determine the result of executing the function. The trace of the symbolic execution of the function is shown below.
Trace of the symbolic execution of the product function:

After statement     x     y     z     tmp1      tmp2      product
1                   xi    yi    zi    ?         ?         ?
4                   xi    yi    zi    xi*yi     ?         ?
5                   xi    yi    zi    xi*yi     yi*zi     ?
6                   xi    yi    zi    xi*yi     yi*zi     (xi*yi)*(yi*zi)/yi

After statement 6 the value of product is (xi*yi)*(yi*zi)/yi. Since this is a symbolic value, we simplify the formula. Simplification yields product = xi*yi*yi*zi/yi = xi*yi*zi, the desired result. In this example there is only one path in the function, and the symbolic execution is equivalent to checking the function for all possible values of x, y and z. Essentially, with only one path and an acceptable symbolic result for that path, we can claim that the program is correct.
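To make the bookkeeping concrete, the following sketch mimics the trace above by carrying symbolic values as formula strings instead of numbers. The representation (plain strings, no algebraic simplification) is an assumption made purely for illustration; a real symbolic executor would manipulate and simplify the expressions.

import java.util.*;

// Minimal sketch of symbolic execution of the 'product' function:
// variables hold formulas (strings) built from the symbolic inputs xi, yi, zi.
public class SymbolicProduct {
    public static void main(String[] args) {
        Map<String, String> state = new LinkedHashMap<>();
        state.put("x", "xi");
        state.put("y", "yi");
        state.put("z", "zi");

        // tmp1 := x * y
        state.put("tmp1", "(" + state.get("x") + "*" + state.get("y") + ")");
        // tmp2 := y * z
        state.put("tmp2", "(" + state.get("y") + "*" + state.get("z") + ")");
        // product := tmp1 * tmp2 / y
        state.put("product", state.get("tmp1") + "*" + state.get("tmp2") + "/" + state.get("y"));

        // Prints (xi*yi)*(yi*zi)/yi, which simplifies by hand to xi*yi*zi.
        System.out.println("product = " + state.get("product"));
    }
}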

Path Conditions

In symbolic execution, when dealing with conditional execution, it is not sufficient to just look at the state of the program variables at different statements, because a statement will be executed only if the inputs satisfy certain conditions, under which the execution of the program follows a path that includes that statement. To capture this concept in symbolic execution we need the notion of a path condition. The path condition at a statement gives the conditions the inputs must satisfy for an execution to follow the path on which that statement is executed. A path condition is a Boolean expression over the symbolic inputs and never contains any program variables; it is represented in a symbolic execution by pc. Every symbolic execution begins with pc initialized to true. As conditions are encountered, the path condition takes different values for the different cases, which correspond to different paths in the program. For example, symbolic execution of an IF statement of the form

if C then S1 else S2

requires two cases to be considered, corresponding to the two possible paths: one where C evaluates to true and S1 is executed, and the other where C evaluates to false and S2 is executed. For the first case we set the path condition to

pc <- pc & C

which is the path condition for the statements in S1. For the second case we set the path condition to

pc <- pc & ~C

which is the path condition for the statements in S2.

Loops and Symbolic Execution Trees

The different paths followed during symbolic execution can be represented by an execution tree. A node in this tree represents the execution of a statement, while an arc represents the transition from one statement to another. For each IF statement where both paths are followed, there are two arcs from the node corresponding to the IF statement, one labeled T (true) and the other F (false), for the then and else paths. At each branching the path condition is also often shown in the tree. Note that the execution tree is different from the flow graph of a program: in the flow graph a node represents a statement, while in the execution tree a node represents one execution of a statement.

[Figure: symbolic execution tree for a function that computes the maximum of its inputs. Each branch is labeled with its path condition, starting from pc = true and strengthened at each case, for example (xi > yi) and then (xi > yi) & (xi <= zi); each leaf records the value returned as the maximum along that path.]

The execution tree of a program has some interesting properties. Each leaf in the tree represents a path that will be followed for some input values: for each leaf there exist actual numerical inputs such that the sequence of statements executed with those inputs is exactly the path from the root to that leaf. Another property of the symbolic execution tree is that the path conditions associated with two different leaves are distinct; there is no execution for which both path conditions are true. This follows from the nature of sequential programming languages: in a single execution we cannot follow two different paths.
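The sketch below shows, in the simplest possible way, how a symbolic executor forks the path condition at a conditional. It symbolically "executes" the fragment if (x > y) then max := x else max := y and enumerates both leaves with their path conditions; the string representation and the tiny Path record are assumptions made only for this illustration.

import java.util.*;

// Sketch: forking the path condition at an IF statement during symbolic execution.
// Fragment analysed:  if (x > y) then max := x  else max := y
public class PathConditionDemo {
    record Path(String pc, String maxValue) {}

    public static void main(String[] args) {
        String pc = "true";                       // every execution starts with pc = true
        List<Path> leaves = new ArrayList<>();

        // Branch taken: the condition holds, so strengthen pc with it.
        leaves.add(new Path(pc + " & (xi > yi)", "xi"));
        // Branch not taken: strengthen pc with the negated condition.
        leaves.add(new Path(pc + " & !(xi > yi)", "yi"));

        // The two path conditions are mutually exclusive: no single input
        // satisfies both, which is why one concrete execution follows one leaf.
        for (Path p : leaves)
            System.out.println("pc = " + p.pc() + "  =>  max = " + p.maxValue());
    }
}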

Because execution trees can be infinite, symbolic execution should not be viewed as a tool for proving programs correct: a program performing symbolic execution may never stop. For this reason a more practical approach is to build tools in which only some of the paths are symbolically executed, with the user selecting the paths to be executed.

A symbolic execution tool can also be useful for selecting test cases to obtain branch or statement coverage. Suppose the results of testing reveal that a certain path has not been executed; to exercise it, input test data has to be selected carefully so that the given path is indeed followed. Selecting such test cases can be quite difficult. A symbolic execution tool helps here: by symbolically executing that particular path, the path condition of the leaf node for that path can be determined, and input test data can then be selected using this path condition. The test data that will execute the path are exactly the data that satisfy the path condition.

Proving Correctness

Most verification techniques aim to reveal errors in programs, since the ultimate goal is to make programs correct by removing those errors. In proof of correctness, the aim is to prove a program correct, so correctness is established directly, unlike the other techniques in which correctness is never really established but only implied by the failure to detect any errors. Proofs are perhaps more valuable during program construction than as an afterthought: proving while developing a program may result in more reliable programs, which in turn can be proved more easily, whereas proving a program that was not constructed with formal verification in mind can be quite difficult.

Any proof technique must begin with a formal specification of the program. No formal proof can be given if what we have to prove is not stated, or is stated informally and imprecisely. So we first have to state formally what the program is supposed to do. A program will usually not operate on an arbitrary set of input data and may produce valid results only for some range of inputs. Hence it is often not sufficient merely to state the goal of the program; we should also state the input conditions under which the program is to be invoked and for which it is expected to produce valid results. The assertion about the expected final state of a program is called the postcondition of that program, and the assertion about the input condition is called the precondition of the program. Often, determining the precondition for which the postcondition will be satisfied is the goal of the proof.

The proof itself is constructed as a sequence of assertions, each of which can be inferred from previously proved assertions and from the rules and axioms about the statements and operations in the program. For this we need a mathematical model of a program and of all the constructs in the programming language. Using Hoare's notation, the basic assertion about a program segment is of the form

P {S} Q

The interpretation is that if assertion P is true before executing S, then assertion Q will be true after executing S, provided the execution of S terminates. Assertion P is the precondition of the program and Q is the postcondition. These assertions are about the values taken by the program variables and the relationships among them. To prove a theorem of the form P {S} Q, we need rules and axioms about the programming language in which the program segment S is written. Here we consider a simple programming language that deals only with integers and has three types of statements: (1) assignment, (2) conditional statement, and (3) an iterative statement.

Axiom of Assignment: Assignments are central to procedural languages, and the axiom of assignment is central to the axiomatic approach. In fact, only the assignment statement has an independent axiom; for the remaining statements we have rules. Consider an assignment statement of the form

x := f

where x is an identifier and f is an expression in the programming language without any side effects. Any assertion that is true about x after the assignment must have been true of the expression f before the assignment. In other words, since after the assignment the variable x contains the value computed by the expression f, if a condition is true after the assignment is made, then the condition obtained by replacing x with f must be true before the assignment. This is the essence of the axiom of assignment, which is stated as

P[f/x] {x := f} P

Here P is the postcondition of the program segment containing only the assignment statement, and the precondition P[f/x] is the assertion obtained by substituting f for all occurrences of x in the assertion P.
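A small worked example may help; it is a standard application of the axiom rather than something taken from the program above. Suppose the desired postcondition after the assignment x := x + 1 is P: x > 5. Substituting the expression x + 1 for every occurrence of x in P gives the precondition x + 1 > 5, that is, x > 4. The axiom therefore yields the theorem

{x > 4} x := x + 1 {x > 5}

which reads: if x > 4 holds before the assignment, then x > 5 holds after it.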
Rule of Composition: Let us first consider the rule for sequential composition, where two statements S1 and S2 are executed in sequence. The rule of composition is

P {S1} Q,  Q {S2} R
-------------------
    P {S1; S2} R

The interpretation of this notation is that if what is stated in the numerator can be proved, then the denominator can be inferred. Using this rule, if we can prove P {S1} Q and Q {S2} R, then we can claim that if the precondition P holds before execution, the postcondition R will hold after execution of the program segment S1; S2. In other words, to prove P {S1; S2} R we have to find some Q and prove P {S1} Q and Q {S2} R.

Rule for the Alternate Statement: Let us now consider the rules for an if statement. There are two forms of if statement, one with an else clause and one without. The rule for the form without an else clause is

P & B {S} Q,  P & ~B => Q
-------------------------
    P {if B then S} Q

Rules of Consequence: To prove new theorems from those already proved using the axioms, we need rules of inference. The simplest inference rule is that if the execution of a program ensures that an assertion Q is true after execution, then it also ensures that every assertion logically implied by Q is true after execution.

Rule of Iteration: Finally, consider iteration. Loops are the trickiest construct when dealing with program proofs. We consider only the while loop, of the form

while B do S

In executing this loop, the condition B is checked first. If B is false, S is not executed and the loop terminates. If B is true, S is executed and B is tested again; this is repeated until B evaluates to false. We would like to be able to make an assertion that will be true when the loop terminates.

5.13 CODE REVIEWS AND WALKTHROUGHS

The review process was started with the purpose of detecting defects in code. Though design reviews substantially reduce defects in code, code reviews are still very useful and can considerably enhance reliability and reduce effort during testing.

Code reviews are designed to detect defects that originate during the coding process, although they can also detect defects in detailed design. It is unlikely, however, that code reviews will reveal errors in system design or requirements.

Code reviews are usually held after the code has been successfully compiled and other static analysis tools have been applied, but before any testing has been performed. Activities such as code reading, symbolic execution and static analysis should therefore be performed, and the defects found by these techniques corrected, before code reviews are held. The main motivation for this is to save the human time and effort that would otherwise be spent detecting errors that a compiler or a static analyzer can detect. The entry criterion for a code review is that the code compiles successfully and has been passed by the other static analysis tools. The documentation distributed includes the code to be reviewed and the design document. The review team for code reviews should include the programmer, the designer and the tester. As with any review, the process starts with preparation for the review and ends with a list of action items.

The aim of the review is to detect defects in the code. An obvious coding defect is that the code fails to implement the design, which can occur in several ways: the function implemented by a module may be different from the function defined in the design, or the interface of a module may not be the same as the interface specified in the design. In addition, the input-output format assumed by a module may be inconsistent with the format specified in the design. Other code defects can be divided into two broad categories:

Logic and control
Data operations and computations

In addition to defects, the review also addresses quality issues. The first is efficiency: a module may be implemented in an obviously inefficient manner and could be wasteful of memory or of computer time. The code could also violate local coding standards. Although non-adherence to coding standards cannot be classified as a defect, it is desirable to maintain the standards.

A sample checklist: the following are some of the items that can be included in a checklist for code reviews (a short illustrative code fragment follows the list).

1. Do data definitions exploit the typing capabilities of the language?
2. Are pointers set to NULL where needed?
3. Are important data tested for validity?
4. Are indexes properly initialized?
5. Are all branch conditions correct?
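A short, deliberately defective fragment of the kind such a checklist is meant to catch is sketched below. The code and the defects in it are invented purely for illustration and do not come from any real system.

// Deliberately defective fragment of the kind the checklist above targets:
// (a) a reference used without a validity test, (b) an index whose initial
// value is never validated before use, (c) a branch condition written the
// wrong way round.
public class ChecklistExample {
    static int lastPositive(int[] values) {
        int index = -1;                   // (b) sentinel never checked before use
        for (int i = 0; i < values.length; i++) {
            if (values[i] <= 0) {         // (c) should be values[i] > 0
                index = i;
            }
        }
        return values[index];             // fails when index is still -1
    }

    public static void main(String[] args) {
        int[] data = null;                // (a) data is never tested for validity
        System.out.println(lastPositive(data));
    }
}

A reviewer working through items 2 to 5 of the checklist would flag all three problems before the code ever reached testing.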

Q5.13 Questions

1. How is code verification carried out?
2. Explain code reading.
3. What is static analysis? Explain with an example.
4. What are the uses of static analysis?
5. What is the need for symbolic execution?
6. What are path conditions?
7. Explain loops and symbolic execution trees.
8. Explain in detail about proving correctness, with sufficient examples.
9. Write a note on code reviews and walkthroughs.

5.14 UNIT TESTING


Unit testing deals with testing a unit as a whole. It exercises the interaction of many functions but confines the test within one unit; the exact scope of a unit is left to interpretation. Supporting test code, sometimes called scaffolding, may be necessary to support an individual test. This type of testing is driven by the architecture and implementation teams, and the focus is also called black-box testing because only the details of the interface are visible to the test. Limits that are global to a unit are tested here.

In the construction industry, scaffolding is a temporary frame, easy to assemble and disassemble, placed around a building to facilitate its construction. The construction workers first build the scaffolding and then the building; later the scaffolding is removed, exposing the completed building. Similarly, in software testing, a particular test may need some supporting software. This software establishes an environment around the test, and only when this environment is established can the test be evaluated correctly. The scaffolding software may establish state and values for data structures as well as provide dummy external functions for the test. Different scaffolding software may be needed from one test to another. Scaffolding software is rarely considered part of the system.
Sometimes the scaffolding software becomes larger than the system software being tested. Usually the scaffolding software is not of the same quality as the system software and is frequently quite fragile; a small change in the test may lead to much larger changes in the scaffolding.

Internal and unit testing can be automated with the help of coverage tools. A coverage tool analyzes the source code and generates a test that will execute every alternative thread of execution; it is still up to the programmer to combine these tests into meaningful cases that validate the result of each thread of execution. Typically, the coverage tool is used in a slightly different way: first it augments the source by placing informational prints after each line of code, then the test suite is executed, generating an audit trail. The audit trail is analyzed to report the percentage of the total system code executed during the test suite. If the coverage is high and the untested source lines have low impact on the system's overall quality, then no additional tests are required.

A test is not a unit test if:

It talks to the database.
It communicates across the network.
It touches the file system.
It cannot run at the same time as any of your other unit tests.
You have to do special things to your environment (such as editing configuration files) to run it.

Tests that do these things are not bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that run fast whenever we make changes. Generally, unit tests are supposed to be small: they test a method or the interaction of a couple of methods. When you pull database, socket or file system access into your unit tests, the tests are not really about those methods any more; they are about the integration of your code with that other software. If you write code in a way that separates your logic from OS and vendor services, you not only get faster unit tests, you also get a binary chop that allows you to discover whether a problem is in your logic or in the things you are interfacing with.

If all the unit tests pass but the other tests (the ones not using mocks) do not, you are far closer to isolating the problem.

Goal of Unit Test

The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and show that the individual parts are correct and behave exactly as you expect. Each unit is tested separately before the units are integrated into modules to test the interfaces between modules. Unit testing has proven its value in that a large percentage of defects are identified during its use. A unit test provides a strict, written contract that the piece of code must satisfy and, as a result, affords several benefits.

Approach of Unit Test

The most common approach to unit testing requires drivers and stubs to be written. The driver simulates a calling unit and the stub simulates a called unit. The investment of developer time in this activity sometimes results in demoting unit testing to a lower priority, and that is almost always a mistake. Even though drivers and stubs cost time and money, unit testing provides some undeniable advantages: it allows the testing process to be automated, reduces the difficulty of discovering errors contained in more complex pieces of the application, and enhances test coverage because attention is given to each unit. For example, if you have two units and decide it would be more cost effective to glue them together and initially test them as an integrated unit, an error could occur in a variety of places:

Is the error due to a defect in unit 1?
Is the error due to a defect in unit 2?
Is the error due to defects in both units?
Is the error due to a defect in the interface between the units?
Is the error due to a defect in the test?

Finding the error (or errors) in the integrated module is much more complicated than first isolating the units, testing each, then integrating them and testing the whole. Drivers and stubs can be reused so the constant changes that occur during the development cycle can be retested frequently without writing large amounts of additional test code. In effect, this reduces the cost of writing the drivers and stubs on a per-use basis and the cost of retesting is better controlled.
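The sketch below shows what a driver and a stub might look like for a unit that depends on another, not-yet-available unit. All the names (TaxCalculator, RateProvider, FixedRateStub) and the 10% rate are invented for illustration; they are not taken from the text, and a real project would usually wrap the driver in a unit testing framework.

// Unit under test: computes tax using a collaborator that may not exist yet.
interface RateProvider {               // the "called unit"
    double rateFor(String region);
}

class TaxCalculator {                  // the unit being tested
    private final RateProvider rates;
    TaxCalculator(RateProvider rates) { this.rates = rates; }
    double taxOn(double amount, String region) {
        return amount * rates.rateFor(region);
    }
}

// Stub: stands in for the real RateProvider so the unit can be tested alone.
class FixedRateStub implements RateProvider {
    public double rateFor(String region) { return 0.10; }   // canned answer
}

// Driver: calls the unit with known inputs and checks the result.
public class TaxCalculatorDriver {
    public static void main(String[] args) {
        TaxCalculator calc = new TaxCalculator(new FixedRateStub());
        double tax = calc.taxOn(200.0, "anywhere");
        if (Math.abs(tax - 20.0) > 1e-9)
            throw new AssertionError("expected 20.0 but got " + tax);
        System.out.println("TaxCalculator unit test passed");
    }
}

Because the stub's behavior is under the tester's control, a failure here can only be a defect in TaxCalculator itself, which is exactly the isolation argument made above.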
Unit Testing in Programming

In computer programming, unit testing is a procedure used to validate that individual units of source code work properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function or procedure, while in object-oriented programming the smallest unit is a class, which may be a base/super class, an abstract class or a derived/child class. Units are distinguished from modules in that modules are typically made up of units. Ideally, each test case is independent of the others; mock objects and test harnesses can be used to assist in testing a module in isolation. Unit testing is typically done by the developers and not by end users.

Benefits

The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. A unit test provides a strict, written contract that the piece of code must satisfy and, as a result, affords several benefits.

Facilitates change: Unit testing allows the programmer to refactor code at a later date and make sure the module still works correctly (i.e., regression testing). The procedure is to write test cases for all functions and methods so that whenever a change causes a fault, it can be quickly identified and fixed. Readily available unit tests make it easy for the programmer to check whether a piece of code is still working properly. Good unit test design produces test cases that cover all paths through the unit, with attention paid to loop conditions. In continuous unit testing environments, through the practice of sustained maintenance, unit tests will continue to reflect the intended use of the executable and code accurately in the face of any change. Depending upon established development practices and unit test coverage, up-to-the-second accuracy can be maintained.

Simplifies integration: Unit testing helps to eliminate uncertainty in the units themselves and can be used in a bottom-up testing approach. By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier.

The need to perform manual integration testing is a heavily debated matter. While an elaborate hierarchy of unit tests may seem to have achieved integration testing, this gives a false sense of confidence, since integration testing evaluates many other objectives that can only be proven through the human factor. Some argue that, given a sufficient variety of test automation systems, integration testing by a human test group is unnecessary. Realistically, the actual need will ultimately depend upon the characteristics of the product being developed and its intended uses. Additionally, human or manual testing will greatly depend on the availability of resources in the organization.

Documentation: Unit testing provides a sort of living document. Clients and other developers looking to learn how to use the module can look at the unit tests to determine how to use the module to fit their needs and to gain a basic understanding of the API. Unit test cases embody characteristics that are critical to the success of the unit; these characteristics can indicate appropriate or inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A unit test case, in and of itself, documents these critical characteristics, although many software development environments do not rely solely upon code to document the product in development. Ordinary narrative documentation, on the other hand, is more susceptible to drifting from the implementation of the program and will thus become outdated (e.g., through design changes, feature creep, or relaxed practices in keeping documents up to date).

Separation of interface from implementation: Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example is classes that depend on a database: in order to test the class, the tester often writes code that interacts with the database. This is a mistake, because a unit test should never go outside its own class boundary. Instead, the software developer abstracts an interface around the database connection and then implements that interface with a mock object. By abstracting this necessary attachment from the code (temporarily reducing the net effective coupling), the independent unit can be tested more thoroughly than may previously have been achieved. This results in a higher-quality unit that is also more maintainable, and the benefits begin returning dividends to the programmer, creating a seemingly perpetual upward cycle in quality.
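As a sketch of the abstraction just described, the class under test below depends on a small repository interface rather than on a live database, and the test supplies a hand-rolled mock that returns canned data and records what was asked of it. All names (UserRepository, GreetingService, and so on) are assumptions invented for this illustration.

import java.util.*;

// The unit under test talks to an interface, not to a real database.
interface UserRepository {
    String findName(int userId);
}

class GreetingService {
    private final UserRepository repo;
    GreetingService(UserRepository repo) { this.repo = repo; }
    String greet(int userId) {
        String name = repo.findName(userId);
        return name == null ? "Hello, guest" : "Hello, " + name;
    }
}

// Hand-rolled mock: returns canned data and records the calls it receives.
class MockUserRepository implements UserRepository {
    final List<Integer> queriedIds = new ArrayList<>();
    public String findName(int userId) {
        queriedIds.add(userId);
        return userId == 7 ? "Ada" : null;
    }
}

public class GreetingServiceTest {
    public static void main(String[] args) {
        MockUserRepository mock = new MockUserRepository();
        GreetingService service = new GreetingService(mock);

        assertEquals("Hello, Ada", service.greet(7));     // known user
        assertEquals("Hello, guest", service.greet(99));  // unknown user, the null case
        assertEquals(2, mock.queriedIds.size());          // interaction was recorded

        System.out.println("GreetingService tests passed");
    }

    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + " but got " + actual);
    }
}

Because the database is hidden behind the interface, the test never leaves the class boundary, runs fast, and still checks both the normal and the "not found" behavior.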

Limitations of Unit Testing

Unit testing will not catch every error in the program. By definition, it tests only the functionality of the units themselves; therefore it will not catch integration errors, performance problems or other system-wide issues. In addition, it may not be easy to anticipate all the special cases of input that the unit under study may receive in reality. Unit testing is effective only if it is used in conjunction with other software testing activities. It is unrealistic to test all possible input combinations for any non-trivial piece of software, and like all forms of software testing, unit tests can only show the presence of errors; they cannot show the absence of errors.

To obtain the intended benefits from unit testing, a rigorous sense of discipline is needed throughout the software development process. It is essential to keep careful records, not only of the tests that have been performed, but also of all changes that have been made to the source code of this or any other unit in the software. Use of a version control system is essential: if a later version of the unit fails a particular test that it had previously passed, the version control software can provide the list of source code changes (if any) that have been applied to the unit since that time.

Applications: Extreme Programming

The cornerstone of Extreme Programming (XP) is the unit test. XP relies on an automated unit testing framework, which can be either third party (e.g., xUnit) or created within the development group. Extreme Programming uses the creation of unit tests for test-driven development. The developer writes a unit test that exposes either a software requirement or a defect. This test will fail, either because the requirement is not implemented yet or because it intentionally exposes a defect in the existing code. Then the developer writes the simplest code that makes this test, along with the other tests, pass. All classes in the system are unit tested, and developers release unit testing code to the code repository together with the code it tests. XP's thorough unit testing provides the benefits mentioned above, such as simpler and more confident code development and refactoring, simplified code integration, accurate documentation and more modular designs. These unit tests are also run constantly as a form of regression test.
Techniques

Unit testing is commonly automated but may still be performed manually; the IEEE does not favor one approach over the other. A manual approach to unit testing may employ a step-by-step instructional document. Nevertheless, the objective in unit testing is to isolate a unit and validate its correctness. Automation is efficient for achieving this and enables the many benefits listed in this section. Conversely, if not planned carefully, a careless manual unit test case may execute as an integration test case that involves many software components, and thus preclude the achievement of most if not all of the goals established for unit testing.

Under the automated approach, to fully realize the effect of isolation, the unit or code body subjected to the unit test is executed within a framework outside of its natural environment, that is, outside of the product or calling context for which it was originally created. Testing in an isolated manner has the benefit of revealing unnecessary dependencies between the code being tested and other units or data spaces in the product; these dependencies can then be eliminated. Using an automation framework, the developer codifies criteria into the test to verify the correctness of the unit. During execution of the test cases, the framework logs those that fail any criterion. Many frameworks will also automatically flag and report these failed test cases in a summary and, depending on the severity of a failure, may halt subsequent testing. As a consequence, unit testing is traditionally a motivator for programmers to create decoupled and cohesive code bodies; this practice promotes healthy habits in software development. Design patterns, unit testing and refactoring often work together so that the most suitable solution may emerge.

Unit Testing Frameworks

Unit testing frameworks, which help simplify the process of unit testing, have been developed for a wide variety of languages. It is generally possible to perform unit testing without the support of a specific framework by writing client code that exercises the units under test and uses assertion, exception or early-exit mechanisms to signal failure. This approach is valuable in that there is a negligible barrier to the adoption of unit testing. However, it is also limited in that many advanced features of a proper framework are missing or must be hand-coded. To address this issue, the D programming language offers direct support for unit testing.
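A minimal framework-less test of the kind just described might look like the sketch below. The class being exercised (a standard library deque used as a stack) and the check helper are chosen only for illustration; a real project would more likely use an xUnit-style framework, as mentioned later.

// Framework-less unit test: plain client code that exercises the unit and
// signals failure through a diagnostic and an early exit, as described above.
import java.util.ArrayDeque;
import java.util.Deque;

public class StackTest {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();

        check(stack.isEmpty(), "new stack should be empty");   // null/empty case first

        stack.push(1);
        stack.push(2);
        check(stack.peek() == 2, "peek should return the last pushed value");
        check(stack.pop() == 2 && stack.pop() == 1, "pop should return values in LIFO order");
        check(stack.isEmpty(), "stack should be empty after popping everything");

        System.out.println("All StackTest checks passed");
    }

    private static void check(boolean condition, String message) {
        if (!condition) {                    // early exit with a diagnostic on failure
            System.err.println("FAILED: " + message);
            System.exit(1);
        }
    }
}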

Charles' Six Rules of Unit Testing

1. Write the test first.
2. Never write a test that succeeds the first time.
3. Start with the null case, or something that doesn't work.
4. Don't be afraid of doing something trivial to make the test work.
5. Loose coupling and testability go hand in hand.
6. Use mock objects.

1. Write the test first

This is the Extreme Programming maxim, and my experience is that it works. First you write the test, and enough application code that the test will compile (but no more!). Then you run the test to prove that it fails (see point two, below). Then you write just enough code that the test succeeds (see point four, below). Then you write another test. The benefits of this approach come from the way it makes you approach the code you are writing: every bit of your code becomes goal-oriented. Why am I writing this line of code? I'm writing it so that this test runs. What do I have to do to make the test run? I have to write this line of code. You are always writing something that pushes your program towards being fully functional. In addition, writing the test first means that you have to decide how to make your code testable before you start coding it. Because you can't write anything before you have a test to cover it, you don't write any code that isn't testable.

2. Never write a test that succeeds the first time

After you have written your test, run it immediately. It should fail. The essence of science is falsifiability; writing a test that works the first time proves nothing. It is not the green bar of success that proves your test, it is the process of the red bar turning green. Whenever I write a test that runs correctly the first time, I am suspicious of it: no code works right the first time.

3. Start with the null case, or something that doesn't work

Where to start is often a stumbling point. When you are thinking of the first test to run on a method, pick something simple and trivial. Is there a circumstance in which the method should return null, or an empty collection, or an empty array? Test that case first.
Is your method looking up something in a database? Then test what happens if you look for something that isn't there.

5. Loose coupling and testability go hand in hand

When you are testing a method, you want the test to be testing only that method. You don't want things to build up, or you will be left with a maintenance nightmare. For example, if you have a database-backed application, you have a set of unit tests that make sure your database-access layer works. So you move up a layer and start testing the code that talks to the access layer. You want to be able to control what the database layer is producing, and you may want to simulate a database failure. So it is best to write your application as self-contained, loosely coupled components, and have your tests able to generate dummy components (see mock objects below) in order to test the way the components talk to each other. This also allows you to write one part of the application and test it thoroughly, even when the other parts that your component will depend on do not yet exist. Divide your application into components, represent each component to the rest of the application as an interface, and limit the extent of that interface as much as possible.

6. Use mock objects

A mock object is an object that pretends to be of a particular type but is really just a sink, recording the methods that have been called on it. It gives you more power when testing isolated components, because it gives you a clear view of what one component does to another when they interact.

5.14 TESTING METRICS


Metrics are a system of parameters, or ways of quantitative and periodic assessment, for a process that is to be measured, along with the procedures for carrying out the measurement and for interpreting the assessment in the light of previous or comparable assessments. Metrics are usually specialized by subject area, in which case they are valid only within a certain domain and cannot be directly benchmarked or interpreted outside it.

Q5.14 Questions

1. Explain unit testing. Also, state its importance.
2. When will you say that a test is not a unit test?
3. What are the approaches of unit testing?
4. What is the goal of unit testing?
5. What are the limitations of unit testing?
6. Explain unit testing in detail.
7. Explain the rules of unit testing in detail.

5.15 CODING METRICS


Metrics are among the most important responsibilities of the test team. They allow a deeper understanding of the performance and behavior of the application, and fine-tuning of the application can be guided only with metrics. In a typical QA process there are many metrics that provide information. The following can be regarded as the fundamental metrics:

1. Functional or Test Coverage Metrics
2. Software Release Metrics
3. Software Maturity Metrics
4. Reliability Metrics
   a. Mean Time To First Failure (MTTFF)
   b. Mean Time Between Failures (MTBF)
   c. Mean Time To Repair (MTTR)

Functional or Test Coverage Metric

This metric can be used to measure test coverage prior to software delivery. It provides a measure of the percentage of the software tested at any point during testing and is calculated as follows:

Function Test Coverage = FE / FT

where FE is the number of test requirements that are covered by test cases that were executed against the software, and FT is the total number of test requirements.

Software Release Metrics

The software is ready for release when:
1. It has been tested with a test suite that provides 100% functional coverage, 80% branch coverage, and 100% procedure coverage.
2. There are no severity level 1 or 2 defects.
3. The defect-finding rate is less than 40 new defects per 1000 hours of testing.
4. Stress testing, configuration testing, installation testing, naive-user testing, usability testing, and sanity testing have been completed.

Software Maturity Metric

The Software Maturity Index can be used to determine the readiness of a software system for release. This index is especially useful for assessing release readiness when changes, additions, or deletions are made to existing software systems, and it also provides a historical index of the impact of changes. It is calculated as follows:

SMI = (Mt - (Fa + Fc + Fd)) / Mt

where SMI is the Software Maturity Index value, Mt is the number of software functions/modules in the current release, Fc is the number of functions/modules that contain changes from the previous release, Fa is the number of functions/modules that are additions to the previous release, and Fd is the number of functions/modules that are deleted from the previous release. For example, if a release has 100 modules of which 10 are new, 5 are changed, and 3 were deleted, then SMI = (100 - 18)/100 = 0.82; the index approaches 1.0 as the product stabilizes.

Reliability Metrics

Reliability is calculated as follows:

Reliability = 1 - (Number of errors (actual or predicted) / Total number of lines of executable code)

This reliability value is calculated from the number of errors during a specified time interval. Three other metrics can be calculated during extended testing or after the system is in production:

1. MTTFF (Mean Time To First Failure): the number of time intervals the system is operable until its first failure (functional failure only).
2. MTBF (Mean Time Between Failures): the sum of the time intervals during which the system is operable, divided by the number of failures in that period.

3. MTTR (Mean Time To Repair): the sum of the time intervals required to repair the system, divided by the number of repairs during the time period.

In software development, a metric (noun) is the measurement of a particular characteristic of a program's performance or efficiency. Similarly, in network routing a metric is a measure used in calculating the next host to route a packet to. A metric is sometimes used directly and sometimes as an element in an algorithm; in programming, a benchmark includes metrics. Metric (adjective) pertains to anything based on the meter as a unit of spatial measurement.

The first step in deciding what metrics to use is to specify clearly what results we want to achieve and what behaviors we want to encourage. In the context of developer testing, the results and behaviors that most organizations should target are the following:

To start and grow a collection of self-sufficient and self-checking tests written by developers.
To have high-quality, thorough, and effective tests.
To increase the number of developers who are contributing actively and regularly to the collection of developer tests.

We now turn to the questions about metrics that come up most frequently:


What Makes a Good Metric?
Misusing Metrics
Putting It All Together
Refining Your Metrics

What Makes a Good Metric?

Any metric you choose should be simple, positive, controllable, and automatable. The following describes each of these characteristics in more detail.

Simple: Most software systems are quite complex, and the people who work on them are usually quite smart, so it seems both reasonable and workable to use complex metrics. But this is wrong! Although complex metrics may be more accurate than simple ones, and most developers will be able to understand them if they are willing to put the time into it, I have found that the popularity, effectiveness, and usefulness of most metrics (software or otherwise) is inversely proportional to their complexity.
I suggest that you start with the simplest metrics that will do the job and refine them over time if needed. The Dow Jones Industrial Average index is a good example of this effect. The DJIA is a very old metric and is necessarily simple because it was developed before computers could be used to calculate it, and there weren't as many public companies to track anyway. Today there are thousands more stocks that can be tracked, and the DJIA still takes into account only 30 blue-chip stocks; but because it is simple and it seems to track a portion of the stock market well enough, it is still the most widely reported, understood, and followed market index.

Positive: A metric is considered positive if the quantity it measures needs to go up. Code coverage is a positive metric because increases in code coverage are generally good; the number of test cases is a positive metric for the same reason. On the other hand, commonly used metrics based on bug counts (for example, number of bugs found, number of bugs outstanding, etc.) are considered negative metrics because those numbers need to be as low as possible. It is good to find bugs, since it means that the tests are working, but they are bugs nonetheless. You should file them, track them, and set goals to prevent and reduce them, but they are not a good basis for developer testing targets.

Controllable: You should tie the success of your developer testing program to metrics over which you have control. You can control the growth in code coverage and the number of test cases (that is, you can keep adding test code and test cases), but the number of bugs that will be found by the tests is much harder to control.

Automatable: If calculating a metric requires manual effort, it will quickly turn into a chore and it will not be tracked as frequently or as accurately as it should be. Make sure that whatever you decide to measure can be easily automated and will require little or no human effort to collect the data and calculate the result.

These criteria can be used to come up with an initial set of metrics for measuring the objectives we have listed. You can use the following list as is, or modify and extend it to match your specific needs and objectives.

Objective: To start and grow a collection of self-sufficient and self-checking tests written by developers. Two simple metrics to get started are:

Raw number of developer test programs.
Percentage of total classes covered by developer tests.

Both metrics are simple, positive, controllable, and easy to automate (although you'll need to use a code coverage tool for the second one; more about that later).

Objective: To have high-quality, thorough, and effective tests. If you implement and start measuring the metrics for the previous objective, you will soon have a growing set of developer tests. In my experience, however, the quality, thoroughness, and effectiveness of those tests can vary widely. Some of the tests will be well thought out and thorough, while others will be written quickly, without much thought, and will provide minimal coverage. The latter type of tests can give you a false sense of security, so you should augment the first two metrics with additional measurements that give some indication of test quality. As you might suspect, this is not an easy task; this is one of the objectives where you will have plenty of opportunity for adding and refining metrics as you progress. But you have to start somewhere, and as a first step I suggest focusing on test thoroughness, which can be measured with some objectivity using a code coverage tool. There are many code coverage metrics you can use, but for the sake of simplicity I suggest picking three or four of them and then, to simplify further, combining them into a single index. The specific metrics will vary depending on the programming language(s) used in your code; the following are suggestions for code written in Java.

Basic code coverage metrics for Java:

Method coverage
Outcome coverage
Statement coverage
Branch coverage


Method coverage tells you whether a method has been called at least once by the tests, but does not tell you how thoroughly it has been exercised. Outcome coverage is a seldom-used but very important test coverage metric. When a Java method is invoked it can either behave normally or throw one of several exceptions; to cover all possible behaviors of a method, a thorough test should trigger all possible outcomes or, at the very least, it should cause the method to execute normally at least once and to throw each declared exception at least once. Statement coverage tells you what percentage of the statements in the code has been exercised. Branch coverage augments statement coverage by keeping track of whether all the possible branches in the code have been executed.

Since we want to keep things as simple as possible, we combine these four metrics into a single index; let's call it the Test Coverage Index, or TCI for short. Invoking the principle of simplicity once more, we use the following relatively simple formula, in which each coverage metric is weighted equally:

TCI = (MC/TM + OC/TO + SC/TS + BC/TB) * 25

where MC = methods covered, OC = outcomes covered, SC = statements covered, BC = branches covered, TM = total methods, TO = total outcomes, TS = total statements, and TB = total branches.
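A direct transcription of this formula is sketched below. The raw counts are hard-coded placeholders standing in for the numbers that a coverage tool would actually report; the method and class names are invented for illustration.

// Test Coverage Index (TCI) as defined above:
// TCI = (MC/TM + OC/TO + SC/TS + BC/TB) * 25, rounded to the nearest integer.
public class TestCoverageIndex {

    static long tci(long mc, long tm, long oc, long to,
                    long sc, long ts, long bc, long tb) {
        double sumOfRatios = (double) mc / tm + (double) oc / to
                           + (double) sc / ts + (double) bc / tb;
        return Math.round(sumOfRatios * 25.0);
    }

    public static void main(String[] args) {
        // Placeholder counts standing in for what a coverage tool would report.
        long index = tci(40, 100, 10, 50, 600, 2000, 150, 800);
        System.out.println("TCI = " + index);   // prints a value between 0 and 100
    }
}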

Multiply the sum of the ratios (which will range between 0.0 and 4.0) by 25 in order to get a friendly, familiar, and intuitive TCI range of 0 to 100 (if you round it to the nearest integer, which I recommend). The TCI is a bit more involved than the previous metrics but it still meets our key criteria:

It's relatively simple to understand.
It's a positive metric: the higher the TCI, the better.
It's controllable: developers can control the growth in code and write tests to keep up with it.
It can be calculated automatically with the help of a good code coverage tool, something you should have on hand anyway.

Is the TCI perfect? No.

Is it good enough to get your developer testing program started and effective in helping you achieve your initial objectives? You bet.

Objective: To increase the number of developers who are contributing actively and regularly to the collection of developer tests. The terms "actively" and "regularly" are key components of this objective. Having each developer contribute a few tests at the beginning of a developer testing program is a great start, but it cannot end there. The ultimate objective is to make the body of tests match the body of code and to keep that up as the code base grows: when new code is checked in, it should be accompanied by a corresponding set of tests. Since we already have the TCI in our toolset, we can reuse it on a per-developer basis with the following metric:

Percentage of developers with a TCI > X for their classes.

Clearly, this metric only makes sense if there is a concept of class ownership, which I have observed to be the case in most development organizations. Typically, class ownership is extracted from your source control system (for example, the class owner is the last developer who modified the code, or the one who created it, or the one who has worked on it the most, whatever makes the most sense in your organization).

Misusing Metrics

Most metrics can easily be misused (either intentionally or unintentionally), both by managers and by developers. Managers might misuse metrics by setting unrealistic objectives, or by focusing on these metrics at the expense of other important deliverables (for example, meeting schedules or implementing new functionality). We will discuss the best way to use these metrics later, but for the time being we should remind ourselves that metrics are just tools that provide us with some data to help us make decisions. Since metrics cannot incorporate all the necessary knowledge and facts, they should not replace common sense and intuition in decision making.

Developers might misuse metrics by focusing too much on the numbers and too little on the intent behind the metric. To prevent unintentional misuse it is important to communicate to the team the details and, more importantly, the intent behind the metric.

Summary of Metrics

The following table summarizes the developer testing metrics we have come up with so far, pairing each result or behavior we want to achieve with the metrics that drive it:

Result or behavior: To start and grow a collection of self-sufficient and self-checking tests written by developers.
Metrics: Raw number of developer test programs; percentage of classes covered by developer tests.

Result or behavior: To have high-quality, thorough, and effective tests.
Metrics: Test Coverage Index (TCI), which summarizes method coverage, statement coverage, branch coverage, and outcome coverage.

Result or behavior: To increase the number of developers contributing to the developer testing effort.
Metrics: Percentage of developers with a TCI > X for their classes.

If you already have a code coverage tool, a code management system, and an in-house developer who's handy with a scripting language, you should be able to automate the collection and reporting of these metrics. Below is an example of a very basic developer testing dashboard you can use for reporting purposes. Note that this dashboard includes some metrics not directly related to developer testing (the total number of classes and the total number of developers) to add some perspective.

Developer Testing Dashboard

Metric                                                        Value
Total number of classes                                       1776
Total number of developers                                    12
Raw number of developer test programs                         312
Percentage of classes covered by developer tests              27%
Test Coverage Index (TCI)                                     16
Percentage of developers with a TCI > 10 for their classes    50%

This is a very simple dashboard to get you started, but if you get to this point you will have more information and insight about the breadth, depth, and adoption of your developer testing program than 99% of the software development organizations out there.

Refining Your Metrics

What we have covered here is just a start. As your developer testing program evolves you will probably want to add, improve, or replace some of these metrics with others that better fit your needs and your organization. The most important thing to remember when developing your own metrics is always to start with a clear description of the results or behaviors that you want to achieve, and then to determine how those results and behaviors can be objectively measured. The next critical step is to try to keep all your metrics simple, positive, controllable, and automatable. This might not be possible in all cases, but it is essential to understand that your chance of success with any metric is highly dependent on these four properties.

One possible measure of test effectiveness, for example, is the ability to catch bugs. You can get some idea of a test's ability to catch certain categories of bugs by using a technique called mutation testing. In mutation testing you introduce artificial defects into the code under test (for example, replacing a >= with a >) and then run the tests for that code to see whether the mutation results in an error. If the tests pass, they are not effective in catching that particular kind of error.

In-Process Metrics for Software Testing

In-process tracking and measurement play a critical role in software development, particularly in software testing. Although there are many discussions and publications on this subject and numerous proposed metrics, few in-process metrics are presented with sufficient industry implementation experience to demonstrate their usefulness.
This paper describes several in-process metrics whose usefulness has been proven through ample implementation experience at the IBM Rochester AS/400 software development laboratory. For each metric, we discuss its purpose, data, interpretation, and use, and present a graphic example with real-life data. We contend that most of these metrics, with appropriate tailoring as needed, are applicable to most software projects and should be an integral part of software testing.

Measurement plays a critical role in effective software development; it provides the scientific basis for software engineering to become a true engineering discipline. As the discipline has progressed toward maturity, the importance of measurement has been gaining acceptance and recognition. For example, in the highly regarded software development process assessment and improvement framework known as the Capability Maturity Model, developed by the Software Engineering Institute at Carnegie Mellon University, process measurement and analysis, and the use of quantitative methods for quality management, are the two key process activities at Level 4 maturity.

In applying measurement to software engineering, several types of metrics are available: for example, process and project metrics versus product metrics, or metrics pertaining to the final product versus metrics used during the development of the product. From the standpoint of project management in software development, it is the latter type that is the most useful: the in-process metrics. Effective use of good in-process metrics can significantly enhance the success of a project, i.e., on-time delivery with desirable quality. Although there are numerous discussions and publications in the software industry on measurements and metrics, few in-process metrics are described with sufficient industry implementation experience to demonstrate their usefulness. Here we describe several in-process metrics pertaining to the test phases of the software development cycle for release and quality management. These metrics have gone through ample implementation experience in the IBM Rochester AS/400 (Application System/400) software development laboratory over a number of years, and some of them are likely used in other IBM development organizations as well. For those readers who may not be familiar with the AS/400, it is a midmarket server for e-business; to help meet the demands of enterprise e-commerce applications, the AS/400 features native support for key Web-enabling technologies. The AS/400 system software includes microcode supporting the hardware, the Operating System/400 (OS/400), and many licensed program products supporting the latest technologies.

software includes microcode supporting the hardware, the Operating System/400 (OS/400), and many licensed program products supporting the latest technologies. The size of the AS/400 system software is currently about 45 million lines of code. For each new release, the development effort involves about two to three million lines of new and changed code. It should be noted that the objective of this paper is not to research and propose new software metrics, although not all of the metrics discussed may be familiar to everyone. Rather, its purpose is to discuss the usage of implementation-proven metrics and to address practical issues in the management of software testing. We confine our discussion to metrics that are relevant to software testing after the code is integrated into the system library. We do not include metrics pertaining to the front end of the development process, such as design review, code inspection, or code integration and driver builds. For each metric, we discuss its purpose, data, interpretation and use, and, where applicable, pros and cons. We also provide a graphic presentation where possible, based on real-life data. In a later section, we discuss in-process quality management vis-à-vis these metrics and a metrics framework that we call the effort/outcome paradigm.

Q 5.15 Questions
1. Explain coding metrics in detail.
2. Explain the following terms: a. MTTFF b. MTBF c. MTTR
3. What are the characteristics of a good coding metric?

5.16 INTEGRATION TESTING


Integration testing (sometimes called Integration and Testing, abbreviated I&T) is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing. Integration testing takes as its input modules that have been unit tested, groups them into larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system, ready for system testing.
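As a minimal sketch of this idea, the fragment below combines two hypothetical, already unit-tested functions and tests only the interface between them; the names and logic are illustrative, not taken from any particular system.

# Two units that have passed unit testing are combined into a small
# aggregate, and the interface between them is exercised.

def parse_order(text):
    # unit 1: parses "item:quantity" into a tuple
    item, qty = text.split(":")
    return item.strip(), int(qty)

def price_order(item, qty, price_table):
    # unit 2: computes a total price from a price table
    return price_table[item] * qty

def test_parse_and_price_integration():
    # The integration test feeds the output of unit 1 directly into unit 2,
    # checking that the interface (types and field order) matches.
    price_table = {"pen": 10}
    item, qty = parse_order("pen: 3")
    assert price_order(item, qty, price_table) == 30

test_parse_and_price_integration()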
Purpose of Integration Testing

The purpose of integration testing is to verify the functional, performance and reliability requirements placed on major design items. These design items, i.e. assemblages (or groups of units), are exercised through their interfaces using black box testing, with success and error cases being simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interfaces. Test cases are constructed to check that all components within assemblages interact correctly, for example across procedure calls or process activations, and this is done after testing individual modules, i.e. unit testing. The overall idea is a building-block approach, in which verified assemblages are added to a verified base which is then used to support the integration testing of further assemblages. The different types of integration testing are big bang, top-down, bottom-up, and backbone.

1. Big Bang Integration Testing: In this approach, all or most of the developed modules are coupled together to form a complete software system, or a major part of the system, which is then used for integration testing. The big bang method can save time in the integration testing process. However, if the test cases and their results are not recorded properly, the entire integration process will be more complicated and may prevent the testing team from achieving the goal of integration testing.

2. Bottom Up Integration Testing: In bottom up integration testing an individual module is first tested from a test harness. Once a set of individual modules have been tested, they are combined into a collection of modules, known as builds, which are then tested by a second test harness. This process can continue until the build consists of the entire application. All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of the lower level integrated modules, the next level of modules is formed and can be used for integration testing. This approach

is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage. Integration testing can proceed in a number of different ways, which can be broadly characterized as top down or bottom up.

3. Top Down Integration Testing: In top down integration testing the high level control routines are tested first, possibly with the middle level control structures present only as stubs. Subprogram stubs are incomplete subprograms which are only present to allow the higher level control routines to be tested. Thus a menu-driven program may have the major menu options initially present only as stubs, which merely announce that they have been successfully called, in order to allow the high level menu driver to be tested (a sketch of such stubs is given below). Top down testing can proceed in a depth-first or a breadth-first manner. For depth-first integration each module is tested in increasing detail, replacing more and more levels of detail with actual code rather than stubs. Alternatively, breadth-first integration would proceed by refining all the modules at the same level of control throughout the application. In practice a combination of the two techniques would be used. At the initial stages all the modules might be only partly functional, possibly being implemented only to deal with non-erroneous data. These would be tested in a breadth-first manner, but over a period of time each would be replaced with successive refinements closer to the full functionality. This allows depth-first testing of a module to be performed simultaneously with breadth-first testing of all the modules. In practice a combination of top-down and bottom-up testing would be used: in a large software project developed by a number of sub-teams, or a smaller project where different modules are built by individuals, the sub-teams or individuals would conduct bottom-up testing of the modules they were constructing before releasing them to an integration team, which would assemble them together for top-down testing.

Limitations of Integration Testing: Any conditions not stated in the specified integration tests, outside of the confirmation of the execution of design items, will generally not be tested. Integration tests cannot include system-wide (end-to-end) change testing.
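The stub idea from the top-down description above can be sketched as follows; the menu driver and the stubbed options are hypothetical, and in a real project the stubs would be replaced by the actual modules as they become available.

# Top-down integration with stubs: the high-level menu driver is real,
# the lower-level options are stubs that merely record that they were called.

calls = []

def report_stub():
    calls.append("report")      # stand-in for the real report module

def backup_stub():
    calls.append("backup")      # stand-in for the real backup module

def menu_driver(choice):
    # high-level control routine under test
    options = {"1": report_stub, "2": backup_stub}
    options[choice]()

def test_menu_routing():
    menu_driver("1")
    menu_driver("2")
    assert calls == ["report", "backup"]   # the driver routes to the right stubs

test_menu_routing()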
Integration Testing Strategies

One of the most significant aspects of a software development project is the integration strategy. Integration may be performed all at once, top-down, bottom-up, critical piece first, or by first integrating functional subsystems and then integrating the subsystems in separate phases using any of the basic strategies. In general, the larger the project, the more important the integration strategy. Very small systems are often assembled and tested in one phase. For most real systems, this is impractical for two major reasons. First, the system would fail in so many places at once that the debugging and retesting effort would be impractical. Second, satisfying any white box testing criterion would be very difficult, because of the vast amount of detail separating the input data from the individual code modules. In fact, most integration testing has traditionally been limited to black box techniques. Large systems may require many integration phases, beginning with assembling modules into low-level subsystems, then assembling subsystems into larger subsystems, and finally assembling the highest level subsystems into the complete system.

To be most effective, an integration testing technique should fit well with the overall integration strategy. In a multi-phase integration, testing at each phase helps detect errors early and keep the system under control. Performing only cursory testing at early integration phases and then applying a more rigorous criterion for the final stage is really just a variant of the high-risk big bang approach. However, performing rigorous testing of the entire software involved in each integration phase involves a lot of wasteful duplication of effort across phases. The key is to leverage the overall integration structure to allow rigorous testing at each phase while minimizing duplication of effort.

It is important to understand the relationship between module testing and integration testing. In one view, modules are rigorously tested in isolation using stubs and drivers before any integration is attempted. Then, integration testing concentrates entirely on module interactions, assuming that the details within each module are accurate. At the other extreme, module and integration testing can be combined, verifying the details of each module's implementation in an integration context. Many projects compromise, combining module testing with the lowest level of subsystem integration testing, and then performing pure integration testing at higher levels. Each of these views of integration testing may be appropriate for any given project, so an integration testing method should be flexible enough to accommodate them all. The rest of this section

describes the integration-level structured testing techniques, first for some special cases and then in full generality.

Combining module testing and integration testing

The simplest application of structured testing to integration is to combine module testing with integration testing so that a basis set of paths through each module is executed in an integration context. This means that the module-level structured testing techniques can be used without modification to measure the level of testing. However, this method is only suitable for a subset of integration strategies. The most obvious combined strategy is pure big bang integration, in which the entire system is assembled and tested in one step without even prior module testing. As discussed earlier, this strategy is not practical for most real systems. However, at least in theory, it makes efficient use of testing resources. First, there is no overhead associated with constructing stubs and drivers to perform module testing or partial integration. Second, no additional integration-specific tests are required beyond the module tests as determined by structured testing. Thus, despite its impracticality, this strategy clarifies the benefits of combining module testing with integration testing to the greatest feasible extent.

It is also possible to combine module and integration testing with the bottom-up integration strategy. In this strategy, using test drivers but not stubs, begin by performing module-level structured testing on the lowest-level modules using test drivers. Then, perform module-level structured testing in a similar fashion at each successive level of the design hierarchy, using test drivers for each new module being tested in integration with all lower-level modules. The figure illustrates the technique. First, the lowest-level modules B and C are tested with drivers. Next, the higher-level module A is tested with a driver in integration with modules B and C. Finally, integration could continue until the top-level module of the program is tested (with real input data) in integration with the entire program. As shown in the figure, the total number of tests required by this technique is the sum of the cyclomatic complexities of all modules being integrated. As expected, this is the same number of tests that would be required to perform structured testing on each module in isolation using stubs and drivers.
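The A, B, C scenario described above can be sketched in code as shown below; the module logic is invented purely for illustration, and the point is only the order in which drivers exercise the modules.

# Combining module testing with bottom-up integration.

def module_b(x):
    return x * 2

def module_c(x):
    return x + 1

def module_a(x):
    # higher-level module that calls B and C
    if x > 0:
        return module_b(x)
    return module_c(x)

# Step 1: drivers exercise the lowest-level modules in isolation.
def driver_for_b():
    assert module_b(3) == 6

def driver_for_c():
    assert module_c(3) == 4

# Step 2: a driver exercises A in integration with the already-tested B and C,
# covering both decision outcomes in A (one test per basis path).
def driver_for_a():
    assert module_a(2) == 4     # path through the call to B
    assert module_a(-1) == 0    # path through the call to C

driver_for_b()
driver_for_c()
driver_for_a()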

Figure 5.1: Combined module testing with bottom-up integration.

Generalization of module testing criteria

Module testing criteria can often be generalized in several possible ways to support integration testing. As discussed in the previous subsection, the most obvious generalization is to satisfy the module testing criterion in an integration context, in effect using the entire program as a test driver environment for each module. However, this trivial kind of generalization does not take advantage of the differences between module and integration testing. Applying it to each phase of a multi-phase integration strategy, for example, leads to an excessive amount of redundant testing. More useful generalizations adapt the module testing criterion to focus on interactions between modules rather than attempting to test all of the details of each module's implementation in an integration context. The statement coverage module testing criterion, in which each statement is required to be exercised during module testing, can be generalized to require each module call statement to be exercised during integration testing. Although the specifics of the generalization of structured testing are more detailed, the approach is the same. Since structured testing at the module level requires that all the decision logic in a module's control flow graph be tested independently, the appropriate generalization to the integration level requires that just the decision logic involved with calls to other modules be tested independently. The following subsections explore this approach in detail.
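A small, hypothetical sketch of this call-level generalization is given below: only the decision that selects which module to call needs to be exercised at integration time, because the internal paths of the called modules were already covered during module testing.

def log_event(msg):
    return "logged:" + msg

def notify_admin(msg):
    return "notified:" + msg

def handle_error(severity, msg):
    # Only the decision logic governing the calls matters at this level.
    if severity == "critical":
        return notify_admin(msg)
    return log_event(msg)

def integration_tests_for_handle_error():
    # Two tests exercise both call statements, regardless of how many
    # internal paths log_event and notify_admin may have.
    assert handle_error("critical", "disk full") == "notified:disk full"
    assert handle_error("minor", "retrying") == "logged:retrying"

integration_tests_for_handle_error()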

Incremental integration

Hierarchical system design limits each stage of development to a manageable effort, and it is important to limit the corresponding stages of testing as well. Hierarchical design is most effective when the coupling among sibling components decreases as the component size increases, which simplifies the derivation of data sets that test interactions among components. The remainder of this section extends the integration testing techniques of structured testing to handle the general case of incremental integration, including support for hierarchical design. The key principle is to test just the interaction among components at each integration stage, avoiding redundant testing of previously integrated sub-components.

As a simple example of the approach, recall the statement coverage module testing criterion and its integration-level variant discussed above, that all module call statements should be exercised during integration. Although this criterion is certainly not as rigorous as structured testing, its simplicity makes it easy to extend to support incremental integration. Although the generalization of structured testing is more detailed, the basic approach is the same. To extend statement coverage to support incremental integration, it is required that all module call statements from one component into a different component be exercised at each integration stage. To form a completely flexible statement testing criterion, it is required that each statement be executed during the first phase (which may be anything from single modules to the entire program), and that at each integration phase all call statements that cross the boundaries of previously integrated components are tested. Given hierarchical integration stages with good cohesive partitioning properties, this limits the testing effort to a small fraction of the effort needed to cover each statement of the system at each integration phase.

Structured testing can be extended to cover the fully general case of incremental integration in a similar manner. The key is to perform design reduction at each integration phase using just the module call nodes that cross component boundaries, yielding component-reduced graphs, and to exclude from consideration all modules that do not contain any cross-component calls. Integration tests are derived from the reduced graphs using the structured testing techniques described earlier. The complete testing method is to test a basis set of paths through each module at the first phase (which can be either single modules, subsystems, or the entire program, depending on the underlying integration strategy), and then test a basis set of paths through each component-reduced graph at each successive integration phase. As discussed above, the most rigorous
approach is to execute a complete basis set of component integration tests at each stage. However, for incremental integration, the integration complexity formula may not give the precise number of independent tests. The reason is that the modules with cross-component calls may not be connected in the design structure, so it is not necessarily the case that one path through each module is a result of exercising a path in its caller. Nevertheless, at most one additional test per module is required, so using the S1 formula still gives a reasonable approximation to the testing effort at each phase.

Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been tested are combined into a component and the interface between them is tested. A component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which are in turn aggregated into even larger parts of the program. The idea is to test combinations of pieces and eventually expand the process to test your modules with those of other groups. Eventually all the modules making up a process are tested together. Beyond that, if the program is composed of more than one process, they should be tested in pairs rather than all at once. Integration testing identifies problems that occur when units are combined. By using a test plan that requires you to test each unit and ensure the viability of each before combining units, you know that any errors discovered when combining units are likely related to the interface between units. This method reduces the number of possibilities to a far simpler level of analysis. You can do integration testing in a variety of ways, but the following are three common strategies:

The top-down approach to integration testing requires the highest-level modules to be tested and integrated first. This allows high-level logic and data flow to be tested early in the process, and it tends to minimize the need for drivers. However, the need for stubs complicates test management, and low-level utilities are tested relatively late in the development cycle. Another disadvantage of top-down integration testing is its poor support for early release of limited functionality.

The bottom-up approach requires the lowest-level units to be tested and integrated first. These units are frequently referred to as utility modules. By using this approach, utility modules are tested early in the development process and the need for stubs is minimized. The downside, however, is that the need for drivers complicates test management, and high-level logic and data flow are tested late. Like the top-down approach, the bottom-up approach also provides poor support for early release of limited functionality.

The third approach, sometimes referred to as the umbrella approach, requires testing along functional data and control-flow paths. First, the inputs for functions are integrated in the bottom-up pattern discussed above. The outputs for each function are then integrated in the top-down manner. The primary advantage of this approach is the degree of support for early release of limited functionality. It also helps minimize the need for stubs and drivers. The potential weaknesses of this approach are significant, however, in that it can be less systematic than the other two approaches, leading to the need for more regression testing.

5.17 TESTING FUNDAMENTALS


Testing types

There are several types of testing that should be done on a large software system. Each type of test has a specification that defines the correct behavior the test is examining, so that incorrect behavior (an observed failure) can be identified. The six types, and the origin of the specification involved in each, are discussed below.

1. Unit Testing
Type: White-box testing
Specification: Low-level design and/or code structure
Unit testing is the testing of individual hardware or software units or groups of related units.

2. Integration Testing
Type: Black- and white-box testing
Specification: Low- and high-level design
Integration testing is testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them.

3. Functional and System Testing
Type: Black-box testing
Specification: High-level design, requirements specification
Functional testing involves ensuring that the functionality specified in the requirements specification works. System testing involves putting the new program in many different environments to ensure that the program works in typical customer environments with various versions and types of operating systems and/or applications. Related forms of system-level testing include:
Stress testing: testing conducted to evaluate a system or component at or beyond the limits of its specification or requirements.
Performance testing: testing conducted to evaluate the compliance of a system or component with specified performance requirements.
Usability testing: testing conducted to evaluate the extent to which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component.

4. Acceptance Testing
Type: Black-box testing
Specification: Requirements specification
Acceptance testing is formal testing conducted to determine whether or not a system satisfies its acceptance criteria (the criteria the system must satisfy to be accepted by a customer) and to enable the customer to determine whether or not to accept the system.

5. Regression Testing
Type: Black- and white-box testing
Specification: Any changed documentation, high-level design
Regression testing is selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements.

6. Beta Testing
Type: Black-box testing
Specification: None
When an advanced partial or full version of a software package is available, the development organization can offer it free of cost to one or more (and sometimes thousands of) potential users, or beta testers.
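To make the first type concrete, the sketch below shows a unit test written with Python's unittest module for a hypothetical leap-year function; re-running the same suite unchanged after a later modification of the function is the simplest form of the regression testing described in type 5.

import unittest

def leap_year(year):
    # unit under test (hypothetical)
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearUnitTest(unittest.TestCase):
    def test_typical_years(self):
        self.assertTrue(leap_year(2024))
        self.assertFalse(leap_year(2023))

    def test_century_rules(self):
        self.assertFalse(leap_year(1900))
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()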

5.18 FUNCTIONAL VS. STRUCTURAL TESTING


Two types of testing can be taken into consideration:
1. Functional or Black Box Testing
2. Structural or White Box Testing

Black Box Testing or Functional Testing

Black box testing, also called functional testing and behavioral testing, focuses on determining whether or not a program does what it is supposed to do based on its functional requirements. Black box testing attempts to find errors in the external behavior of the code in the following categories: (1) incorrect or missing functionality, (2) interface errors, (3) errors in data structures used by interfaces, (4) behavior or performance errors, and (5) initialization and termination errors. Through this testing, we can determine whether the functions appear to work according to specifications. However, it is important to note that no amount of testing can unequivocally demonstrate the absence of errors and defects in the code.

It is best if the person who plans and executes black box tests is not the programmer of the code and does not know anything about the structure of the code. The programmers of the code are innately biased and are likely to test that the program does what they programmed it to do. What are needed are tests that make sure the program does what the customer wants it to do. As a result, most organizations have independent testing groups to perform black box testing. These testers are not the developers and are often referred to as third-party testers. Testers need only be able to understand and specify what the desired output should be for a given input into the program.

Functional testing covers how well the system executes the functions it is supposed to execute, including user commands, data manipulation, searches and business processes, user screens, and integrations. Functional testing covers the obvious surface type of functions, as well as the back-end operations (such as security and how upgrades affect the system).
Although functional testing is often done toward the end of the development cycle, it can, and should, say experts, be started much earlier. Individual components and processes can be tested early on, even before it is possible to do functional testing on the entire system.

Black box testing takes an external perspective of the test object to derive test cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid input and determines the correct output. There is no knowledge of the test object's internal structure. This method of test design is applicable to all levels of software testing: unit, integration, functional, system and acceptance. The higher the level, and hence the bigger and more complex the box, the more one is forced to use black box testing to simplify. While this method can uncover unimplemented parts of the specification, one cannot be sure that all existent paths are tested. Black-box test design treats the system as a black box, so it does not explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.

Black box testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the legal inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. Because of this, black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this kind of testing, test groups are often used; test groups are sometimes called professional idiots, people who are good at designing incorrect data. Also, due to the nature of black box testing, test planning can begin as soon as the specifications are written. The opposite of this would be glass box testing, where test data are derived from direct examination of the code to be tested. For glass box testing, the test cases cannot be determined until the code has actually been written. Both of these testing techniques have advantages and disadvantages, but when combined, they help to ensure thorough testing of the product.

Techniques of Black Box (Functional) Testing

Requirements - System performs as specified. E.g. prove system requirements.
Regression - Verifies that anything unchanged still performs correctly. E.g. unchanged system segments function.
Error Handling - Errors can be prevented or detected and then corrected. E.g. error introduced into the test.
Manual Support - The people-computer interaction works. E.g. manual procedures developed.
Inter Systems - Data is correctly passed from system to system. E.g. intersystem parameters changed.
Control - Controls reduce system risk to an acceptable level. E.g. file reconciliation procedures work.
Parallel - Old system and new system are run and the results compared to detect unplanned differences. E.g. old and new system can reconcile.

Advantages of Black Box Testing
1. More effective on larger units of code than glass box testing.
2. The tester needs no knowledge of the implementation, including specific programming languages.
3. The tester and the programmer are independent of each other.
4. Tests are done from a user's point of view.
5. It helps to expose any ambiguities or inconsistencies in the specifications.
6. Test cases can be designed as soon as the specifications are complete.

Disadvantages of Black Box Testing
1. Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever.
2. Without clear and concise specifications, test cases are hard to design.
3. There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
4. May leave many program paths untested.
5. Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone).
6. Most testing-related research has been directed toward glass box testing.

Test design techniques

Typical black box test design techniques include:
1. Equivalence partitioning
2. Boundary value analysis
3. Decision table testing
4. Pairwise testing
5. State transition tables
6. Use case testing
7. Cross-functional testing
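The first two techniques in the list can be illustrated with a small sketch; the eligibility rule below is hypothetical and stands in for whatever the specification actually says.

# Specification assumed for illustration: "accept ages 18 to 60 inclusive".

def is_eligible(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative test per partition
# (below the range, inside the range, above the range).
assert is_eligible(10) is False    # invalid partition: too young
assert is_eligible(35) is True     # valid partition
assert is_eligible(70) is False    # invalid partition: too old

# Boundary value analysis: values at and just beyond each boundary,
# where off-by-one defects (such as using < instead of <=) tend to hide.
for age, expected in [(17, False), (18, True), (60, True), (61, False)]:
    assert is_eligible(age) is expected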

User input validation

User input must be validated to conform to expected values. For example, if the software program is requesting input on the price of an item, and is expecting a value such as 3.99, the software must check to make sure all invalid cases are handled. A user could enter the price as -1 and achieve results contrary to the design of the program. Other examples of entries that could be entered and cause a failure in the software include: 1.20.35, Abc, 0.000001, and 999999999. These are possible test scenarios that should be entered for each point of user input. Other domains, such as text input, need to restrict the length of the characters that can be entered. If a program allocates 30 characters of memory space for a name, and the user enters 50 characters, a buffer overflow condition can occur. Typically, when invalid user input occurs, the program will either correct it automatically or display a message to the user that the input needs to be corrected before proceeding.

Hardware

Functional testing of devices like power supplies, amplifiers, and many other simple-function electrical devices is common in the electronics industry. Automated functional testing of specified characteristics is used for production testing, and as part of design validation.
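A sketch of such validation is given below. The validator, its business rules and the length limit are assumptions made for illustration, but the invalid test values are the ones listed in the paragraph above.

def validate_price(text):
    try:
        value = float(text)
    except ValueError:
        return False                       # rejects "Abc" and "1.20.35"
    # assumed rules: positive, at most two decimal places, below an upper limit
    return 0 < value < 1000000 and round(value, 2) == value

def validate_name(text, max_len=30):
    # restrict length to the space allocated for the field
    return 0 < len(text) <= max_len

# Test cases drawn from the invalid entries listed above
for bad in ["-1", "Abc", "1.20.35", "0.000001", "999999999"]:
    assert validate_price(bad) is False
assert validate_price("3.99") is True
assert validate_name("A" * 50) is False    # 50 characters into a 30-character field
assert validate_name("Alice") is True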
Functional testing ensures that the requirements are properly satisfied by the application system. The functions are those tasks that the system is designed to accomplish. Structural testing ensures sufficient testing of the implementation of a function.

White-box test design allows one to peek inside the box, and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box. While black-box and white-box are terms that are still in popular use, many people prefer the terms behavioral and structural. Behavioral test design is slightly different from black-box test design because the use of internal knowledge is not strictly forbidden, but it is still discouraged. In practice, it has not proven useful to use a single test design method. One has to use a mixture of different methods so that they are not hindered by the limitations of a particular one. Some call this gray-box or translucent-box test design, but others wish we would stop talking about boxes altogether. It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they are implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design method. Unit testing is usually associated with structural test design, but this is because testers usually do not have well-defined requirements at the unit level to validate.

White Box Testing or Structural Testing

White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. In order to fully test a software product, both black and white box testing are required. White box testing is much more expensive than black box testing. It requires the source code to be produced before the tests can be planned, and it is much more laborious in the determination of suitable input data and in determining whether the software is or is not correct. The advice given is to start test planning with a black box test approach as soon as the specification is available. White box planning should commence as soon as all black box tests have been successfully passed, with the production of
flow graphs and the determination of paths. The paths should then be checked against the black box test plan and any additional required test runs determined and applied. The consequences of test failure at this stage may be very expensive. A failure of a white box test may result in a change which requires all black box testing to be repeated and the re-determination of the white box paths. The cheaper option is to regard the process of testing as one of quality assurance rather than quality control. The intention is that sufficient quality will be put into all previous design and production stages so that testing can be expected to confirm that there are very few faults present (quality assurance), rather than testing being relied upon to discover any faults in the software (quality control). A combination of black box and white box test considerations is still not a completely adequate test rationale.

The Advantages of White Box Testing:
1. Testing is thorough: test cases can be derived systematically from the control flow of the code, so every path and decision can be exercised.
2. It reveals faults of commission, including hidden errors, unreachable statements and redundant code, that black box testing cannot see.
3. Knowledge of the implementation helps the tester choose the input data most likely to expose defects.
4. It supports objective coverage measures (statement, branch and path coverage) for judging how much testing has been done.

The Disadvantages of White Box Testing:
1. The tester must have knowledge of the implementation and the programming language, and tests are often designed by the developers themselves, which can introduce bias.
2. It is expensive: tests cannot be planned until the source code exists, and determining suitable input data and paths is laborious.
3. It cannot detect faults of omission, that is, parts of the specification that were never implemented.
4. Testing every possible path is unrealistic because it would take an inordinate amount of time; therefore, many program paths will still go untested.

Techniques of White Box (Structural) Testing

Stress - Determine system performance with expected volumes. E.g. sufficient disk space allocated.
Execution - System achieves the desired level of proficiency. E.g. transaction turnaround time adequate.
Recovery - System can be returned to an operational status after a failure. E.g. evaluate adequacy of backup data.
Operations - System can be executed in a normal operational status. E.g. determine that systems can run using documentation.

Compliance - System is developed in accordance with standards and procedures. E.g. standards followed.
Security - System is protected in accordance with its importance to the organization. E.g. access denied.

Q5.18 Questions
1. Explain integration testing in detail.
2. Explain the types of testing in detail.
3. Write a note on the testing strategies.
4. Bring out the differences between functional and structural testing.
5. Explain black box testing in detail.
6. Explain white box testing in detail.

5.19 SOFTWARE RELIABILITY ESTIMATION - BASIC CONCEPTS AND DEFINITIONS


Software Reliability is the probability of failure-free software operation for a specified period of time in a specified environment. Software Reliability is also an important factor affecting system reliability. It differs from hardware reliability in that it reflects the design perfection, rather than manufacturing perfection. The high complexity
of software is the major contributing factor to Software Reliability problems. Software Reliability is not a function of time, although researchers have come up with models relating the two. The modeling techniques for Software Reliability are maturing, but before using a technique we must carefully select the appropriate model that best suits our case. Measurement in software is still in its infancy. No good quantitative methods have been developed to represent Software Reliability without excessive limitations. Various approaches can be used to improve the reliability of software; however, it is hard to balance development time and budget against software reliability.

According to ANSI, Software Reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment. Although Software Reliability is defined as a probabilistic function and comes with the notion of time, we must note that, unlike traditional Hardware Reliability, Software Reliability is not a direct function of time. Electronic and mechanical parts may become old and wear out with time and usage, but software will not rust or wear out during its life cycle. Software will not change over time unless intentionally changed or upgraded. Software Reliability is an important attribute of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. Software Reliability is hard to achieve because the complexity of software tends to be high. While any system with a high degree of complexity, including software, will find it hard to reach a certain level of reliability, system developers tend to push complexity into the software layer, with the rapid growth of system size and the ease of doing so by upgrading the software. For example, large next-generation aircraft will have over one million source lines of software on-board; next-generation air traffic control systems will contain between one and two million lines; the upcoming international Space Station will have over two million lines on-board and over ten million lines of ground support software; several major life-critical defense systems will have over five million source lines of software. [Rook90] While the complexity of software is inversely related to software reliability, it is directly related to other important factors in software quality, especially functionality, capability, etc. Emphasizing these features will tend to add more complexity to software.

Software failure mechanisms

Software failures may be due to errors, ambiguities, oversights or misinterpretation of the specification that the software is supposed to satisfy, carelessness or incompetence in writing code, inadequate testing, incorrect or unexpected usage of

the software, or other unforeseen problems. While it is tempting to draw an analogy between Software Reliability and Hardware Reliability, software and hardware have basic differences that make their failure mechanisms different. Hardware faults are mostly physical faults, while software faults are design faults, which are harder to visualize, classify, detect, and correct. Design faults are closely related to fuzzy human factors and to the design process, of which we do not have a solid understanding. In hardware, design faults may also exist, but physical faults usually dominate. In software, we can hardly find a strict counterpart to the hardware manufacturing process, unless the simple action of uploading software modules into place counts. Therefore, the quality of software will not change once it is uploaded into storage and starts running. Trying to achieve higher reliability by simply duplicating the same software modules will not work, because design faults cannot be masked off by voting. A partial list of the distinct characteristics of software compared to hardware is given below:

Failure cause: Software defects are mainly design defects.
Wear-out: Software does not have an energy-related wear-out phase. Errors can occur without warning.
Repairable system concept: Periodic restarts can help fix software problems.
Time dependency and life cycle: Software reliability is not a function of operational time.
Environmental factors: Do not affect software reliability, except that they might affect program inputs.
Reliability prediction: Software reliability cannot be predicted from any physical basis, since it depends completely on human factors in design.
Redundancy: Cannot improve software reliability if identical software components are used.
Interfaces: Software interfaces are purely conceptual other than visual.
Failure rate motivators: Usually not predictable from analyses of separate statements.

Built with standard components: Well-understood and extensively tested standard parts help improve maintainability and reliability. But in the software industry, we have not observed this trend. Code reuse has been around for some time, but only to a very limited extent. Strictly speaking, there are no standard parts for software, except some standardized logic structures.

The software test data has been analyzed to show that software reliability can be estimated even though the initial test planning did not follow accepted Software Reliability guidelines. The software testing is from a console-based system where the sequence of execution paths closely resembles testing to the operational profile. The actual deviation from the operational profile is unknown, but the deviation has been assumed to result in errors.

Notations

The following notations are used:
λI : instantaneous failure rate, or error rate
λc : cumulative failure rate after some number of faults, j, are detected
j : the number of faults removed by time T
T : test time during which j faults occur
φ : constant of proportionality between λ and j
Pr : probability
c : number of paths affected by a fault
M : total number of paths

Digression on Software vs. Hardware Reliability

Hardware reliability engineering started in the 1940s when it was observed that electronic equipment that passed qualification tests and quality inspections often did not last long in service, i.e., had a low MTBF. For electronic reliability, the measure of complexity is the number and type of electrical components and the stresses imposed on them, and these relate to the failure rate, a value which may be measured by regression analysis. Some approaches to electronic reliability assume that all failures involve wear-out mechanisms in the components, related to the fatigue life of their materials under the imposed mechanical stresses. For software, the measure of complexity is related primarily to LOC (lines of code), ELOC (executable lines of code) or SLOC (source lines of code). Structural complexity, related to the data structures used (if statements, records, etc.), is a better measure; however, most metrics have been tabulated in terms of SLOC. Hardware reliability requirements provided an impetus to provide for safety margins in the mechanical stresses, reduced variability in gain tolerances, input impedance,
breakdown voltage, etc. Reliability engineering brought on a proliferation of design guidelines, statistical tests, etc., to address the problems of hardware complexity. Complexity does not mean as much in software because software does not wear out or fatigue, and time is not the best measure of its reliability: software does not really have x failures per million [processor] operating hours, it has x failures per million unique executions. Unique, because once a process has been successfully executed, it is not going to fail in the future. But executions are hard to keep track of, so test time is the usual metric for monitoring failure rate. Moreover, wall-clock time, not processor time, is usually the best that is available. For the testing that produced the data for this project, the number of eight-hour work shifts per day was all that was known, so that became the time basis for calculating failure rate. The assumption was made that, on average, the number of executions per work shift stayed the same throughout the test period. Thus, the metric used for failure rate was failures (detected faults) per eight-hour work shift.
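To make the metric concrete with invented numbers: if 48 faults were detected over 120 eight-hour shifts of system testing, the observed failure rate for that period is 48/120 = 0.4 failures per shift; if a later period shows only 6 faults in 60 shifts, the rate has fallen to 0.1 failures per shift, which, under the constant executions-per-shift assumption stated above, indicates improving reliability.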

5.20 SOFTWARE RELIABILITY ESTIMATION


Reliability of software used in telecommunications networks is a crucial determinant of network performance. Software reliability (SR) estimation is an important element of a network product's reliability management. In particular, SR estimation can guide the product's system testing process and reliability decisions. SR estimation is performed using an appropriate SR estimation model. However, the art of SR estimation is still evolving. There are many available SR estimation models to select from, with different models being appropriate for different applications. Although there is no ultimate and universal SR model on the horizon (and there may not be one in the foreseeable future), methods have been developed in recent years for selecting a trustworthy SR model for each application. We have been analyzing and adapting these methods for applicability to network software. Our results indicate that there already exist methods for SR model selection which are practical to use for telecommunications software. If utilized, these methods can promote significant improvements in SR management. This paper presents our results to date. Software is a crucial element of present-day telecommunications network systems. Many network functions, which decades ago were performed by hardware, are nowadays performed by software. For example, a present-day digital switching system is just a specialized large computer. Software also forms an important element of private branch exchanges (PBXs) and of operations systems. Moreover, software
has recently begun to penetrate the transport part of telecommunications networks. A software failure in these network systems can result in loss or degradation of service to customers and financial loss to the telecommunications companies. Because of network software's crucial role, high software reliability is of great importance to the telephone companies. Suppliers of network software components are concerned with software reliability (SR) issues as well, and have been setting up SR engineering management programs to assure the reliability needs and requirements of the telephone companies.

SR estimation (prediction) is an important element of a sound SR engineering program. It should be used to guide system testing and reliability decisions for the software products. Calculated during the early part of a product's system testing, SR estimation can be used to indicate how long testing should continue in order to reach the product's SR objective. It can also indicate whether or not the SR objective has been reached, how much additional testing should be performed (if the SR objective has not been reached), and what product reliability can be expected in the customer's operational environment after the product's release. Fig. 1 illustrates these uses of SR estimation schematically.

SR estimation frequently presents difficulties because many SR estimation models are available for performing the required SR calculations. Not all of these models are appropriate for each application, however. A particular model may provide accurate SR estimates for one application, but will provide inaccurate estimates for a different application. These difficulties have been alleviated in the last several years. New and powerful statistical methods have been introduced to facilitate the process of selecting the most accurate (and trustworthy) SR estimation models for each application. Software tools have been introduced to automate the required calculations. Additional software tools can be expected to appear in the near future. We have been investigating the applicability of the available state-of-the-art SR estimation methods to telecommunications software. This paper reports the results of our experience with, and adaptation of, some of these SR estimation methods to data communications networks.

Effects of Software Structure and Test Methodology

Tractenberg simulated software with errors spaced throughout the code in six different patterns and tested this simulation in four different ways. He defined the following fundamental terms: an error site is a mistake made in the requirements, design or coding of software which, if executed, results in undesirable processing;

error rate is the rate of detecting new error sites during system testing or operation; and uniform testing is a condition wherein, during equal periods of testing, every instruction in a software system is tested a constant amount and has the same probability of being tested.

The results of his simulation testing showed that the error rates were linearly proportional to the number of remaining error sites when all error sites have an equal detection probability. The result would be a plot of failure rate that decreases linearly as errors are corrected. An example of nonlinear testing examined by Tractenberg was the common practice of function testing, wherein each function is exhaustively tested, one at a time. Another non-linear method he examined was testing to the operational profile, or biased testing. The resulting plot for function testing is an error rate that is flat over time (or executions). With regard to the use of Musa's linear model where the testing was to the operational profile, Tractenberg stated: As for the applicability of the linear model to operational environments, these simulation results indicate that the model can be used (linear correlation coefficient > 0.90) where the least used functions in a system are run at least 25% as often as the most used functions.

Effects of Biased Testing and Fault Density

Downs also investigated the issues associated with random vs. biased testing. He stated that an ideal approach to structuring tests for software reliability would take the following into consideration: the execution of software takes the form of execution of a sequence of paths; c, the actual number of paths affected by an arbitrary fault, is unknown and can be treated as a random variable; and not all paths are equally likely to be executed in a randomly selected execution [operational] profile. In the operational phase of many large software systems, some sections of code are executed much more frequently than others. In addition, faults located in heavily used sections of code are much more likely to be detected early.

These two facts indicate that the step changes in the failure rate function should be large in the initial stages of testing and gradually decrease as testing proceeds. If a plot of failure rate versus the number of faults is made, a convex curve should be obtained. This paper is concerned with the problem of estimating the reliability of software in a structured test environment, and then predicting what it would be in an actual use environment. Because of this, the testing must resemble the way the software will finally be used, and the predictions must include the effects of all corrective actions implemented, not just be a cumulative measure of test results.

A pure approach to a structured test environment for uniform testing would assume the following: each time a path is selected for testing, all paths are equally likely to be selected; and the actual number of paths affected by an arbitrary fault is a constant.

Such uniform testing would most quickly reduce the number of errors in the software, but it would not be efficient in reducing the operational failure rate. Our testing, however, will be biased, not uniform. Test data will be collected at the system level, since lower level tests, although important in debugging software, do not indicate all the interaction problems which become evident at the system level. The software would be exercised by a sequence of unique test vectors, the results measured, and estimates of MTTF (Mean Time To Failure) made from the data. The number of failures per execution (failure rate) is plotted on the x-axis and the total number of faults on the y-axis. Initially, the plot may indicate an increasing failure rate versus faults, but eventually the failure rate should follow a straight, steadily decreasing line that points to the estimate of the total number of faults, N, on the y-axis. The execution profile defines the probabilities with which individual paths are selected for execution. This study assumes the following: the input data supplied to the software system is governed by an execution profile which remains invariant in the intervals between fault removals; and the number of paths affected by each fault is a constant. Downs showed that the error introduced by the second approximation is insignificant in most real software systems.
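A small numerical sketch of that plot is given below, using invented data and an ordinary least-squares fit from numpy; extrapolating the fitted line to a failure rate of zero gives the estimate of the total number of faults, N, on the y-axis.

import numpy as np

failure_rate = np.array([2.0, 1.6, 1.3, 0.9, 0.6])    # failures per shift, per interval
cum_faults = np.array([10, 18, 24, 32, 38])            # faults removed so far

slope, intercept = np.polyfit(failure_rate, cum_faults, 1)
N_estimate = intercept              # y value where the failure rate reaches zero
print("Estimated total faults N:", round(N_estimate))
print("Estimated faults remaining:", round(N_estimate - cum_faults[-1]))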
Downs derived the following lemma: if the execution profile of a software system is invariant in the intervals between fault removals, then the software failure rate in any such interval is given by the following formula:

λ = −r log [Pr {a path selected for execution is fault free}]   (1)

where r is the number of paths executed per unit time.

Removal of faults from the software affects the distribution of the number of faults in a path. The manner in which this distribution is affected depends upon the way in which faults are removed from paths containing more than one fault. If, for instance, it is assumed that execution of a path containing more than one fault leads to the detection and removal of only one fault, then the distribution of the number of faults in a path will cease to be binomial after the removal of the first fault. This is because, under such an assumption, and considering that all paths are equally likely to be selected, faults occurring in paths containing more than one fault are less likely to be eliminated than those occurring in paths containing only one fault. If, on the other hand, it is assumed that execution of a path containing more than one fault leads to the detection and removal of all faults in that path, then all faults have an equal likelihood of removal and the distribution of the number of faults in a path will remain binomial. Fortunately, for software systems which are large enough for models of the type Downs developed, the discussion in the above paragraph has little relevance. This follows from the fact that, in large software systems, the number of logic paths, M, is an extremely large number.

Q5.20 Questions
1. What is software reliability?
2. List out the distinct characteristics of software compared to hardware.
3. Write in detail on software reliability estimation.

REFERENCES
1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, McGraw Hill International, 6th edition, 2005.
2. http://www.onestoptesting.com/
3. http://www.sqa.net/
4. http://www.softwareqatest.com/
