
SCS 421 SOFTWARE DEVELOPMENT II (NEW)

COURSE DESCRIPTION
Using application programmer interfaces (APIs): API programming; class browsers and related tools; programming by example; debugging in the API environment; component-based computing. Human-centered software evaluation: setting goals for evaluation; evaluation strategies. Software development: approaches, characteristics, and overview of process; prototyping techniques and tools. Software development techniques: object-oriented analysis and design; component-level design; software requirements and specifications; prototyping; characteristics of maintainable software; software reuse.

Application programming interface

An application programming interface (API) is a particular set of rules ("code") and specifications that software programs can follow to communicate with each other. It serves as an interface between different software programs and facilitates their interaction, similar to the way the user interface facilitates interaction between humans and computers. An API can be created for applications, libraries, operating systems, etc., as a way of defining their "vocabularies" and resource request conventions (e.g. function-calling conventions). It may include specifications for routines, data structures, object classes, and protocols used to communicate between the consumer program and the implementer program of the API.

Concept
An API can be:
General: the full set of an API that is bundled in the libraries of a programming language, e.g. the Standard Template Library in C++ or the Java API.
Specific: meant to address a specific problem, e.g. the Google Maps API or the Java API for XML Web Services.
Language-dependent: available only through the syntax and elements of a particular language, which makes the API more convenient to use.
Language-independent: written so that it can be called from several programming languages. This is a desirable feature for a service-oriented API that is not bound to a specific process or system and may be provided as remote procedure calls or web services.
For example, a website that allows users to review local restaurants is able to layer its reviews over maps taken from Google Maps, because Google Maps has an API that facilitates this functionality. The Google Maps API controls what information a third-party site can use and how it can use it. The term API may be used to refer to a complete interface, a single function, or even a set of APIs provided by an organization; the scope of meaning is usually determined by the context of usage.
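A minimal sketch (in Java) of the consumer/implementer relationship described above. The names GreetingService, EnglishGreetingService and Consumer are hypothetical, invented purely for illustration, and are not part of any real library:

// The API: the contract that the implementer publishes and the consumer codes against.
interface GreetingService {
    String greet(String name);   // routine signature only: parameter and return types
}

// The implementer program supplies the behavior behind the interface.
class EnglishGreetingService implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// The consumer program depends only on the API, not on the implementing class.
public class Consumer {
    public static void main(String[] args) {
        GreetingService api = new EnglishGreetingService();
        System.out.println(api.greet("world"));
    }
}

The consumer could be recompiled against a different implementation of GreetingService without changing its own code, which is exactly the decoupling an API is meant to provide.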

Advanced explanation
An API may describe the ways in which a particular task is performed. In procedural languages like C, the action is usually mediated by a function call, so the API usually includes a description of all the functions/routines it provides. For instance, the math.h include file for the C language contains the function prototypes of the mathematical functions available in the C library for mathematical processing (usually called libm). This file describes how to use the functions included in the library: each function prototype is a signature that describes the number and types of the parameters to be passed to the function and the type of the return value. The behavior of the functions is usually described in more detail in a human-readable format, in printed books or in electronic formats like the man pages; e.g. on UNIX systems the command man 3 sqrt will present the signature of the function sqrt in the form:

SYNOPSIS
    #include <math.h>
    double sqrt(double X);
    float sqrtf(float X);
DESCRIPTION
    sqrt computes the positive square root of the argument. ...
RETURNS
    On success, the square root is returned. If X is real and positive...

That means that the function returns the square root of a positive floating point number (single or double precision) as another floating point number. Hence the API in this case can be interpreted as the collection of the include files used by the C language and their human-readable description provided by the man pages.

API in modern languages
Most modern programming languages provide the documentation associated with an API in some digital format that makes it easy to consult on a computer. E.g. Perl comes with the tool perldoc:

$ perldoc -f sqrt
    sqrt EXPR
    sqrt    # Return the square root of EXPR. If EXPR is omitted, returns
            # square root of $_. Only works on non-negative operands, unless
            # you've loaded the standard Math::Complex module.

Python comes with the tool pydoc:

$ pydoc math.sqrt
    Help on built-in function sqrt in math:

    math.sqrt = sqrt(...)
        sqrt(x)
        Return the square root of x.

Ruby comes with the tool ri:

$ ri Math::sqrt
    Math::sqrt
    Math.sqrt(numeric) => float
    Returns the non-negative square root of _numeric_.

Java comes with its documentation organized in HTML pages (JavaDoc format), while Microsoft distributes the API documentation for its languages (Visual C++, C#, Visual Basic, F#, etc.) embedded in Visual Studio's help system.

API in object-oriented languages
In object-oriented languages, an API usually includes a description of a set of class definitions, with a set of behaviors associated with those classes. A behavior is the set of rules for how an object, derived from that class, will act in a given circumstance. This abstract concept is associated with the real functionality exposed, or made available, by the classes, which is implemented in terms of class methods (or, more generally, all of the public components: all public methods, but possibly also public fields, constants, and nested objects). The API in this case can be conceived as the totality of all the methods publicly exposed by the classes (usually called the class interface). This means that the API prescribes the methods by which one interacts with/handles the objects derived from the class definitions. More generally, one can see the API as the collection of all the kinds of objects one can derive from the class definitions, and their associated possible behaviors. Again, the use is mediated by the public methods, but in this interpretation the methods are seen as a technical detail of how the behavior is implemented. For instance, a class representing a Stack can simply expose publicly two methods: push() (to add a new item to the stack) and pop() (to extract the last item, ideally placed on top of the stack). In this case the API can be interpreted as the two methods pop() and push(), or, more generally, as the idea that one can use an item of type Stack that implements the behavior of a stack: a pile exposing its top for adding and removing elements. This concept can be carried to the point where a class interface in an API has no methods at all, but only behaviors associated with it. For instance, the Java API includes the interface Serializable, which requires that each class implementing it behave in a serialized fashion. This does not require the class to have any public method, but rather requires that any class implementing it have a representation that can be saved (serialized) at any time (this is typically true for any class containing simple data and no link to external resources, like an open connection to a file, a remote system, or an external device). Similarly, the behavior of an object in a concurrent (multi-threaded) environment is not necessarily determined by specific methods belonging to the interface implemented, but still belongs to the API for that class of objects, and should be described in the documentation [3]. In this sense, in object-oriented languages, the API defines a set of object behaviors, possibly mediated by a set of class methods. In such languages, the API is still distributed as a library. For example, the Java language libraries include a set of APIs that are provided in the form of the JDK used by developers to build new Java programs. The JDK includes the documentation of the API in JavaDoc notation. The quality of the documentation associated with an API is often a factor determining its success in terms of ease of use.

API libraries and frameworks
An API is usually related to a software library: the API describes and prescribes the expected behavior while the library is an actual implementation of this set of rules. A single API can have multiple implementations in the form of different libraries that share the same programming interface. An API can also be related to a software framework: a framework can be based on several libraries implementing several APIs, but unlike the normal use of an API, access to the behavior built into the framework is mediated by extending its content with new classes plugged into the framework itself. Moreover, the overall flow of control of the program can be out of the control of the caller and in the hands of the framework, via inversion of control or similar mechanisms.

API and protocols
An API can also be an implementation of a protocol. In general, the difference between an API and a protocol is that the protocol defines a standard way to exchange requests and responses based on a common transport and an agreed data/message exchange format, while an API is usually implemented as a library to be used directly: hence there may be no transport involved (no information physically transferred from/to some remote machine), but rather only simple information exchange via function calls (local to the machine where the elaboration takes place), with data exchanged in formats expressed in a specific language. When an API implements a protocol, it can be based on proxy methods for remote invocations that underneath rely on the communication protocol. The role of the API can be exactly to hide the detail of the transport protocol. E.g. RMI is an API that implements the JRMP protocol or, as RMI-IIOP, the IIOP protocol. Protocols are usually shared between different technologies (systems based on given programming languages in given operating systems) and usually allow the different technologies to exchange information, acting as an abstraction/mediation level between the two worlds. APIs, by contrast, are specific to a given technology: hence the APIs of a given language cannot be used in other languages, unless the function calls are wrapped with specific adaptation libraries.

Object API and protocols
An object API can prescribe a specific object exchange format, while an object exchange protocol can define a way to transfer the same kind of information in a message. When a message is exchanged via a protocol between two different platforms using objects on both sides, the object in one programming language can be transformed (marshalled and unmarshalled) into an object in a remote and different language: e.g. a program written in Java invokes a service written in C# via SOAP or IIOP; both programs use APIs for remote invocation (each locally to the machine where it is working) to (remotely) exchange information that they both convert from/to an object in local memory. By contrast, when a similar object is exchanged via an API local to a single machine, the object (or a reference to it) is effectively exchanged in memory.

API sharing and reuse via virtual machine
Some languages, like those running in a virtual machine (e.g. CLI-compliant languages in the Common Language Runtime and JVM-compliant languages in the Java Virtual Machine), can share APIs. In this case the virtual machine enables language interoperation, thanks to the common denominator of the virtual machine that abstracts from the specific language by using an intermediate byte code. Hence this approach maximizes the code reuse potential for all the existing libraries and related APIs.

Fluent API and DSL
An object-oriented API is said to be fluent when it aims to provide more readable code. Fluent APIs can be used to develop domain-specific languages [6].
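As a rough illustration of the fluent style just described (a sketch only; the Query class and its methods are hypothetical, not a real API):

// A tiny fluent API: each method returns `this`, so calls chain into readable expressions.
public class Query {
    private final StringBuilder sql = new StringBuilder("SELECT *");

    public Query from(String table)  { sql.append(" FROM ").append(table);    return this; }
    public Query where(String cond)  { sql.append(" WHERE ").append(cond);    return this; }
    public Query orderBy(String col) { sql.append(" ORDER BY ").append(col);  return this; }

    public String build() { return sql.toString(); }

    public static void main(String[] args) {
        // Fluent usage reads almost like the small query language it models.
        String q = new Query().from("restaurants").where("rating > 4").orderBy("name").build();
        System.out.println(q);   // SELECT * FROM restaurants WHERE rating > 4 ORDER BY name
    }
}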

Web APIs
When used in the context of web development, an API is typically a defined set of Hypertext Transfer Protocol (HTTP) request messages, along with a definition of the structure of response messages, usually in Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. While "Web API" is virtually a synonym for web service, the recent trend (so-called Web 2.0) has been moving away from Simple Object Access Protocol (SOAP) based services towards more direct Representational State Transfer (REST) style communications. Web APIs allow the combination of multiple services into new applications known as mashups.

Use of APIs to share content
The practice of publishing APIs has allowed web communities to create an open architecture for sharing content and data between communities and applications. In this way, content that is created in one place can be dynamically posted and updated in multiple locations on the web.
1. Photos can be shared from sites like Flickr and Photobucket to social network sites like Facebook and MySpace.
2. Content can be embedded, e.g. embedding a presentation from SlideShare on a LinkedIn profile.
3. Content can be dynamically posted. Sharing live comments made on Twitter with a Facebook account, for example, is enabled by their APIs.
4. Video content can be embedded on sites which are served by another host.
5. User information can be shared from web communities to outside applications, delivering new functionality to the web community that shares its user data via an open API. One of the best examples of this is the Facebook application platform. Another is the OpenSocial platform.

Implementations
The POSIX standard defines an API that allows a wide range of common computing functions to be written in a way such that they may operate on many different systems (Mac OS X and various Berkeley Software Distributions (BSDs) implement this interface); however, making use of this requires recompiling for each platform. A compatible API, on the other hand, allows compiled object code to function without any changes to the system implementing that API. This is beneficial to both software providers (who may distribute existing software on new systems without producing and distributing upgrades) and users (who may install older software on their new systems without purchasing upgrades), although this generally requires that various software libraries implement the necessary APIs as well. Microsoft has shown a strong commitment to a backward compatible API, particularly within its Windows API (Win32) library, such that older applications may run on newer versions of Windows using an executable-specific setting called "Compatibility Mode". Apple Inc. has shown less concern, breaking compatibility or implementing an API in a slower "emulation mode"; this allows greater freedom in development, at the cost of making older software obsolete. Among Unix-like operating systems, there are many related but incompatible operating systems running on a common hardware platform (particularly Intel 80386-compatible systems). There have been several attempts to standardize the API such that software vendors may distribute one binary application for all these systems; however, to date, none of these has met with much success. The Linux Standard Base is attempting to do this for the Linux platform, while many of the BSD Unixes, such as FreeBSD, NetBSD, and OpenBSD, implement various levels of API compatibility for both backward compatibility (allowing programs written for older versions to run on newer distributions of the system) and cross-platform compatibility (allowing execution of foreign code without recompiling).

Release policies
The two options for releasing an API are:

1. Protecting information on APIs from the general public. For example, Sony used to make its official PlayStation 2 API available only to licensed PlayStation developers. This enabled Sony to control who wrote PlayStation 2 games. This gives companies quality control privileges and can provide them with potential licensing revenue streams.
2. Making APIs freely available. For example, Microsoft makes the Microsoft Windows API public, and Apple releases its APIs Carbon and Cocoa, so that software can be written for their platforms.
A mix of the two behaviors can be used as well.

ABIs
The related term application binary interface (ABI) is a lower-level definition concerning details at the assembly language level. For example, the Linux Standard Base is an ABI, while POSIX is an API.

API examples
ASPI for SCSI device interfacing
Carbon and Cocoa for the Macintosh
DirectX for Microsoft Windows
EHLLAPI
Java APIs
OpenGL, a cross-platform graphics API
OpenAL, a cross-platform sound API
OpenCL, a cross-platform API for general-purpose computing on CPUs and GPUs
OpenMP, an API that supports multi-platform shared-memory multiprocessing programming in C, C++ and FORTRAN on many architectures, including UNIX and Microsoft Windows platforms
Simple DirectMedia Layer (SDL)
Talend, which integrates its data management with BPM from Bonita Open Solution
Windows API

Language bindings and interface generators
APIs that are intended to be used by more than one high-level programming language often provide, or are augmented with, facilities to automatically map the API to features (syntactic or semantic) that are more natural in those languages. This is known as language binding, and is itself an API. The aim is to encapsulate most of the required functionality of the API, leaving a "thin" layer appropriate to each language. Below are some interface generator tools which bind languages to APIs at compile time:
SWIG, an open-source bindings generator from many languages to many languages (typically compiled to scripted)
F2PY, a FORTRAN to Python interface generator

SUMMARY

An application programming interface (API) is the interface that a computer system, library or application provides in order to allow requests for services to be made of it by other computer programs, and/or to allow data to be exchanged between them.

Description
One of the primary purposes of an API is to describe how to access a set of functions - for example, an API might describe how to draw windows or icons on the screen using a library that has been written for that purpose. APIs, like most interfaces, are abstract. Software that may be accessed via a particular API is said to implement that API. For instance, a computer program can (and often must) use its operating system's API to allocate memory and access files. Many types of systems and applications implement APIs, such as graphics systems, databases, networks, web services, and even some computer games. An API is often part of a software development kit (SDK). An SDK may include an API as well as other tools and perhaps even some hardware, so the two terms are not strictly interchangeable. There are various design models for APIs. Interfaces intended for the fastest execution often consist of sets of functions, procedures, variables and data structures. However, other models exist as well, such as the interpreter used to evaluate expressions in ECMAScript/JavaScript, or the abstraction layer, which relieves the programmer from needing to know how the functions of the API relate to the lower levels of abstraction. The abstraction layer makes it possible to redesign or improve the functions within the API without breaking code that relies on it. Two general lines of policy exist regarding publishing APIs:
1. Some companies guard their APIs zealously. For example, Sony used to make its official PlayStation 2 API available only to licensed PlayStation developers. This is because Sony wanted to restrict how many people could write a PlayStation 2 game, and wanted to profit from them as much as possible. This is typical of companies who do not profit from the sale of API implementations (in this case, Sony broke even on the sale of PlayStation 2 consoles and even took a loss on marketing, instead making it up through game royalties created by API licensing). However, the PlayStation 3 is based entirely on open and publicly available APIs.
2. Other companies propagate their APIs freely. For example, Microsoft deliberately makes most of its API information public, so that software will be written for the Windows platform. The sale of third-party software then helps to sell copies of Microsoft Windows. This is typical of companies who profit from the sale of API implementations (in this case, Microsoft Windows, which is sold at a gain for Microsoft).
Some APIs, such as the ones standard to an operating system, are implemented as separate code libraries that are distributed with the operating system. Others require software publishers to integrate the API functionality directly into the application. This forms another distinction in the examples above. Microsoft Windows APIs come with the operating system for anyone to use. Software for embedded systems such as video game consoles generally falls into the application-integrated category. While an official PlayStation API document may be interesting to read, it is of little use without its corresponding implementation, in the form of a separate library or software development kit. An API that does not require royalties for access and usage is called "open". The APIs provided by free software (such as all software distributed under the GNU General Public License) are open by definition, since anyone can look into the source of the software and figure out the API. Although authoritative "reference implementations" usually exist for an API (such as Microsoft Windows for the Win32 API), there is nothing that prevents the creation of additional implementations. For example, most of the Win32 API can be provided under a UNIX system using software called Wine. It is generally lawful to analyze API implementations in order to produce a compatible one; this technique is called reverse engineering for the purposes of interoperability. However, the legal situation is often ambiguous, so care should be taken and legal counsel sought before such reverse engineering is carried out. For example, while APIs usually do not have an obvious legal status, they might include patents that may not be used until the patent holder gives permission.

CLASS BROWSER

History of Class Browsers
Most modern class browsers owe their origins to Smalltalk, one of the earliest object-oriented languages. The Smalltalk browser was a series of horizontally abutting panes at the top of a text editor window that listed the class hierarchy of the Smalltalk system. A class selected in one pane would list the subclasses of that class in the next pane to the right. For leaf classes, the final pane would list the class instance variables and allow them to be edited. Most succeeding object-oriented languages differed from Smalltalk in that they were compiled and executed in a discrete runtime environment, rather than being dynamically integrated into a monolithic system like the early Smalltalk environments. Nevertheless, the concept of a table-like or graphic browser for navigating a class hierarchy survived. With the popularity of C++ starting in the late 1980s, modern IDEs added class browsers, at first simply to navigate class hierarchies, and later to aid in the creation of new classes. With the introduction of Java in the mid-1990s, class browsers became an expected part of any graphic development environment.

Class Browsing in Modern IDEs
All major development environments supply some manner of class browser, including:

CodeWarrior for Microsoft Windows, Mac OS, and embedded systems
Microsoft Visual Studio
Eclipse
Borland JBuilder
IntelliJ IDEA
IBM WebSphere
Sun Microsystems Java Studio Creator
Apple Xcode for Mac OS X
Step Ahead Software's Javelin
NetBeans
Zeus IDE
KDevelop
ParcPlace Smalltalk
.NET Reflector

Modern class browsers fall into three general categories: the columnar browsers, the outline browsers, and the diagram browsers.

Columnar Browsers
Continuing the Smalltalk tradition, columnar browsers display the class hierarchy from left to right in a series of columns. Often the rightmost column is reserved for the instance methods or variables of the leaf class.

Outline Browsers
Systems with roots in Microsoft Windows tend to use an outline-form browser, often with colorful (if cryptic) icons to denote classes and their attributes.

Diagram Browsers
In the early years of the 21st century, class browsers began to morph into modeling tools, where programmers could not only visualize their class hierarchy as a diagram, but also add classes to their code by adding them to the diagram. Most of these visualization systems have been based on some form of the Unified Modeling Language.

Refactoring Class Browsers
As development environments add refactoring features, many of these features have been implemented in the class browser as well as in text editors. A refactoring browser can allow a programmer to move an instance variable from one class to another simply by dragging it in the graphical user interface, or to combine or separate classes using mouse gestures rather than a large number of text editor commands.

DEBUGGING

Finding and fixing bugs, or "debugging", has always been a major part of computer programming. Maurice Wilkes, an early computing pioneer, described his realization in the late 1940s that much of the rest of his life would be spent finding mistakes in his own programs. As computer programs grow more complex, bugs become more common and difficult to fix. Often programmers spend more time and effort finding and fixing bugs than writing new code. Software testers are professionals whose primary task is to find bugs or write code to support testing. On some projects, more resources can be spent on testing than on developing the program. Usually, the most difficult part of debugging is finding the bug in the source code. Once it is found, correcting it is usually relatively easy. Programs known as debuggers exist to help programmers locate bugs by executing code line by line, watching variable values, and offering other features to observe program behavior. Without a debugger, code can be added so that messages or values can be written to a console (for example with printf in the C language) or to a window or log file to trace program execution or show values; a short sketch of this technique appears at the end of this section. However, even with the aid of a debugger, locating bugs is something of an art. It is not uncommon for a bug in one section of a program to cause failures in a completely different, apparently unrelated section, thus making it especially difficult to track (for example, an error in a graphics rendering routine causing a file I/O routine to fail). Sometimes, a bug is not an isolated flaw, but represents an error of thinking or planning on the part of the programmer. Such logic errors require a section of the program to be overhauled or rewritten. As a part of code review, stepping through the code and modeling the execution process in one's head or on paper can often find these errors without ever needing to reproduce the bug as such, if it can be shown there is some faulty logic in its implementation. But more typically, the first step in locating a bug is to reproduce it reliably. Once the bug is reproduced, the programmer can use a debugger or some other tool to monitor the execution of the program in the faulty region, and find the point at which the program went astray. It is not always easy to reproduce bugs. Some are triggered by inputs to the program which may be difficult for the programmer to re-create. One cause of the Therac-25 radiation machine deaths was a bug (specifically, a race condition) that occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to duplicate it. Other bugs may disappear when the program is run with a debugger; these are heisenbugs (humorously named after the Heisenberg uncertainty principle). Debugging is still a tedious task requiring considerable effort. Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, there has been a renewed interest in the development of effective automated aids to debugging. For instance, methods of static code analysis by abstract interpretation have already made significant achievements, while still remaining very much a work in progress. As with any creative act, sometimes a flash of inspiration will show a solution, but this is rare and, by definition, cannot be relied on. There are also classes of bugs that have nothing to do with the code itself. If, for example, one relies on faulty documentation or hardware, the code may be written perfectly properly to what the documentation says, but the bug truly lies in the documentation or hardware, not the code. However, it is common to change the code instead of the other parts of the system, as the cost and time to change it is generally less. Embedded systems frequently have workarounds for hardware bugs, since making a new version of a ROM is much cheaper than remanufacturing the hardware, especially if the hardware components are commodity items.
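As a minimal illustration of the tracing technique mentioned above (writing intermediate values to the console when no debugger is at hand), here is a sketch in Java; the method and data are hypothetical:

public class TraceExample {
    static int accumulate(int[] values) {
        int total = 0;
        for (int i = 0; i < values.length; i++) {
            total += values[i];
            // Trace statement: record the loop index and running total at each step,
            // so the point where the program goes astray can be spotted in the log.
            System.err.println("accumulate: i=" + i + " value=" + values[i] + " total=" + total);
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(accumulate(new int[] {3, 5, 7}));   // expected: 15
    }
}

The same idea applies to printf in C or to writing into a log file; once the fault is localized, the trace statements are removed or turned into proper logging.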

Bug management
It is common practice for software to be released with known bugs that are considered non-critical, that is, bugs that do not affect most users' main experience with the product. While software products may, by definition, contain any number of unknown bugs, measurements during testing can provide an estimate of the number of likely bugs remaining; this estimate becomes more reliable the longer a product is tested and developed ("if we had 200 bugs last week, we should have 100 this week"). Most big software projects maintain two lists of "known bugs": those known to the software team, and those to be told to users. This is not dissimulation, but users are not concerned with the internal workings of the product. The second list informs users about bugs that are not fixed in the current release, or not fixed at all, and a workaround may be offered. There are various reasons for not fixing bugs:
The developers often don't have time, or it is not economical to fix all non-severe bugs.
The bug could be fixed in a new version or patch that is not yet released.
The changes to the code required to fix the bug could be large, expensive, or delay finishing the project.
Even seemingly simple fixes bring the chance of introducing new unknown bugs into the system. At the end of a test/fix cycle some managers may only allow the most critical bugs to be fixed.
Users may be relying on the undocumented, buggy behavior, especially if scripts or macros rely on that behavior; fixing it may introduce a breaking change.
It's "not a bug": a misunderstanding has arisen between expected and provided behavior.
Given the above, it is often considered impossible to write completely bug-free software of any real complexity. So bugs are categorized by severity, and low-severity non-critical bugs are tolerated, as they do not affect the proper operation of the system for most users. NASA's SATC managed to reduce the number of errors to fewer than 0.1 per 1000 lines of code (SLOC), but this was not felt to be feasible for most real-world projects. The severity of a bug is not the same as its importance for fixing, and the two should be measured and managed separately. On a Microsoft Windows system a blue screen of death is rather severe, but if it only occurs in extreme circumstances, especially if they are well diagnosed and avoidable, it may be less important to fix than an icon not representing its function well, which, though purely aesthetic, may confuse thousands of users every single day. This balance, of course, depends on many factors; expert users have different expectations from novices, a niche market is different from a general consumer market, and so on. A school of thought popularized by Eric S. Raymond as Linus's Law says that popular open-source software has more chance of having few or no bugs than other software, because "given enough eyeballs, all bugs are shallow".[12]

This assertion has been disputed, however: computer security specialist Elias Levy wrote that "it is easy to hide vulnerabilities in complex, little understood and undocumented source code," because "even if people are reviewing the code, that doesn't mean they're qualified to do so." Like any other part of engineering management, bug management must be conducted carefully and intelligently, because "what gets measured gets done" and managing purely by bug counts can have unintended consequences. If, for example, developers are rewarded by the number of bugs they fix, they will naturally fix the easiest bugs first, leaving the hardest, and probably most risky or critical, to the last possible moment ("I only have one bug on my list, but it says 'Make sun rise in West'"). If the management ethos is to reward the number of bugs fixed, then some developers may quickly write sloppy code knowing they can fix the bugs later and be rewarded for it, whereas careful, perhaps "slower" developers do not get rewarded for the bugs that were never there.

Security vulnerabilities
Malicious software may attempt to exploit known vulnerabilities in a system, which may or may not be bugs. Viruses are not bugs in themselves; they are typically programs that are doing precisely what they were designed to do. However, viruses are occasionally referred to as bugs in the popular press.

Common types of computer bugs
Conceptual errors (code is syntactically correct, but the programmer or designer intended it to do something else)
Maths bugs:
Division by zero
Arithmetic overflow or underflow
Loss of arithmetic precision due to rounding or numerically unstable algorithms
Logic bugs:
Infinite loops and infinite recursion
Syntax bugs:
Use of the wrong operator, such as performing assignment instead of an equality test. In simple cases this is often warned about by the compiler; in many languages, it is deliberately guarded against by language syntax.
Resource bugs:
Null pointer dereference
Using an uninitialized variable
Off-by-one error, counting one too many or too few when looping
Access violations
Resource leaks, where a finite system resource such as memory or file handles is exhausted by repeated allocation without release
Buffer overflow, in which a program tries to store data past the end of allocated storage. This may or may not lead to an access violation. These bugs can form security vulnerabilities.

Excessive recursion which, though logically valid, causes stack overflow
Co-programming bugs:
Deadlock
Race condition
Concurrency errors in critical sections, mutual exclusions and other features of concurrent processing. Time-of-check-to-time-of-use (TOCTOU) is a form of unprotected critical section.
Team working bugs:
Unpropagated updates; e.g. the programmer changes "myAdd" but forgets to change "mySubtract", which uses the same algorithm. These errors are mitigated by the Don't Repeat Yourself philosophy.
Comments out of date or incorrect: many programmers assume the comments accurately describe the code.
Differences between documentation and the actual product.

Need for debugging
Once errors are identified in program code, it is necessary to first identify the precise program statements responsible for the errors and then to fix them. Identifying errors in program code and then fixing them is known as debugging.

Debugging approaches
The following are some of the approaches popularly adopted by programmers for debugging.

Brute Force Method:
This is the most common method of debugging, but it is the least efficient. In this approach, the program is loaded with print statements to print intermediate values, with the hope that some of the printed values will help to identify the statement in error. This approach becomes more systematic with the use of a symbolic debugger (also called a source code debugger), because the values of different variables can be easily checked and breakpoints and watchpoints can be easily set to test the values of variables effortlessly.

Backtracking:
This is also a fairly common approach. In this approach, beginning from the statement at which an error symptom has been observed, the source code is traced backwards until the error is discovered. Unfortunately, as the number of source lines to be traced back increases, the number of potential backward paths increases and may become unmanageably large, thus limiting the use of this approach.

Cause Elimination Method:
In this approach, a list of causes which could possibly have contributed to the error symptom is developed and tests are conducted to eliminate each. A related technique for identification of the error from the error symptom is software fault tree analysis.

Program Slicing:

This technique is similar to backtracking. Here the search space is reduced by defining slices. A slice of a program for a particular variable at a particular statement is the set of source lines preceding this statement that can influence the value of that variable [Mund2002].

Debugging guidelines
Debugging is often carried out by programmers based on their ingenuity. The following are some general guidelines for effective debugging:
Many times debugging requires a thorough understanding of the program design. Trying to debug based on a partial understanding of the system design and implementation may require an inordinate amount of effort even for simple problems.
Debugging may sometimes even require a full redesign of the system. In such cases, a common mistake that novice programmers often make is attempting to fix the symptom rather than the underlying error.
One must beware of the possibility that an error correction may introduce new errors. Therefore, after every round of error-fixing, regression testing must be carried out.

Program analysis tools
A program analysis tool is an automated tool that takes the source code or the executable code of a program as input and produces reports regarding several important characteristics of the program, such as its size, complexity, adequacy of commenting, adherence to programming standards, etc. We can classify program analysis tools into two broad categories:
Static analysis tools
Dynamic analysis tools

Static program analysis tools
A static analysis tool is a program analysis tool that assesses and computes various characteristics of a software product without executing it. Typically, static analysis tools analyze some structural representation of a program to arrive at certain analytical conclusions, e.g. that some structural properties hold. The structural properties that are usually analyzed are:
Whether the coding standards have been adhered to.
Certain programming errors, such as uninitialized variables, mismatches between actual and formal parameters, and variables that are declared but never used.
Code walk-throughs and code inspections might be considered static analysis methods, but the term static program analysis is used to denote automated analysis tools, so a compiler can be considered to be a static program analysis tool.

Dynamic program analysis tools
Dynamic program analysis techniques require the program to be executed and its actual behavior recorded. A dynamic analyzer usually instruments the code (i.e. adds additional statements in the source code to collect program execution traces). The instrumented code, when executed, allows us to record the behavior of the software for different test cases. After the software has been tested with its full test suite and its behavior recorded, the dynamic analysis tool carries out a post-execution analysis and produces reports which describe the structural coverage that has been achieved by the complete test suite for the program. For example, the post-execution dynamic analysis report might provide data on the extent of statement, branch and path coverage achieved. Normally the dynamic analysis results are reported in the form of a histogram or a pie chart to describe the structural coverage achieved for different modules of the program. The output of a dynamic analysis tool can be stored and printed easily and provides evidence that thorough testing has been done. The dynamic analysis results indicate the extent of testing performed in white-box mode. If the testing coverage is not satisfactory, more test cases can be designed and added to the test suite. Further, dynamic analysis results can help to eliminate redundant test cases from the test suite.

Integration testing
The primary objective of integration testing is to test the module interfaces, i.e. to check that there are no errors in parameter passing when one module invokes another module. During integration testing, different modules of a system are integrated in a planned manner using an integration plan. The integration plan specifies the steps and the order in which modules are combined to realize the full system. After each integration step, the partially integrated system is tested. An important factor that guides the integration plan is the module dependency graph. The structure chart (or module dependency graph) denotes the order in which different modules call each other. By examining the structure chart, the integration plan can be developed.

Integration test approaches
There are four types of integration testing approaches. Any one (or a mixture) of the following approaches can be used to develop the integration test plan:
Big bang approach
Top-down approach
Bottom-up approach
Mixed approach

Big-Bang Integration Testing
This is the simplest integration testing approach, where all the modules making up a system are integrated in a single step. In simple words, all the modules of the system are simply put together and tested. However, this technique is practicable only for very small systems. The main problem with this approach is that once an error is found during the integration testing, it is very difficult to localize the error, as the error may potentially belong to any of the modules being integrated. Therefore, debugging errors reported during big bang integration testing is very expensive.

Bottom-Up Integration Testing
In bottom-up testing, each subsystem is tested separately and then the full system is tested. A subsystem might consist of many modules which communicate among each other through well-defined interfaces. The primary purpose of testing each subsystem is to test the interfaces among the various modules making up the subsystem. Both control and data interfaces are tested. The test cases must be carefully chosen to exercise the interfaces in all possible manners. Large software systems normally require several levels of subsystem testing; lower-level subsystems are successively combined to form higher-level subsystems. A principal advantage of bottom-up integration testing is that several disjoint subsystems can be tested simultaneously. In pure bottom-up testing no stubs are required, only test drivers. A disadvantage of bottom-up testing is the complexity that occurs when the system is made up of a large number of small subsystems. The extreme case corresponds to the big-bang approach.

Top-Down Integration Testing
Top-down integration testing starts with the main routine and one or two subordinate routines in the system. After the top-level skeleton has been tested, the immediately subordinate routines of the skeleton are combined with it and tested. The top-down integration testing approach requires the use of program stubs to simulate the effect of lower-level routines that are called by the routines under test. Pure top-down integration does not require any driver routines. A disadvantage of the top-down integration testing approach is that, in the absence of lower-level routines, it may become difficult to exercise the top-level routines in the desired manner, since the lower-level routines perform several low-level functions such as I/O.

Mixed Integration Testing
Mixed (also called sandwiched) integration testing follows a combination of the top-down and bottom-up testing approaches. In the top-down approach, testing can start only after the top-level modules have been coded and unit tested. Similarly, bottom-up testing can start only after the bottom-level modules are ready. The mixed approach overcomes this shortcoming of the top-down and bottom-up approaches: testing can start as and when modules become available. Therefore, this is one of the most commonly used integration testing approaches.

Phased vs. incremental testing
The different integration testing strategies are either phased or incremental. A comparison of these two strategies is as follows:
In incremental integration testing, only one new module is added to the partial system each time.

In phased integration, a group of related modules is added to the partial system each time.
Phased integration requires fewer integration steps than the incremental integration approach. However, when failures are detected, it is easier to debug the system in the incremental testing approach, since it is known that the error is caused by the addition of a single module. In fact, big bang testing is a degenerate case of the phased integration testing approach.

System testing
System tests are designed to validate a fully developed system to assure that it meets its requirements. There are essentially three main kinds of system testing:
Alpha Testing: Alpha testing refers to the system testing carried out by the test team within the developing organization.
Beta Testing: Beta testing is the system testing performed by a select group of friendly customers.
Acceptance Testing: Acceptance testing is the system testing performed by the customer to determine whether to accept the delivery of the system.
In each of the above types of tests, various kinds of test cases are designed by referring to the SRS document. Broadly, these tests can be classified into functionality and performance tests. The functionality tests check whether the software satisfies the functional requirements as documented in the SRS document. The performance tests check the conformance of the system with the non-functional requirements of the system.

Performance testing
Performance testing is carried out to check whether the system meets the non-functional requirements identified in the SRS document. There are several types of performance testing; nine of them are discussed below. The types of performance testing to be carried out on a system depend on the different non-functional requirements of the system documented in the SRS document. All performance tests can be considered black-box tests.
Stress testing
Volume testing
Configuration testing
Compatibility testing
Regression testing
Recovery testing
Maintenance testing
Documentation testing
Usability testing

Stress Testing

Stress testing is also known as endurance testing. Stress testing evaluates system performance when it is stressed for short periods of time. Stress tests are black-box tests which are designed to impose a range of abnormal and even illegal input conditions so as to stress the capabilities of the software. Input data volume, input data rate, processing time, utilization of memory, etc. are tested beyond the designed capacity. For example, suppose an operating system is supposed to support 15 multiprogrammed jobs; the system is stressed by attempting to run 15 or more jobs simultaneously. A real-time system might be tested to determine the effect of simultaneous arrival of several high-priority interrupts. Stress testing is especially important for systems that usually operate below the maximum capacity but are severely stressed at some peak demand hours. For example, if the non-functional requirement specification states that the response time should not be more than 20 seconds per transaction when 60 concurrent users are working, then during stress testing the response time is checked with 60 users working simultaneously.

Volume Testing
It is especially important to check whether the data structures (arrays, queues, stacks, etc.) have been designed to successfully handle extraordinary situations. For example, a compiler might be tested to check whether the symbol table overflows when a very large program is compiled.

Configuration Testing
This is used to analyze system behavior in the various hardware and software configurations specified in the requirements. Sometimes systems are built in variable configurations for different users. For instance, we might define a minimal system to serve a single user, and other extended configurations to serve additional users. The system is configured in each of the required configurations and it is checked whether the system behaves correctly in all of them.

Compatibility Testing
This type of testing is required when the system interfaces with other types of systems. Compatibility testing aims to check whether the interface functions perform as required. For instance, if the system needs to communicate with a large database system to retrieve information, compatibility testing is required to test the speed and accuracy of data retrieval.

Regression Testing
This type of testing is required when the system being tested is an upgrade of an already existing system, built to fix some bugs or enhance functionality, performance, etc. Regression testing is the practice of running an old test suite after each change to the system or after each bug fix, to ensure that no new bug has been introduced due to the change or the bug fix. However, if only a few statements are changed, then the entire test suite need not be run - only those test cases that test the functions likely to be affected by the change need to be run.

Recovery Testing

Recovery testing tests the response of the system to the presence of faults, or the loss of power, devices, services, data, etc. The system is subjected to the loss of the mentioned resources (as applicable and as discussed in the SRS document) and it is checked whether the system recovers satisfactorily. For example, the printer can be disconnected to check if the system hangs. Or the power may be shut down to check the extent of data loss and corruption.

Maintenance Testing
This testing addresses the diagnostic programs and other procedures that are required to be developed to help maintenance of the system. It is verified that the artifacts exist and that they perform properly.

Documentation Testing
It is checked that the required user manuals, maintenance manuals, and technical manuals exist and are consistent. If the requirements specify the types of audience for which a specific manual should be designed, then the manual is checked for compliance.

Usability Testing
Usability testing concerns checking the user interface to see if it meets all user requirements concerning the user interface. During usability testing, the display screens, report formats, and other aspects relating to the user interface requirements are tested.

Error seeding
Sometimes the customer might specify the maximum number of allowable errors that may be present in the delivered system. These are often expressed in terms of the maximum number of allowable errors per line of source code. Error seeding can be used to estimate the number of residual errors in a system. Error seeding, as the name implies, seeds the code with some known errors; in other words, some artificial errors are introduced into the program. The number of these seeded errors detected in the course of the standard testing procedure is determined. These values, in conjunction with the number of unseeded errors detected, can be used to predict:
The number of errors remaining in the product.
The effectiveness of the testing strategy.
Let N be the total number of defects in the system and let n of these defects be found by testing. Let S be the total number of seeded defects, and let s of these defects be found during testing. Assuming seeded and real defects are equally likely to be detected,

n / N = s / S, or N = (S x n) / s

Defects still remaining after testing = N - n = n x (S - s) / s
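As a worked example with purely hypothetical figures: suppose S = 100 errors are seeded and testing uncovers s = 50 of them along with n = 200 unseeded errors. Then

N = (S x n) / s = (100 x 200) / 50 = 400

Defects still remaining = N - n = 200 x (100 - 50) / 50 = 200

so roughly 200 latent defects would be estimated to remain in the product.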

Error seeding works satisfactorily only if the kind of seeded errors matches closely the kind of defects that actually exist. However, it is difficult to predict the types of errors that exist in software. To some extent, the different categories of errors that remain can be estimated to a first approximation by analyzing historical data from similar projects. Due to this shortcoming (the types of seeded errors should closely match the types of errors actually existing in the code), error seeding is useful only to a moderate extent.

Regression testing
Regression testing does not belong to unit testing, integration testing, or system testing. Instead, it is a separate dimension to these three forms of testing. The functionality of regression testing has been discussed earlier.

The following questions have been designed to test the objectives identified for this module:
1. What are the different ways of documenting program code? Which of these is usually the most useful while understanding a piece of code?
2. What is a coding standard? Identify the problems that might occur if the engineers of an organization do not adhere to any coding standard.
3. What is the difference between coding standards and coding guidelines? Why are these considered important in a software development organization?
4. Write down five important coding standards.
5. Write down five important coding guidelines.
6. What do you mean by side effects of a function call? Why are obscure side effects undesirable?
7. What is meant by code review? Why is it required to be completed before performing integration and system testing?
8. Identify the types of errors that can be detected during code walk-throughs.
9. Identify the types of errors that can be detected during code inspection.
10. What is clean room testing?
11. Why is it important to properly document a software product?

12. Differentiate between the external and internal documentation of a software product.
13. Identify the necessity of testing a software product.
14. Distinguish between error and failure. Which of these two does testing detect? Justify.
15. Differentiate between verification and validation in the context of software testing.
16. Is random selection of test cases effective? Justify.

17. Write down the major differences between functional testing and structural testing.
18. Do you agree with the statement: "The effectiveness of a testing suite in detecting errors in a system can be determined by examining the number of test cases in the suite"? Justify your answer.
19. What are driver and stub modules in the context of unit testing of a software product?
20. Given a software product and its requirements specification document, how can black-box test suites for this software be designed?
21. Identify two guidelines for the design of equivalence classes for a problem.
22. Explain why boundary value analysis is so important for the design of a black-box test suite for a problem.
23. Compare the features of stronger testing with the features of complementary testing.
24. Which is the strongest structural testing technique among statement coverage-based testing, branch coverage-based testing, and condition coverage-based testing? Why?
25. Discuss how the control flow graph (CFG) of a program helps in understanding the path coverage-based testing strategy.
26. Draw the control flow graph for the following function named find_maximum. From the control flow graph, determine its cyclomatic complexity.

int find_maximum(int i, int j, int k)
{
    int max;
    if (i > j)
        if (i > k)
            max = i;
        else
            max = k;
    else
        if (j > k)
            max = j;
        else
            max = k;
    return max;
}

27. What is the difference between a path and a linearly independent path in terms of the control flow graph (CFG) of a problem?
28. Define a metric from which the upper bound for the number of linearly independent paths of a program can be computed.
29. Consider the following C function named bin_search:

/* num is the number the function searches for in a presorted integer array arr */
int bin_search(int num)
{
    int min, max;
    min = 0;
    max = 100;
    while (min != max) {
        if (arr[(min+max)/2] > num)
            max = (min+max)/2;
        else if (arr[(min+max)/2] < num)
            min = (min+max)/2;
        else
            return (min+max)/2;
    }
    return -1;
}

Determine the cyclomatic complexity of the above function.

COMPONENT-BASED TECHNOLOGY

Component-based software development directly addresses the cost dimension, in that it tries to regard software construction more in terms of the traditional engineering disciplines, in which the assembly of systems from readily available prefabricated parts is the norm. This is in contrast with the traditional way of developing software, in which most parts are custom-designed from scratch. There are many motivations for why people allocate more and more effort toward introducing and applying component-based software construction technologies. Some expect increased return on investment, because the development costs of components are amortized over many uses. Others put forward increased productivity as an argument, because software reuse through assembly and interfacing enables the construction of larger and more complex systems in shorter development cycles than would otherwise be feasible. In addition, increased software quality is a major expectation in this area. These are all valid anticipations, and to some extent they have yet to be assessed and evaluated.

Component Definition
The fundamental building block of component-based software development is a component. On first thought it seems quite clear what a software component is supposed to be: it is a building block. But, on second thought, by looking at the many different contemporary component technologies and how they treat the term component, this initial clarity can easily give way to confusion. People have come up with quite a number of diverse definitions for the term component, and the following one is my personal favorite:
A component is a reusable unit of composition with explicitly specified provided and required interfaces and quality attributes that denotes a single abstraction and can be composed without modification.
This is based on the well-known definition of the 1996 European Conference on Object-Oriented Programming, which defines a component in the following way:
A component is a unit of composition, with contractually specified interfaces and context dependencies only, that can be deployed independently and is subject to composition by third parties.
A component is a logically cohesive, loosely coupled module. A component is a modular, deployable, and replaceable part of a system that encapsulates implementation and exposes a set of interfaces. But there are many other definitions that all focus on more or less similar properties of a component, for example:

A software component is an independently deliverable piece of functionality providing access to its services through interfaces. A software component is a software element that conforms to a component model and can be independently deployed and composed without modification according to a composition standard.

From the component definitions, we can actually derive a number of important properties for software components:

Composability is the primary property of software components, as the term implies, and it can be applied recursively: components make up components, which make up components, and so on.

Reusability is the second key concept in component-based software development. Development for reuse, on the one hand, is concerned with how components are designed and developed by a component provider. Development with reuse, on the other hand, is concerned with how such existing components may be integrated into a customer's component framework.

Having a unique identity requires that a component should be uniquely identifiable within its development environment as well as its runtime environment.

Modularity and encapsulation refer to the scoping property of a component as an assembly of services that are related through common data. Modularity is not defined through similar functionality, as is the case under the traditional development paradigms (i.e., a module as an entity with functional cohesion), but through access to the same data (i.e., data cohesion).

Interaction takes place through interface contracts; encapsulation and information hiding require an access mechanism to the internal workings of a component. Interfaces are the only means for accessing the services of a component, and they are based on mutual agreements on how to use the services, that is, on a contract.

Core Principles of Component-Based Development

Component Composition

Components are reusable units for composition. This statement captures the very fundamental concept of component-based development: that an application is made up and composed of a number of individual parts, and that these parts are specifically designed for integration in a number of different applications. It also captures the idea that one component may be part of another component [6], or part of a sub-system or system, both of which represent components in their own right. As a graphical representation, composition maps components into trees, with one component as the root of the parts from which it is composed, as shown in Fig. 1.1.

Fig. 1.1. Composition represented by a component nesting tree, or a so-called component containment hierarchy
Component Clientship

An additional important concept that is related to component composition and assembly is clientship. It is borrowed from object technology and subsumed by the component concept. Clientship, or the client/server relationship, is a much more fundamental concept than component composition. It represents the basic form of interaction between two objects in object-oriented systems. Without clientship, there is no concept of composition and no interaction between components. Clientship means that a client object invokes the public operations of an associated server object. Such an interaction is unidirectional, so that the client instance has knowledge of the server instance, typically through some reference value, but the server instance needs no knowledge of the client instance. A clientship relation defines a contract between the client and the server. A contract determines the services that the server promises to provide to the client if the client promises to use the server in its expected way. If one of the two parties fails to deliver the promised properties, it breaks the contract, and the relation fails. This typically leads to an error in the clientship association. A composite is usually the client of its parts. A graphical representation of clientship forms arbitrary graphs, since clientship is not dependent on composition. This is indicated through the "acquires" relationship in Fig. 1.1. It means that Sub-subcomponent 2.1 acquires the services of Sub-subcomponent 2.2, thereby establishing the client/server relationship. Clientship between contained components in a containment hierarchy is represented by the anchor symbols in Fig. 1.1. The minimal contract between two such entities is that the client, or the containing component, at least needs to invoke the constructor of its servers, or the contained components.
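As a rough illustration only (the Account and CashDesk names are invented for this sketch and do not come from the text), clientship can be pictured in plain C as a client that holds a reference to its server and invokes the server's public operation, while the server knows nothing about the client:

#include <stdio.h>

/* Hypothetical server component: exposes one public operation and
 * knows nothing about who calls it. */
typedef struct {
    int balance;
} Account;

void account_deposit(Account *server, int amount)   /* public operation */
{
    server->balance += amount;
}

/* Hypothetical client component: keeps a reference to its server
 * (unidirectional knowledge) and honours the contract by calling the
 * operation only in the agreed way (non-negative amounts). */
typedef struct {
    Account *server;     /* the client knows the server;            */
} CashDesk;              /* the server does not know the client.    */

void cashdesk_take_payment(CashDesk *client, int amount)
{
    if (amount >= 0)                            /* client's side of the contract */
        account_deposit(client->server, amount);
}

int main(void)
{
    Account acc = { 0 };        /* the composite first "constructs" (initializes) its server */
    CashDesk desk = { &acc };
    cashdesk_take_payment(&desk, 100);
    printf("balance = %d\n", acc.balance);
    return 0;
}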

Component Interfaces

A component's syntax and semantics are determined through its provided and required interfaces. The provided interface is a collection of functionality and behavior that collectively define the services that a component provides to its associated clients. It may be seen as the entry point for controlling the component, and it determines what a component can do. The required interface is a collection of functionality and behavior that the component expects to get from its environment to support its own implementation. Without correct support from its servers at its required interface, the component cannot guarantee correct support of its clients at its provided interface. If we look at a component from the point of view of its provided interface, it takes the role of a server. If we look at it from the point of view of its required interface, the component takes the role of a client. Provided and required interfaces define a component's provided and required contracts. These concepts are illustrated in Fig. 1.2.
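A hedged C sketch of the same idea (the counter and storage operations are illustrative assumptions, not part of the text): the provided interface lists the operations the component offers, and the required interface lists the operations it expects its environment to supply before it can honour its provided contract.

#include <stdio.h>

/* Required interface: what the component expects from its environment. */
typedef struct {
    int  (*load)(void);              /* fetch the last stored counter value */
    void (*store)(int value);        /* persist a counter value             */
} StorageRequired;

/* Provided interface: what the component offers to its clients. */
typedef struct {
    void (*increment)(void);
    int  (*value)(void);
} CounterProvided;

/* A trivial environment that satisfies the required interface. */
static int saved = 0;
static int  env_load(void)       { return saved; }
static void env_store(int value) { saved = value; }

/* The component implementation, wired to its required interface. */
static StorageRequired storage;    /* filled in at composition time */
static void counter_increment(void) { storage.store(storage.load() + 1); }
static int  counter_value(void)     { return storage.load(); }

int main(void)
{
    /* Composition: connect the required interface, then expose the provided one. */
    storage.load  = env_load;
    storage.store = env_store;
    CounterProvided counter = { counter_increment, counter_value };

    counter.increment();
    counter.increment();
    printf("counter = %d\n", counter.value());   /* prints: counter = 2 */
    return 0;
}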

Fig. 1.2. UML-style representation of components with provided and required interfaces

Quality Attributes

Quality attributes have the same meaning for the non-functional aspects of a component that interfaces have for the functional and behavioral aspects of the component. Quality attributes define additional requirements of components, such as dependability and performance.

Quality Documentation

The documentation can be seen as part of a component's specification, or a refinement of its specification. Sometimes a pure specification may be too abstract, so that it is difficult for users to see and understand how a component's operations may be called or can be applied. Documentation is particularly useful for describing how sequences and combinations of operation invocations add to the overall behavior. The documentation provides deeper

insight into how a component may be used in typical contexts and for typical usage profiles that the provider of the component had anticipated.

Persistent Component State

An additional requirement of Szyperski's component definition that is often cited in the literature is that a component may not have a persistent state. It means that whenever a component is integrated in a new application, it is not supposed to have any distinct internal variable settings that result from previous operation invocations by clients of another context. This requires that a runtime component, a so-called component instance, will always be created and initialized before it is used. However, this is not practical for highly dynamic component systems such as Web services, which may be assembled and composed of already existing component instances that are acquired during runtime. Having a persistent state does not by itself prevent a component from being integrated into a running system; what matters is that the state is well defined and expected, so that the component can be incorporated into a running application. The fact that a component may already have a state must be defined a priori, and it is therefore a fundamental part of the underlying clientship relation between two components. The invocation of a constructor operation, for example, represents a transition into the initial state of a component. This is also a well-defined situation that the client must know about in order to cooperate with the component correctly. In this respect, it may also be seen as a persistent state that the client must be aware of.

Component Meta-model

The previous paragraphs have briefly described the basic properties of a component and component-based development. The following paragraphs summarize the items that make up a component and draw a more complete picture of component concepts. I also introduce the notion of a UML component meta-model, which will be extended over the course of this book, to illustrate the relations between these concepts. Figure 1.3 summarizes the concepts of a component and their relations in the form of a UML meta-model. It is a meta-model, a model of a model, because it does not represent or describe a physical component but only the concepts from which physical components are composed. The diagram defines a component as having at most one provided interface and one required interface. These two interfaces entirely distinguish this component from any other particular component. The provided interface represents everything that the component is providing to its environment (its clients) in terms of services, and the required interface represents everything that the component expects to get from its environment in order to offer its services. This expectation is represented by the other associated (sub-)components that the subject component depends upon, or by the underlying runtime environment.

Provided and required interfaces must be public, as indicated through the UML stereotype "public". Otherwise we cannot sensibly integrate the component into an application, because we do not know how the component will be connected with other components in its environment. Provided and required interfaces are also referred to as export and import interfaces.

Fig. 1.3. Component meta-model

We define a software component as: a software item with a discrete structure, for which a separate specification is available. It has a defined, precise behaviour; it is a black box with a defined and documented service interface; and it may have a specific usage context (domain).

Software components may be used by various mechanisms, e.g.:
Cut and paste code; possible subsequent modification affects the component.
Subroutine libraries; textual importation, linking or calling of components, usually without subsequent modification (black-box usage).
Object-oriented technology (inheritance!); usage of objects, classes and frameworks.

The differences between component and application testing can be clarified further by referring to the following classification of test activities.

1) Unit testing (or module testing) will generally be part of component development. Unit testing during component development should be based on relevant requirements and conventions that are provided by domain analysis.
2) Integration testing. If a component consists of more than one unit, their interaction should obviously be tested as part of component testing. But the interaction with other components in the component base (library) should also be part of component testing. For this purpose, domain analysis should also provide a clear execution model of components in the domain. Furthermore, during application development, integration testing should be conducted for units that do not originate from component development (a small driver and stub sketch follows at the end of this subsection).
3) Functional testing is not relevant for software components. The environment and the requirements specifications (and contracts) that are necessary to test the complete functionality are not known. The principles of testing and the practical experience in the field of testing can only be applied partly in component development. This means that software testing has to be redefined within the software development process when a component policy is implemented. In the next sections we will redefine component testing as component evaluation.

The needs of software developers with respect to reusable components are represented by the following global requirements: the function and behaviour of the component must be perfectly clear; the operation of the component must be dependable and sufficiently verified; and the component's behaviour should not be a burden to performance and resource utilisation. Subsequently, our objective should be to elaborate these rather vague requirements into a complete and consistent set of detailed, concrete and measurable requirements for component testing.
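Returning to the unit-testing discussion above, the following is a minimal, hedged C sketch (the convert_to_euro function and the exchange-rate service it requires are invented for illustration): the stub stands in for a missing server component, and the driver exercises the unit and checks the results.

#include <assert.h>
#include <stdio.h>

/* Required interface of the unit under test: in the real application this
 * would be provided by another component; here it is replaced by a stub. */
double get_exchange_rate(const char *currency);      /* declaration only */

/* Unit under test: converts an amount into euro using the required service. */
double convert_to_euro(const char *currency, double amount)
{
    return amount * get_exchange_rate(currency);
}

/* Stub: stands in for the missing server component and returns a
 * predictable value so the expected test outcome is known in advance. */
double get_exchange_rate(const char *currency)
{
    (void)currency;
    return 2.0;                      /* fixed, test-friendly rate */
}

/* Driver: exercises the unit with selected inputs and checks the results. */
int main(void)
{
    assert(convert_to_euro("USD", 10.0) == 20.0);
    assert(convert_to_euro("GBP",  0.0) ==  0.0);
    printf("all unit tests passed\n");
    return 0;
}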

Quality characteristics of software components The international standard ISO 9126 states that the quality of software products is determined by six quality characteristics: functionality, reliability,

usability, efficiency, maintainability and portability. For these characteristics, subcharacteristics are also specified. Now we will discuss the various characteristics and subcharacteristics with respect to software components.

Functionality (the provided service) is directly applicable to components, and clearly essential to the component's reusability. Domain analysis may provide criteria for evaluation to some extent. General (and a priori assessable) evaluation criteria may be determined for accuracy, interoperability (interpreted as referring to the interaction between components, and assessable by standardised API requirements) and especially for compliance (conformity to normative material). Since the component user, the application developer, will ultimately decide whether the functionality (or in fact the complete quality spectrum) suits his needs, suitability evaluation will mainly be limited to verification and validation of the specification. Security is an example of a subcharacteristic that does NOT generally apply to a component; only specific components may contribute to the security functions of an application. Therefore, security only has to be evaluated if the component's service is aimed at security. Security assessment should be left to application testing.

Reliability is also directly applicable to components, and crucial to the component's reusability. Reliability should be adequately demonstrated by sufficient testing, usually indirectly, through conformity to a relevant set of requirements and conventions. Direct, dynamic testing of reliability will usually not be feasible for components. This consequently applies to maturity as well. The subcharacteristic fault tolerance is of particular importance to component testing, and its assessment may be a fair replacement for direct maturity testing (e.g. MTTF). Recoverability is hardly interpretable at the component level, and should only be evaluated (as functionality) if the component's functionality (directly) contributes to recoverability at application level. Generally, recoverability assessment should be left to application testing.

Usability is perhaps the best example of a characteristic that has a completely different meaning for a component. A component is not used by the end-user (the ultimate customer), but by the application developer (the immediate customer). Only incidentally may a component contribute to the usability of the application (e.g. an element of the user interface), but this should be regarded as the component's functionality and consequently evaluated as such! The actual usability of a component should be interpreted

as the capability of the component to be used by the application developer to construct an application. And that is exactly what we have already defined as the aim of our component reusability evaluation as a whole! It would make no sense to include some usability assessment in our component (re)usability evaluation, which would result in internal recursivity. The usability subcharacteristics understandability, learnability and operability consequently have a different meaning too when referring to a component.

Efficiency can be a critical characteristic, with respect to time or resource constraints for the applications. For instance, in embedded software, resource behaviour (e.g. memory usage) may be a bottleneck, or time behaviour (e.g. response time) in online systems. On the other hand, in the absence of the surrounding application, we cannot decide whether our component fits in the complete puzzle. Therefore, at component evaluation time we can only require these characteristics to be adequately described, to enable the application developer to make that decision.

Maintainability would at first sight generally not be a very critical characteristic for components: one of the main principles of our component policy is the availability of components that are of high quality and can be confidently reused without preceding modifications. On the other hand, the domain characteristics (e.g. a very dynamic environment) may still require frequent adjustments to the components to follow the changes in the real world. Moreover, good maintainability usually relates strongly to high quality; particularly the subcharacteristics analysability, changeability and testability can also be associated with the clarity and reliability of the component. The subcharacteristic stability seems to be more appropriate for the application level, and is expected to be enhanced by component-based software manufacturing.

Portability may or may not be important to a software-developing company, depending on its market strategy. If applications are sold for various platforms, portability of components will be important. And of course, the scope of reusability will be broadened by portability: the subcharacteristics adaptability and conformance both relate directly to reusability in this sense. Replaceability is also an appropriate subcharacteristic for components, while installability seems to apply to the application level only.

The discussion above shows that software quality characteristics, as defined in ISO 9126, have to be re-interpreted and adapted (to some extent) to make them applicable in the context of software component development and evaluation. Consequently, appropriate metrics and measurements can be determined to carry out the component evaluation.

1. Define software component.
A software component is a system element offering a predefined service and able to communicate with other components.
2. Specify the characteristics of an object.
An object is a unit of instantiation and has a unique identity. It may have state, and this state can be externally observable.
3. What is a prototype object?
The object may be implicitly available in the form of an object that already exists. Such a pre-existing object is called a prototype object.
4. What are factory objects and factory methods?
Factory objects: a factory can be an object in its own right. Factory methods: methods on objects that return freshly created other objects are another variation.
5. What are modules?
Modules do not have a concept of instantiation, whereas classes do. Modules can be, and always have been, used to package multiple entities.
6. Specify the non-technical aspects that are needed in interfaces.
A component has multiple interfaces, each representing a service that the component offers. Redundant introductions of similar interfaces need to be minimized. This requires a small number of widely accepted unique naming schemes.
7. Define callback.
Callbacks are a common feature in procedural libraries that have to handle asynchronous events (a small sketch follows this list).
8. What is component architecture?
Component architecture is the pivotal basis of any large-scale software technology and is of utmost importance for component-based systems.
9. Specify some cornerstones of component architecture.
Interaction between components and their environment is regulated. The roles of components are defined. Tool interfaces are standardized.
10. Specify the roles of architecture.
An architecture needs to create simultaneously the basis for independence and cooperation. An architecture defines overall invariants. It needs to be based on the principal considerations of overall functionality. It prescribes proper frameworks for all involved mechanisms.
11. What is the use of the conceptual level?
A component framework is a dedicated and focused architecture, usually around a few key mechanisms, and a fixed set of policies for mechanisms at the component level.
12. Define component framework.
A component framework is a dedicated and focused architecture, usually around a few key mechanisms, and a fixed set of policies for mechanisms at the component level.
13. What is a resource?
A resource is a frozen collection of typed items.
14. Define middleware.
Middleware is a name for the set of software that sits between various operating systems and a higher distributed programming platform.
15. Categorize middleware.
Message-oriented middleware (MOM). Object-oriented middleware (OOM).
16. What is generative programming?
Generative programming aims at a transformational approach to the construction of software.
17. Specify the areas where generative approaches are used.
Used to produce individual components. Used to enhance composed systems.
18. Specify the criteria that a software component must fulfil.
Multiple uses. Non-context specific. Composable with other components. Encapsulated.
19. Specify the fundamental properties of component technology.
If a component fails to function, it must not violate system-wide rules. Software development processes that do not depend on testing. Performance of a component system is affected in non-trivial ways by the actual composition.
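As a minimal, hedged C sketch of the callback idea from item 7 above (register_handler, simulate_event, and on_event are invented names, not taken from any particular library): the library stores a function pointer supplied by the client and invokes it when the asynchronous event occurs.

#include <stdio.h>
#include <stddef.h>

/* Type of the callback the (hypothetical) library expects. */
typedef void (*event_handler)(int event_code);

/* Library side: stores the handler and calls it back when something happens. */
static event_handler registered_handler = NULL;

void register_handler(event_handler h)
{
    registered_handler = h;
}

void simulate_event(int code)            /* stands in for a real asynchronous event */
{
    if (registered_handler != NULL)
        registered_handler(code);        /* the actual callback */
}

/* Client side: supplies the function to be called back. */
void on_event(int event_code)
{
    printf("event %d handled\n", event_code);
}

int main(void)
{
    register_handler(on_event);
    simulate_event(42);
    return 0;
}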

User Interface
Interface types
Computer user interfaces
o Menu systems
o Command languages
o Forms
o Natural language
o Direct manipulation
o Graphical user interfaces

Touchscreen
Voice
Information display
static displays
motion displays
interactive displays
XML considerations
Telephony interfaces
o Voice
o Touch-tone
Human factors
Cognitive principles
o Perception
o Memory
o Problem solving
Understanding the user
Designing for humans
o Affordances
o Conceptual models
o Feedback
o Constraints
o Mapping
o Stages of action
Ergonomics
Human-centered software evaluation
Setting goals for evaluation
Evaluation without users
o Walkthroughs
o Keystroke Level Model analysis (KLM)
o Guidelines
o Standards
International
Operating system
Accessibility
o Style guides
o Heuristics
Evaluation with users
o Usability testing
o Interview
o Survey
o Experiment
Human-centered software development
Approaches, characteristics, and overview of process
User centered design methods
Functionality and usability
o Task analysis

o Scenarios
o Use cases
o Interviews
o Surveys
o Matching interface elements to user requirements
Specifying interaction and presentation
Prototyping techniques and tools
o Paper storyboards
o Inheritance and dynamic dispatch
o Prototyping languages and GUI builders
Understanding the users
o Profiles
o Personas
o Understanding the user experience
Human interaction styles
Localization
Globalization
Accessibility requirements
* Americans with Disabilities Act (ADA)
* Designing for aging population
Graphical user-interface design
Choosing interaction styles and interaction techniques
HCI aspects of common widgets
HCI aspects of screen design: layout, color, fonts, labeling
o Layout
o Color
o Fonts
o Labeling
o Consistency
Handling human failure
Beyond simple screen design
o Visualization
o Representation
o Metaphor
o Anchoring
Multi-modal interaction
o Graphics
o Sound
o Auditory feedback
o Haptics
3D interaction and virtual reality; artificial and augmented realities
Graphical user-interface programming
User Interface Management Systems (UIMS)
Dialogue independence and level of analysis
Seeheim model
Widget classes
Event management and user interaction

Geometry management
GUI builders and UI programming environments
Cross-platform design
HCI aspects of multimedia systems
Categorization and architectures of information: hierarchies, hypermedia
Information retrieval and human performance
o Web search
o Usability of database query languages
o Graphics
o Sound
Methodologies and techniques
Modeling
Signal analysis, synthesis, and processing
HCI design of multimedia information systems
o Animations
o Hypertext
Hypertext architectures
Hypertext navigation and maps
o Audio
o Video
Speech recognition and natural language processing
Information appliances and mobile computing
HCI aspects of collaboration and communication
Groupware to support specialized tasks
o Document preparation
o Multi-player games
o Computer-supported collaborative work
Asynchronous group communication
o E-mail
o Bulletin boards
Synchronous group communication
o Chat rooms
o Conferencing
Online communities: MUDs/MOOs
Software characters and intelligent agents
Interface technologies
Graphics output devices and their properties
Graphics primitives and their properties
Graphics software systems; general graphics standards
Architecture of window managers and user interfaces
Architecture of toolboxes and programming support environments
Representation of graphic data and sound

CRITERIA FOR A GOOD SPECIFICATION - The four problem categories above can be used to develop criteria for specifications that will enhance

communication of important design information between suppliers and integrators.
1. Clarity: Requirements must be unambiguous. System engineers, software engineers, domain experts, and managers should all be capable of reading the specification and deriving the same understanding of what the software component is to do.
2. Correctness: A requirements specification should accurately describe the software to be built. Initially, this means that the specification should lend itself to validation techniques such as simulation, analysis, and review. Later, this means that changes to the requirements should be easy to record in the requirements specification, not simply applied to the software and forgotten.
3. Completeness: Incomplete requirements are a repeated cause of incidents and accidents. In this context, a software specification is complete when a reader can distinguish between acceptable and unacceptable software implementations.
4. Conciseness: Software requirements specifications should contain only as much information as necessary to describe the relationship between inputs to the software and the outputs the software produces. Systems engineers may think of this as describing the transfer function for the software. A completely black-box view of the behavior of the software allows software developers the freedom to meet project goals. Additional information about the design of the software hampers safety analysis efforts.

A specification that meets the above criteria will enhance communication between system integrators and the suppliers of their software. Safety constraints on the software design are far easier for software suppliers to use in a specification with the above properties. Such a specification also makes it easier to verify that the software developed enforces the safety constraints. These same characteristics are helpful in other situations as well. If system integration and software development are within the same organization, clarity, correctness, completeness, and conciseness in the software specification will still benefit the project. Specifications that can be read and understood across engineering disciplines facilitate communication between software developers, system engineers, and safety engineers. A high-quality specification is also an asset when a system is passed from research to production, leading to reduced costs and faster time to market.

Evaluation of Software Architecture

How can you be sure whether the architecture chosen for your software is the right one? How can you be sure that it won't lead to calamity but instead will pave the way to smooth development and a successful product?

It's not an easy question, and a lot rides on the outcome. The foundation for any software system is its architecture. The architecture will allow or preclude just about all of a system's quality attributes. Modifiability, performance, security, availability, reliability: all of these are precast once the architecture is laid down. No amount of tuning or clever implementation tricks will wring any of these qualities out of a poorly architected system. To put it bluntly, an architecture is a bet, a wager on the success of a system. Wouldn't it be nice to know in advance if you've placed your bet on a winner, as opposed to waiting until the system is mostly completed before knowing whether it will meet its requirements or not? If you're buying a system or paying for its development, wouldn't you like to have some assurance that it's started off down the right path? If you're the architect yourself, wouldn't you like to have a good way to validate your intuitions and experience, so that you can sleep at night knowing that the trust placed in your design is well founded? Until recently, there were almost no methods of general utility to validate a software architecture. If performed at all, the approaches were spotty, ad hoc, and not repeatable. Because of that, they weren't particularly trustworthy. We can do better than that. This is a guidebook to software architecture evaluation. It is built around a suite of three methods, all developed at the Software Engineering Institute, that can be applied to any software-intensive system:

ATAM: Architecture Tradeoff Analysis Method
SAAM: Software Architecture Analysis Method
ARID: Active Reviews for Intermediate Designs

The methods as a group have a solid pedigree, having been applied for years on dozens of projects of all sizes and in a wide variety of domains. With these methods, the time has come to include software architecture evaluation as a standard step of any development paradigm. Evaluations represent a wise risk-mitigation effort and are relatively inexpensive. They pay for themselves in terms of costly errors and sleepless nights avoided. Whereas the previous chapter introduced the concept of software architecture, this chapter lays the conceptual groundwork for architectural evaluation. It defines what we mean by software architecture and explains the kinds of properties for which an architecture can (and cannot) be evaluated. First, let's restate what it is we're evaluating:

The software architecture of a program or computing system is the structure or structures of the system, which comprise software components, the externally visible properties of those components, and the relationships among them. [Bass 98] By "externally visible" properties, we are referring to those assumptions other components can make of a component, such as its provided services, performance characteristics, fault handling, shared resource usage, and so on. The intent of this definition is that a software architecture must abstract some information about the system (otherwise there is no point in looking at the architecture; we are simply viewing the entire system) and yet provide enough information to be a basis for analysis, decision making, and hence risk reduction (see the sidebar What's Architectural?). The architecture defines the components (such as modules, objects, processes, subsystems, compilation units, and so forth) and the relevant relations (such as calls, sends-data-to, synchronizes-with, uses, depends-on, instantiates, and many more) among them. The architecture is the result of early design decisions that are necessary before a group of people can collaboratively build a software system. The larger or more distributed the group, the more vital the architecture is (and the group doesn't have to be very large before the architecture is vital). One of the insights about architecture from Chapter 1 that you must fully embrace before you can understand architecture evaluation is this: Architectures allow or preclude nearly all of the system's quality attributes.

What's Architectural?

Sooner or later everyone asks the question: "What's architectural?" Some people ask out of intellectual curiosity, but people who are evaluating architectures have a pressing need to understand what information is in and out of their realm of concern. Maybe you didn't ask the question exactly that way. Perhaps you asked it in one of the following ways:
What is the difference between an architecture and a high-level design?
Are details such as priorities of processes architectural?
Why should implementation considerations such as buffer overflows be treated as architectural?
Are interfaces to components part of the architecture?
If I have class diagrams, do I need anything else?
Is architecture concerned with run-time behavior or static structure?

Is the operating system part of the architecture? Is the programming language?
If I'm constrained to use a particular commercial product, is that architectural?
If I'm free to choose from a wide range of commercial products, is that architectural?

Let's think about this in two ways. First, consider the definition of architecture that we quoted in Chapter 1 of this book. Paraphrasing: A software architecture concerns the gross organization of a system described in terms of its components, their externally visible properties, and the relationships among them. True enough, but it fails to explicitly address the notion of context. If the scope of my concern is confined to a subsystem within a system that is part of a system of systems, then what I consider to be architectural will be different than what the architect of the system of systems considers to be architectural. Therefore, context influences what's architectural. Second, let's ask, what is not architectural? It has been said that algorithms are not architectural; data structures are not architectural; details of data flow are not architectural. Well, again these statements are only partially true. Some properties of algorithms, such as their complexity, might have a dramatic effect on performance. Some properties of data structures, such as whether they need to support concurrent access, directly impact performance and reliability. Some of the details of data flow, such as how components depend on specific message types or which components are allowed access to which data types, impact modifiability and security, respectively. So is there a principle that we can use in determining what is architectural? Let's appeal to what architecture is used for to formulate our principle. Our criterion for something to be architectural is this: It must be a component, or a relationship between components, or a property (of components or relationships) that needs to be externally visible in order to reason about the ability of the system to meet its quality requirements or to support decomposition of the system into independently implementable pieces. Here are some corollaries of this principle: Architecture describes what is in your system. When you have determined your context, you have determined a boundary that describes what is in and what is out of your system (which might be someone else's subsystem). Architecture describes the part that is in. An architecture is an abstract depiction of your system. The information in an architecture is the most abstract and yet meaningful depiction of that aspect of the system. Given your architectural specification, there should not be a need for a more abstract description. That is not to say

that all aspects of architecture are abstract, nor is it to say that there is an abstraction threshold that needs to be exceeded before a piece of design information can be considered architectural. You shouldn't worry if your architecture encroaches on what others might consider to be a more detailed design. What's architectural should be critical for reasoning about critical requirements. The architecture bridges the gap between requirements and the rest of the design. If you feel that some information is critical for reasoning about how your system will meet its requirements, then it is architectural. You, as the architect, are the best judge. On the other hand, if you can eliminate some details and still compose a forceful argument through models, simulation, walk-throughs, and so on about how your architecture will satisfy key requirements, then those details do not belong. However, if you put too much detail into your architecture, then it might not satisfy the next principle. An architectural specification needs to be graspable. The whole point of a gross-level system depiction is that you can understand it and reason about it. Too much detail will defeat this purpose. An architecture is constraining. It imposes requirements on all lower-level design specifications. I like to distinguish between when a decision is made and when it is realized. For example, I might determine a process prioritization strategy, a component redundancy strategy, or a set of encapsulation rules when designing an architecture; but I might not actually make priority assignments, determine the algorithm for a redundant calculation, or specify the details of an interface until much later. In a nutshell: To be architectural is to be the most abstract depiction of the system that enables reasoning about critical requirements and constrains all subsequent refinements. If it sounds like finding all those aspects of your system that are architectural is difficult, that is true. It is unlikely that you will discover everything that is architectural up front, nor should you try. An architectural specification will evolve over time as you continually apply these principles in determining what's architectural.

2.1 Why Evaluate an Architecture?

The earlier you find a problem in a software project, the better off you are. The cost to fix an error found during requirements or early design phases is orders of magnitude less than the cost of fixing the same error found during testing. Architecture is the product of the early design phase, and its effect on the system and the project is profound.

An unsuitable architecture will precipitate disaster on a project. Performance goals will not be met. Security goals will fall by the wayside. The customer will grow impatient because the right functionality is not available, and the system is too hard to change to add it. Schedules and budgets will be blown out of the water as the team scrambles to back-fit and hack their way through the problems. Months or years later, changes that could have been anticipated and planned for will be rejected because they are too costly. Plagues and pestilence cannot be too far behind. Architecture also determines the structure of the project: configuration control libraries, schedules and budgets, performance goals, team structure, documentation organization, and testing and maintenance activities all are organized around the architecture. If it changes midstream because of some deficiency discovered late, the entire project can be thrown into chaos. It is much better to change the architecture before it has been frozen into existence by the establishment of downstream artifacts based on it. Architecture evaluation is a cheap way to avoid disaster. The methods in this book are meant to be applied while the architecture is a paper specification (of course, they can be applied later as well), and so they involve running a series of simple thought experiments. They each require assembling relevant stakeholders for a structured session of brainstorming, presentation, and analysis. All told, the average architecture evaluation adds no more than a few days to the project schedule. To put it another way, if you were building a house, you wouldn't think of proceeding without carefully looking at the blueprints before construction began. You would happily spend the small amount of extra time because you know it's much better to discover a missing bedroom while the architecture is just a blueprint, rather than on moving day. 2.2 When Can an Architecture Be Evaluated? The classical application of architecture evaluation occurs when the architecture has been specified but before implementation has begun. Users of iterative or incremental life-cycle models can evaluate the architectural decisions made during the most recent cycle. However, one of the appealing aspects of architecture evaluation is that it can be applied at any stage of an architecture's lifetime, and there are two useful variations from the classical: early and late. Early. Evaluation need not wait until an architecture is fully specified. It can be used at any stage in the architecture creation process to examine those architectural decisions already made and choose among architectural

options that are pending. That is, it is equally adept at evaluating architectural decisions that have already been made and those that are being considered. Of course, the completeness and fidelity of the evaluation will be a direct function of the completeness and fidelity of the architectural description brought to the table by the architect. And in practice, the expense and logistical burden of convening a full-blown evaluation are seldom taken on when the state of the architecture does not warrant it. It is just not going to be very rewarding to assemble a dozen or two stakeholders and analysts to evaluate the architect's early back-of-the-napkin sketches, even though such sketches will in fact reveal a number of significant architecture paths chosen and paths not taken. Some organizations recommend what they call a discovery review, which is a very early mini-evaluation whose purpose is as much to iron out and prioritize troublesome requirements as to analyze whatever "proto-architecture" may have been crafted by that point. For a discovery review, the stakeholder group is smaller but must include people empowered to make requirements decisions. The purpose of this meeting is to raise any concerns that the architect may have about the feasibility of any architecture to meet the combined quality and behavioral requirements that are being levied, while there is still time to relax the most troubling or least important ones. The output of a discovery review is a much stronger set of requirements and an initial approach to satisfying them. That approach, when fleshed out, can be the subject of a full evaluation later. We do not cover discovery reviews in detail because they are a straightforward variation of an architecture evaluation. If you hold a discovery review, make sure to:
Hold it before the requirements are frozen and when the architect has a good idea about how to approach the problem
Include in the stakeholder group someone empowered to make requirements decisions
Include a prioritized set of requirements in the output, in case there is no apparent way to meet all of them

Finally, in a discovery review, remember the words of the gifted aircraft designer Willy Messerschmitt, himself no stranger to the burden of requirements, who said: You can have any combination of features the Air Ministry desires, so long as you do not also require that the resulting airplane fly.

Late. The second variation takes place when not only the architecture is nailed down but the implementation is complete as well. This case occurs when an organization inherits some sort of legacy system. Perhaps it has been purchased on the open market, or perhaps it is being excavated from the organization's own archives. The techniques for evaluating a legacy architecture are the same as those for one that is newborn. An evaluation is a useful thing to do because it will help the new owners understand the legacy system, and let them know whether the system can be counted on to meet its quality and behavioral requirements. In general, when can an architectural evaluation be held? As soon as there is enough of an architecture to justify it. Different organizations may measure that justification differently, but a good rule of thumb is this: Hold an evaluation when development teams start to make decisions that depend on the architecture and the cost of undoing those decisions would outweigh the cost of holding an evaluation. 2.3 Who's Involved? There are two groups of people involved in an architecture evaluation. 1. Evaluation team. These are the people who will conduct the evaluation and perform the analysis. The team members and their precise roles will be defined later, but for now simply realize that they represent one of the classes of participants. 2. Stakeholders. Stakeholders are people who have a vested interest in the architecture and the system that will be built from it. The three evaluation methods in this book all use stakeholders to articulate the specific requirements that are levied on the architecture, above and beyond the requirements that state what functionality the system is supposed to exhibit. Some, but not all, of the stakeholders will be members of the development team: coders, integrators, testers, maintainers, and so forth. A special kind of stakeholder is a project decision maker. These are people who are interested in the outcome of the evaluation and have the power to make decisions that affect the future of the project. They include the architect, the designers of components, and the project's management. Management will have to make decisions about how to respond to the issues raised by the evaluation. In some settings (particularly government acquisitions), the customer or sponsor may be a project decision maker as well. Whereas an arbitrary stakeholder says what he or she wants to be true about the architecture, a decision maker has the power to expend

resources to make it true. So a project manager might say (as a stakeholder), "I would like the architecture to be reusable on a related project that I'm managing," but as a decision maker he or she might say, "I see that the changes you've identified as necessary to reuse this architecture on my other project are too expensive, and I won't pay for them." Another difference is that a project decision maker has the power to speak authoritatively for the project, and some of the steps of the ATAM method, for example, ask them to do precisely that. A garden-variety stakeholder, on the other hand, can only hope to influence (but not control) the project. For more on stakeholders, see the sidebar Stakeholders on page 63 in Chapter 3. The client for an architecture evaluation will usually be a project decision maker, with a vested interest in the outcome of the evaluation and holding some power over the project. Sometimes the evaluation team is drawn from the project staff, in which case they are also stakeholders. This is not recommended because they will lack the objectivity to view the architecture in a dispassionate way. 2.4 What Result Does an Architecture Evaluation Produce? In concrete terms, an architecture evaluation produces a report, the form and content of which vary according to the method used. Primarily, though, an architecture evaluation produces information. In particular, it produces answers to two kinds of questions. 1. Is this architecture suitable for the system for which it was designed? 2. Which of two or more competing architectures is the most suitable one for the system at hand? Suitability for a given task, then, is what we seek to investigate. We say that an architecture is suitable if it meets two criteria. 1. The system that results from it will meet its quality goals. That is, the system will run predictably and fast enough to meet its performance (timing) requirements. It will be modifiable in planned ways. It will meet its security constraints. It will provide the required behavioral function. Not every quality property of a system is a direct result of its architecture, but many are, and for those that are, the architecture is suitable if it provides the blueprint for building a system that achieves those properties. 2. The system can be built using the resources at hand: the staff, the budget, the legacy software (if any), and the time allotted before delivery. That is, the architecture is buildable.

This concept of suitability will set the stage for all of the material that follows. It has a couple of important implications. First, suitability is only relevant in the context of specific (and specifically articulated) goals for the architecture and the system it spawns. An architecture designed with high-speed performance as the primary design goal might lead to a system that runs like the wind but requires hordes of programmers working for months to make any kind of modification to it. If modifiability were more important than performance for that system, then that architecture would be unsuitable for that system (but might be just the ticket for another one). In Alice in Wonderland, Alice encounters the Cheshire Cat and asks for directions. The cat responds that it depends upon where she wishes to go. Alice says she doesn't know, whereupon the cat tells her it doesn't matter which way she walks. So: if the sponsor of a system cannot tell you what any of the quality goals are for the system, then any architecture will do. An overarching part of an architecture evaluation is to capture and prioritize specific goals that the architecture must meet in order to be considered suitable. In a perfect world, these would all be captured in a requirements document, but this notion fails for two reasons: (1) complete and up-to-date requirements documents don't always exist, and (2) requirements documents express the requirements for a system. There are additional requirements levied on an architecture besides just enabling the system's requirements to be met. (Buildability is an example.)

Why Should I Believe You?

Frequently when we embark on an evaluation we are outsiders. We have been called in by a project leader or a manager or a customer to evaluate a project. Perhaps this is seen as an audit, or perhaps it is just part of an attempt to improve an organization's software engineering practice. Whatever the reason, unless the evaluation is part of a long-term relationship, we typically don't personally know the architect, or we don't know the major stakeholders. Sometimes this distance is not a problem: the stakeholders are receptive and enthusiastic, eager to learn and to improve their architecture. But on other occasions we meet with resistance and perhaps even fear. The major players sit there with their arms folded across their chests, clearly annoyed that they have been taken away from their real work, that of architecting, to pursue this silly management-directed evaluation. At other times the stakeholders are friendly and even receptive, but they are skeptical. After all, they are the experts in their domains and they have been working in the area, and maybe even on this system, for years.

In either case their attitudes, whether friendly or unfriendly, indicate a substantial amount of skepticism over the prospect that the evaluation can actually help. They are in effect saying, "What could a bunch of outsiders possibly have to tell us about our system that we don't already know?" You will probably have to face this kind of opposition or resistance at some point in your tenure as an architecture evaluator. There are two things that you need to know and do to counteract this opposition. First of all, you need to counteract the fear. So keep calm. If you are friendly and let them know that the point of the meeting is to learn about and improve the architecture (rather than pointing a finger of blame), then you will find that resistance melts away quickly. Most people actually enjoy the evaluation process and see the benefits very quickly. Second, you need to counteract the skepticism. Of course they are the experts in the domain. You know this and they know this, and you should acknowledge this up front. But you are the architecture and quality attribute expert. No matter what the domain, architectural approaches for dealing with and analyzing quality attributes don't vary much. There are relatively few ways to approach performance or availability or security on an architectural level. As an experienced evaluator (and with the help of the insight from the quality attribute communities) you have seen these before, and they don't change much from domain to domain. Furthermore, as an outsider you bring a "fresh set of eyes," and this alone can often bring new insights into a project. Finally, you are following a process that has been refined over dozens of evaluations covering dozens of different domains. It has been refined to make use of the expertise of many people, to elicit, document, and cross-check quality attribute requirements and architectural information. This alone will bring benefit to your project; we have seen it over and over again. The process works! The second implication of evaluating for suitability is that the answer that comes out of the evaluation is not going to be the sort of scalar result you may be used to when evaluating other kinds of software artifacts. Unlike code metrics, for example, in which the answer might be 7.2 and anything over 6.5 is deemed unacceptable, an architecture evaluation is going to produce a more thoughtful result. We are not interested in precisely characterizing any quality attribute (using measures such as mean time to failure or end-to-end average latency). That would be pointless at an early stage of design because the actual parameters that determine these values (such as the actual execution time of a component) are often implementation dependent. What we are interested in doing, in the spirit of a risk-mitigation activity, is learning where an attribute of interest is affected by architectural design decisions, so that we can reason carefully about those decisions, model them more completely in

subsequent analyses, and devote more of our design, analysis, and prototyping energies to such decisions. An architectural evaluation will tell you that the architecture has been found suitable with respect to one set of goals and problematic with respect to another set of goals. Sometimes the goals will be in conflict with each other, or at the very least, some goals will be more important than others. And so the manager of the project will have a decision to make if the architecture evaluates well in some areas and not so well in others. Can the manager live with the areas of weakness? Can the architecture be strengthened in those areas? Or is it time for a wholesale restart? The evaluation will help reveal where an architecture is weak, but weighing the cost against the benefit to the project of strengthening the architecture is solely a function of project context and is in the realm of management. So: an architecture evaluation doesn't tell you "yes" or "no," "good" or "bad," or "6.75 out of 10." It tells you where you are at risk. Architecture evaluation can be applied to a single architecture or to a group of competing architectures. In the latter case, it can reveal the strengths and weaknesses of each one. Of course, you can bet that no architecture will evaluate better than all others in all areas. Instead, one will outperform others in some areas but underperform in other areas. The evaluation will first identify what the areas of interest are and then highlight the strengths and weaknesses of each architecture in those areas. Management must decide which (if any) of the competing architectures should be selected or improved or whether none of the candidates is acceptable and a new architecture should be designed.

2.5 For What Qualities Can We Evaluate an Architecture?

In this section, we say more precisely what suitability means. It isn't quite true that we can tell from looking at an architecture whether the ensuing system will meet all of its quality goals. For one thing, an implementation might diverge from the architectural plan in ways that subvert the quality plans. But for another, architecture does not strictly determine all of a system's qualities. Usability is a good example. Usability is the measure of a user's ability to utilize a system effectively. Usability is an important quality goal for many systems, but usability is largely a function of the user interface. In modern systems design, particular aspects of the user interface tend to be encapsulated within small areas of the architecture. Getting data to and from the user interface and making it flow around the system so that the necessary work is done to support the user is certainly an architectural issue, as is the ability to change the user interface should that be required.

However, many aspects of the user interface (whether the user sees red or blue backgrounds, a radio button or a dialog box) are by and large not architectural, since those decisions are generally confined to a limited area of the system. But other quality attributes lie squarely in the realm of architecture. For instance, the ATAM concentrates on evaluating an architecture for suitability in terms of imbuing a system with the following quality attributes. (Definitions are based on Bass et al. [Bass 98].)

Performance: Performance refers to the responsiveness of the system: the time required to respond to stimuli (events) or the number of events processed in some interval of time. Performance qualities are often expressed by the number of transactions per unit time or by the amount of time it takes to complete a transaction with the system. Performance measures are often cited using benchmarks, which are specific transaction sets or workload conditions under which the performance is measured.

Reliability: Reliability is the ability of the system to keep operating over time. Reliability is usually measured by mean time to failure.

Availability: Availability is the proportion of time the system is up and running. It is measured by the length of time between failures as well as how quickly the system is able to resume operation in the event of failure.

Security: Security is a measure of the system's ability to resist unauthorized attempts at usage and denial of service while still providing its services to legitimate users. Security is categorized in terms of the types of threats that might be made to the system.

Modifiability: Modifiability is the ability to make changes to a system quickly and cost effectively. It is measured by using specific changes as benchmarks and recording how expensive those changes are to make.

Portability: Portability is the ability of the system to run under different computing environments. These environments can be hardware, software, or a combination of the two. A system is portable to the extent that all of the assumptions about any particular computing environment are confined to one component (or at worst, a small number of easily changed components). If porting to a new system requires change, then portability is simply a special kind of modifiability.

Functionality: Functionality is the ability of the system to do the work for which it was intended. Performing a task requires that many or most of the system's components work in a coordinated manner to complete the job.

Variability: Variability is how well the architecture can be expanded or modified to produce new architectures that differ in specific, preplanned ways. Variability mechanisms may be run-time (such as negotiating protocols on the fly), compile-time (such as setting compilation parameters to bind certain variables), build-time (such as including or excluding various components or choosing different versions of a component), or code-time mechanisms (such as coding a device driver for a new device). Variability is

important when the architecture is going to serve as the foundation for a whole family of related products, as in a product line.

Subsetability: This is the ability to support the production of a subset of the system. While this may seem like an odd property of an architecture, it is actually one of the most useful and most overlooked. Subsetability can spell the difference between being able to deliver nothing when schedules slip and being able to deliver a substantial part of the product. Subsetability also enables incremental development, a powerful development paradigm in which a minimal system is made to run early on and functions are added to it over time until the whole system is ready. Subsetability is a special kind of variability, mentioned above.

Conceptual integrity: Conceptual integrity is the underlying theme or vision that unifies the design of the system at all levels. The architecture should do similar things in similar ways. Conceptual integrity is exemplified in an architecture that exhibits consistency, has a small number of data and control mechanisms, and uses a small number of patterns throughout to get the job done.

By contrast, the SAAM concentrates on modifiability in its various forms (such as portability, subsetability, and variability) and functionality. The ARID method provides insights about the suitability of a portion of the architecture to be used by developers to complete their tasks.

If some quality other than the ones mentioned above is important to you, the methods still apply. The ATAM, for example, is structured in steps, some of which are dependent upon the quality being investigated and others of which are not. Early steps of the ATAM allow you to define new quality attributes by explicitly describing the properties of interest. The ATAM can easily accommodate new quality-dependent analysis. When we introduce the method, you'll see where to do this. For now, though, the qualities in the list above form the basis for the methods' capabilities, and they also cover most of what people tend to be concerned about when evaluating an architecture.

2.6 Why Are Quality Attributes Too Vague for Analysis?

Quality attributes form the basis for architectural evaluation, but simply naming the attributes by themselves is not a sufficient basis on which to judge an architecture for suitability. Often, requirements statements like the following are written:

"The "The "The "The

system shall system shall system shall system shall

be robust." be highly modifiable." be secure from unauthorized break-in." exhibit acceptable performance."

Without elaboration, each of these statements is subject to interpretation and misunderstanding. What you might think of as robust, your customer might consider barely adequate, or vice versa. Perhaps the system can easily adapt to a new database but cannot adapt to a new operating system. Is that system maintainable or not? Perhaps the system uses passwords for security, which prevents a whole class of unauthorized users from breaking in, but has no virus protection mechanisms. Is that system secure from intrusion or not? The point here is that quality attributes are not absolute quantities; they exist in the context of specific goals. In particular:

A system is modifiable (or not) with respect to a specific kind of change.
A system is secure (or not) with respect to a specific kind of threat.
A system is reliable (or not) with respect to a specific kind of fault occurrence.
A system performs well (or not) with respect to specific performance criteria.
A system is suitable (or not) for a product line with respect to a specific set or range of envisioned products in the product line (that is, with respect to a specific product line scope).
An architecture is buildable (or not) with respect to specific time and budget constraints.
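To make this concrete, the sketch below shows one possible way of recording quality goals so that each one names the specific change, threat, or workload it refers to, and a measurable response. This is only an illustration; the field names and example values are assumptions, not a notation prescribed by the ATAM, the SAAM, or ARID.

```python
# A minimal sketch (illustrative assumptions only) of refining vague quality
# statements into goals that name a specific context and a measurable response.
from dataclasses import dataclass

@dataclass
class QualityGoal:
    attribute: str   # which quality attribute the goal refines
    context: str     # the specific change, threat, fault, or workload
    response: str    # the measurable response that counts as meeting the goal

goals = [
    QualityGoal("modifiability",
                "replace the relational database with a different vendor's product",
                "no more than 10 person-days of rework outside the data layer"),
    QualityGoal("security",
                "repeated password-guessing attempts from a single client",
                "account locked and the event logged after 3 failed attempts"),
    QualityGoal("performance",
                "100 transaction requests per second at peak load",
                "95% of transactions complete within 2 seconds"),
]

for g in goals:
    print(f"{g.attribute}: given {g.context}, the system shall achieve {g.response}")
```

Each entry replaces a platitude ("the system shall be secure") with a statement that an evaluator could actually check.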

If this doesn't seem reasonable, consider that no system can ever be, for example, completely reliable under all circumstances. (Think power failure, tornado, or disgruntled system operator with a sledgehammer.) Given that, it is incumbent upon the architect to understand under exactly what circumstances the system should be reliable in order to be deemed acceptable. In a perfect world, the quality requirements for a system would be completely and unambiguously specified in a requirements document. Most of us do not live in such a world. Requirements documents are not written, or are written poorly, or are not finished when it is time to begin the architecture. Also, architectures have goals of their own that are not enumerated in a requirements document for the system: They must be built using resources at hand, they should exhibit conceptual integrity, and so on. And so the first job of an architecture evaluation is to elicit the specific quality goals against which the architecture will be judged. If all of these goals are specifically, unambiguously articulated, that's wonderful. Otherwise, we ask the stakeholders to help us write them down during an evaluation. The mechanism we use is the scenario. A scenario is a short statement describing an interaction of one of the stakeholders with the

system. A user would describe using the system to perform some task; these scenarios would very much resemble use cases in object-oriented parlance. A maintenance stakeholder would describe making a change to the system, such as upgrading the operating system in a particular way or adding a specific new function. A developer's scenario might involve using the architecture to build the system or predict its performance. A customer's scenario might describe the architecture reused for a second product in a product line or might assert that the system is buildable given certain resources. Each scenario, then, is associated with a particular stakeholder (although different stakeholders might well be interested in the same scenario). Each scenario also addresses a particular quality, but in specific terms. Scenarios are discussed more fully in Chapter 3.

2.7 What Are the Outputs of an Architecture Evaluation?

2.7.1 Outputs from the ATAM, the SAAM, and ARID

An architecture evaluation results in information and insights about the architecture. The ATAM, the SAAM, and the ARID method all produce the outputs described below.

Prioritized Statement of Quality Attribute Requirements

An architecture evaluation can proceed only if the criteria for suitability are known. Thus, elicitation of quality attribute requirements against which the architecture is evaluated constitutes a major portion of the work. But no architecture can meet an unbounded list of quality attributes, and so the methods use a consensus-based prioritization. Having a prioritized statement of the quality attributes serves as an excellent documentation record to accompany any architecture and guide it through its evolution. All three methods produce this in the form of a set of quality attribute scenarios.

Mapping of Approaches to Quality Attributes

The answers to the analysis questions produce a mapping that shows how the architectural approaches achieve (or fail to achieve) the desired quality attributes. This mapping makes a splendid rationale for the architecture. Rationale is something that every architect should record, and most wish they had time to construct. The mapping of approaches to attributes can constitute the bulk of such a description.

Risks and Nonrisks

Risks are potentially problematic architectural decisions. Nonrisks are good decisions that rely on assumptions that are frequently implicit in the architecture. Both should be understood and explicitly recorded. Documenting of risks and nonrisks consists of:

An architectural decision (or a decision that has not been made)
A specific quality attribute response that is being addressed by that decision, along with the consequences of the predicted level of the response
A rationale for the positive or negative effect that decision has on meeting the quality attribute requirement
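The three items above are simply the fields of a record. As a minimal sketch (not an artifact defined by the ATAM, SAAM, or ARID), one way an evaluation team might capture such entries is shown below; the field names are illustrative assumptions, and the sample data anticipates the risk example discussed next.

```python
# Illustrative sketch only: a record type for documenting a risk or nonrisk
# with the three components listed above (decision, response, rationale).
from dataclasses import dataclass

@dataclass
class RiskRecord:
    decision: str    # the architectural decision (or the decision not yet made)
    response: str    # the quality attribute response being addressed, and its consequences
    rationale: str   # why the decision helps or hurts that response
    is_risk: bool    # True for a risk, False for a nonrisk

entry = RiskRecord(
    decision="Rules for writing business logic modules in the second tier are not articulated",
    response="Possible replication of functionality, compromising modifiability",
    rationale="Unarticulated rules can lead to unintended coupling between components",
    is_risk=True,
)
print(("RISK: " if entry.is_risk else "NONRISK: ") + entry.decision)
```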

An example of a risk is:

The rules for writing business logic modules in the second tier of your three-tier client-server style are not clearly articulated (a decision that has not been made). This could result in replication of functionality, thereby compromising modifiability of the third tier (a quality attribute response and its consequences). Unarticulated rules for writing the business logic can result in unintended and undesired coupling of components (rationale for the negative effect).

An example of a nonrisk is:

Assuming message arrival rates of once per second, a processing time of less than 30 milliseconds, and the existence of one higher priority process (the architectural decisions), a one-second soft deadline seems reasonable (the quality attribute response and its consequences) since the arrival rate is bounded and the preemptive effects of higher priority processes are known and can be accommodated (the rationale).

For a nonrisk to remain a nonrisk, the assumptions must not change (or at least if they change, the designation of nonrisk will need to be rejustified). For example, if the message arrival rate, the processing time, or the number of higher priority processes changes in the example above, the designation of nonrisk could change.

2.7.2 Outputs Only from the ATAM

In addition to the preceding information, the ATAM produces an additional set of results, described below.

Catalog of Architectural Approaches Used

Every architect adopts certain design strategies and approaches to solve the problems at hand. Sometimes these approaches are well known and part of the common knowledge of the field; sometimes they are unique and innovative to the system being built. In either case, they are the key to understanding whether the architecture will meet its goals and requirements. The ATAM includes a step in which the approaches used are catalogued, and this catalog can later serve as an introduction to the architecture for people who need to familiarize themselves with it, such as future architects and maintainers for the system.

Approach- and Quality-Attribute-Specific Analysis Questions

The ATAM poses analysis questions that are based on the attributes being sought and the approaches selected by the architect. As the architecture evolves, these questions can be used in future mini-evaluations to make sure that the evolution is not taking the architecture in the wrong direction.

Sensitivity Points and Tradeoff Points

We term key architectural decisions sensitivity points and tradeoff points. A sensitivity point is a property of one or more components (and/or component relationships) that is critical for achieving a particular quality attribute response. For example:

The level of confidentiality in a virtual private network might be sensitive to the number of bits of encryption.
The latency for processing an important message might be sensitive to the priority of the lowest priority process involved in handling the message.
The average number of person-days of effort it takes to maintain a system might be sensitive to the degree of encapsulation of its communication protocols and file formats.
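To make the second example concrete, the toy model below probes a single sensitivity point numerically: how the worst-case latency of a message grows as the priority of the lowest-priority process in its handling path drops. The preemption model and all numbers are simplifying assumptions for illustration only; this is not an analysis technique prescribed by the ATAM.

```python
# Toy model of a sensitivity point (illustrative assumptions only): worst-case
# message latency is sensitive to the priority of the lowest-priority handler,
# because every higher-priority process is assumed to preempt it once.
def worst_case_latency_ms(handler_priority, other_priorities, exec_times_ms, handling_time_ms):
    """Handler runs for handling_time_ms plus one preemption by each higher-priority process."""
    preemption = sum(t for p, t in zip(other_priorities, exec_times_ms) if p > handler_priority)
    return handling_time_ms + preemption

other_priorities = [10, 7, 5, 2]     # other processes (higher number = higher priority)
exec_times = [5.0, 8.0, 12.0, 20.0]  # their execution times in milliseconds

for handler_priority in (9, 6, 3, 1):
    latency = worst_case_latency_ms(handler_priority, other_priorities, exec_times, handling_time_ms=30.0)
    print(f"handler priority {handler_priority}: worst-case latency {latency:.0f} ms")
```

Running the sketch shows the latency climbing from 35 ms to 75 ms as the handler's priority falls, which is exactly the kind of relationship an evaluator flags as a sensitivity point.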

Sensitivity points tell a designer or analyst where to focus attention when trying to understand the achievement of a quality goal. They serve as yellow flags: "Use caution when changing this property of the architecture." Particular values of sensitivity points may become risks when realized in an architecture. Consider the examples above. A particular value of the encryption level, say 32-bit encryption, may present a risk in the architecture. Or having a very low priority process in a pipeline that processes an important message may become a risk in the architecture. A tradeoff point is a property that affects more than one attribute and is a sensitivity point for more than one attribute. For example, changing the level

of encryption could have a significant impact on both security and performance. Increasing the level of encryption improves the predicted security but requires more processing time. If the processing of a confidential message has a hard real-time latency requirement, then the level of encryption could be a tradeoff point. Tradeoff points are the most critical decisions that one can make in an architecture, which is why we focus on them so carefully.

Finally, it is not uncommon for an architect to answer an elicitation question by saying, "We haven't made that decision yet." In this case you cannot point to a component or property in the architecture and call it out as a sensitivity point because the component or property might not exist yet. However, it is important to flag key decisions that have been made as well as key decisions that have not yet been made.

2.8 What Are the Benefits and Costs of Performing an Architecture Evaluation?

The main, and obvious, benefit of architecture evaluation is, of course, that it uncovers problems that, if left undiscovered, would be orders of magnitude more expensive to correct later. In short, architecture evaluation produces better architectures. Even if the evaluation uncovers no problems that warrant attention, it will increase everyone's level of confidence in the architecture. But there are other benefits as well. Some of them are hard to measure, but they all contribute to a successful project and a more mature organization. You may not experience all of these on every evaluation, but the following is a list of the benefits we've often observed.

Puts Stakeholders in the Same Room

An architecture evaluation is often the first time that many of the stakeholders have ever met each other; sometimes it's the first time the architect has met them. A group dynamic emerges in which stakeholders see each other as all wanting the same thing: a successful system. Whereas before, their goals may have been in conflict with each other (and in fact, still may be), now they are able to explain their goals and motivations so that they begin to understand each other. In this atmosphere, compromises can be brokered or innovative solutions proposed in the face of greater understanding. It is almost always the case that stakeholders trade phone numbers and e-mail addresses and open channels of communication that last beyond the evaluation itself.

Forces an Articulation of Specific Quality Goals

The role of the stakeholders is to articulate the quality goals that the architecture should meet in order to be deemed successful. These goals are often not captured in any requirements document, or at least not captured in an unambiguous fashion beyond vague platitudes about reliability and modifiability. Scenarios provide explicit quality benchmarks.

Results in the Prioritization of Conflicting Goals

Conflicts that might arise among the goals expressed by the different stakeholders will be aired. Each method includes a step in which the goals are prioritized by the group. If the architect cannot satisfy all of the conflicting goals, he or she will receive clear and explicit guidance about which ones are considered most important. (Of course, project management can step in and veto or adjust the group-derived priorities, perhaps because they perceive some stakeholders and their goals as "more equal" than others, but not unless the conflicting goals are aired.)

Forces a Clear Explication of the Architecture

The architect is compelled to make a group of people not privy to the architecture's creation understand it, in detail, in an unambiguous way. Among other things, this will serve as a dress rehearsal for explaining it to the other designers, component developers, and testers. The project benefits by forcing this explication early.

Improves the Quality of Architectural Documentation

Often, an evaluation will call for documentation that has not yet been prepared. For example, an inquiry along performance lines will reveal the need for documentation that shows how the architecture handles the interaction of run-time tasks or processes. If the evaluation requires it, then it's an odds-on bet that somebody on the project team (in this case, the performance engineer) will need it also. Again, the project benefits because it enters development better prepared.

Uncovers Opportunities for Cross-Project Reuse

Stakeholders and the evaluation team come from outside the development project, but often work on or are familiar with other projects within the same parent organization. As such, both are in a good position either to spot components that can be reused on other projects or to know of components (or other assets) that already exist and perhaps could be imported into the current project.

Results in Improved Architecture Practices

Organizations that practice architecture evaluation as a standard part of their development process report an improvement in the quality of the architectures that are evaluated. As development organizations learn to anticipate the kinds of questions that will be asked, the kinds of issues that will be raised, and the kinds of documentation that will be required for evaluations, they naturally preposition themselves to maximize their performance on the evaluations. Architecture evaluations result in better architectures not only after the fact but before the fact as well. Over time, an organization develops a culture that promotes good architectural design.

Now, not all of these benefits may resonate with you. If your organization is small, maybe all of the stakeholders know each other and talk regularly. Perhaps your organization is very mature when it comes to working out the requirements for a system, and by the time the finishing touches are put on the architecture the requirements are no longer an issue because everyone is completely clear what they are. If so, congratulations. But many of the organizations in which we have carried out architecture evaluations are not quite so sophisticated, and there have always been requirements issues that were raised (and resolved) when the architecture was put on the table.

There are also benefits to future projects in the same organization. A critical part of the ATAM consists of probing the architecture using a set of quality-specific analysis questions, and neither the method nor the list of questions is a secret. The architect is perfectly free to arm her- or himself before the evaluation by making sure that the architecture is up to snuff with respect to the relevant questions. This is rather like scoring well on a test whose questions you've already seen, but in this case it isn't cheating: it's professionalism.

The costs of architecture evaluation are all personnel costs and opportunity costs related to those personnel participating in the evaluation instead of something else. They're easy enough to calculate. An example using the cost of an ATAM-based evaluation is shown in Table 2.1. The left-most column names the phases of the ATAM (which will be described in subsequent chapters). The other columns split the cost among the participant groups. Similar tables can easily be constructed for other methods. Table 2.1 shows figures for what we would consider a medium-size evaluation effort. While 70 person-days sounds like a substantial sum, in actuality it may not be so daunting. For one reason, the calendar time added to the project is minimal. The schedule should not be impacted by the preparation at all, nor the follow-up. These activities can be carried out

behind the scenes, as it were. The middle phases consume actual project days, usually three or so. Second, the project normally does not have to pay for all 70 staff days. Many of the stakeholders work for other cost centers, if not other organizations, than the development group. Stakeholders by definition have a vested interest in the system, and they are often more than willing to contribute their time to help produce a quality product.

Table 2.1 Approximate Cost of a Medium-Size ATAM-Based Evaluation

ATAM Phase | Evaluation Team (assume 5 members) | Project Decision Makers (assume architect, project manager, customer) | Other Stakeholders (assume 8)
Phase 0: Preparation | 1 person-day by team leader | 1 person-day | 0
Phase 1: Initial evaluation (1 day) | 5 person-days | 3 person-days | 0
Phase 2: Complete evaluation (3 days) | 15 person-days | 9 person-days + 2 person-days to prepare | 16 person-days (most stakeholders present only for 2 days)
Phase 3: Follow-up | 15 person-days | 3 person-days to read and respond to report | 0
TOTAL | 36 person-days | 18 person-days | 16 person-days
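As a quick sanity check on the arithmetic, the snippet below adds up the columns of Table 2.1. The figures are taken directly from the table, and the grand total matches the 70 person-days quoted above.

```python
# Sum the person-day figures from Table 2.1 (medium-size ATAM-based evaluation).
table_2_1 = {
    "Evaluation team":         [1, 5, 15, 15],    # Phases 0-3
    "Project decision makers": [1, 3, 9 + 2, 3],  # 9 in-room + 2 to prepare in Phase 2
    "Other stakeholders":      [0, 0, 16, 0],
}

grand_total = 0
for group, days in table_2_1.items():
    subtotal = sum(days)
    grand_total += subtotal
    print(f"{group}: {subtotal} person-days")
print(f"All participants: {grand_total} person-days")  # 36 + 18 + 16 = 70
```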

It is certainly easy to imagine larger and smaller efforts than the one characterized by Table 2.1. As we will see, all of the methods are flexible, structured to iteratively spiral down into as much detail as the evaluators and evaluation client feel is warranted. Cursory evaluations can be done in a day; excruciatingly detailed evaluations could take weeks. However, the numbers in Table 2.1 represent what we would call a nominal application of the ATAM. For smaller projects, Table 2.2 shows how those numbers can be halved.

If your group evaluates many systems in the same domain or with the same architectural goals, then there is another way that the cost of evaluation can be reduced. Collect and record the scenarios used in each evaluation. Over time, you will find that the scenario sets will begin to resemble each other. After you have performed several of these almost-alike evaluations, you can produce a "canonical" set of scenarios based on past experience. At this point, the scenarios have in essence graduated to become a checklist, and you can dispense with the bulk of the scenario-generation part of the exercise. This saves about a day. Since scenario generation is the primary duty of the stakeholders, the bulk of their time can also be done away with, lowering the cost still further.

Table 2.2 Approximate Cost of a Small ATAM-Based Evaluation

ATAM Phase | Evaluation Team (assume 2 members) | Project Decision Makers (assume architect, project manager) | Other Stakeholders (assume 3)
Phase 0: Preparation | 1 person-day by team leader | 1 person-day | 0
Phase 1: Initial evaluation (1 day) | 2 person-days | 2 person-days | 0
Phase 2: Complete evaluation (2 days) | 4 person-days | 4 person-days + 2 person-days to prepare | 6 person-days
Phase 3: Follow-up | 8 person-days | 2 person-days to read and respond to report | 0
TOTAL | 15 person-days | 11 person-days | 6 person-days

Table 2.3 Approximate Cost of a Medium-Size Checklist-Based ATAM-Based Evaluation

ATAM Phase | Evaluation Team (assume 4 members) | Project Decision Makers (assume architect, project manager, customer) | Other Stakeholders (assume the customer validates the checklist)
Phase 0: Preparation | 1 person-day by team leader | 1 person-day | 0
Phase 1: Initial evaluation (1 day) | 4 person-days | 3 person-days | 0
Phase 2: Complete evaluation (2 days) | 8 person-days | 6 person-days | 2 person-days
Phase 3: Follow-up | 12 person-days | 3 person-days to read and respond to report | 0
TOTAL | 25 person-days | 13 person-days | 2 person-days
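Again as a sanity check, and to make the comparison in the next paragraph explicit, the snippet below totals Tables 2.2 and 2.3 and computes the checklist-based cost as a fraction of the medium-size scenario-based cost from Table 2.1. All figures come straight from the tables; nothing else is assumed.

```python
# Totals from Tables 2.1-2.3 and the checklist-vs-scenario cost ratio.
medium_scenario_total  = 36 + 18 + 16  # Table 2.1: 70 person-days
small_scenario_total   = 15 + 11 + 6   # Table 2.2: 32 person-days
medium_checklist_total = 25 + 13 + 2   # Table 2.3: 40 person-days

ratio = medium_checklist_total / medium_scenario_total
print(f"Checklist-based cost is {ratio:.0%} of the scenario-based cost")  # about 57%
```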

(You still may want to have a few key stakeholders, including the customer, validate the applicability of your checklist to the new system.) The team size can be reduced, since no one is needed to record scenarios. The architect's preparation time should be minimal since the checklist will be publicly available even when he or she begins the architecture task. Table 2.3 shows the cost of a medium-size checklist-based evaluation using the ATAM, which comes in at about 57% of the cost of the scenario-based evaluation of Table 2.1.

The next chapter will introduce the first of the three architecture evaluation methods in this book: the Architecture Tradeoff Analysis Method.

2.9 For Further Reading

The For Further Reading list of Chapter 9 (Comparing Software Architecture Evaluation Methods) lists good references on various architecture evaluation methods. Zhao has assembled a nice collection of literature resources dealing with software architecture analysis [Zhao 99]. Once an architecture evaluation has identified changes that should be made to an architecture, how do you prioritize them? Work is emerging to help an architect or project manager assign quantitative cost and benefit information to architectural decisions [Kazman 01].

2.10 Discussion Questions

1. How does your organization currently decide whether a proposed software architecture should be adopted or not? How does it decide when a software architecture has outlived its usefulness and should be discarded in favor of another?
2. Make a business case, specific to your organization, that tells whether or not conducting a software architecture evaluation would pay off. Assume the cost estimates given in this chapter if you like, or use your own.
3. Do you know of a case where a flawed software architecture led to the failure or delay of a software system or project? Discuss what caused the problem and whether a software architecture evaluation might have prevented the calamity.
4. Which quality attributes tend to be the most important to systems in your organization? How are those attributes specified? How does the architect know what they are, what they mean, and what precise levels of each are required?
5. For each quality attribute discussed in this chapter (or for each that you named in answer to the previous question), hypothesize three different architectural decisions that would have an effect on that attribute. For example, the decision to maintain a backup database would probably increase a system's availability.
6. Choose three or four pairs of quality attributes. For each pair (think about tradeoffs), hypothesize an architectural decision that would increase the first quality attribute at the expense of the second. Now hypothesize a different architectural decision that would raise the second but lower the first.

How to Evaluate Open Source Software / Free Software (OSS/FS) Programs

1. Introduction

The amount of effort you should spend evaluating software is strongly dependent on how complex and important the software is to you. The whole evaluation process might take 5 minutes for a small program, or many months when considering a mammoth change to a major enterprise. The general process is the same; what is different is the amount of effort in each step. You should have a basic idea of what you need. If you don't, you'll need to first determine what your basic needs are. Usually you will refine your understanding of what your needs are as you evaluate, since you're likely to learn of capabilities you hadn't considered before. Try to be flexible in comparing needs to products, though; a product that meets 80% of your needs may have other advantages that make it better than a product that meets 100% of your originally-posited needs. However, you cannot reasonably evaluate products if you don't know what you want them to do for you.

2. Identify candidates

The first step is to find out what your options are. You should use a combination of techniques to make sure you don't miss something important. An obvious way is to ask friends and co-workers, particularly if they also need or have used such a program. If they have experience with it, ask for their critique; this will be useful as input for the next step, obtaining reviews. Look at lists of OSS/FS programs, including any list of "generally recognized as mature" (GRAM) or "generally recognized as safe" (GRAS) OSS/FS programs. After all, some OSS/FS products are so well-known that it would be a terrible mistake not to consider them. For example, anyone who needed a web server and failed to at least consider Apache would be making a terrible mistake; Apache is the market leader and is extremely capable. Here are a few such lists:

1. My OSS/FS Generally Recognized as Mature (GRAM) list.
2. The Interchange of Data between Administrations (IDA) programme is managed by the European Commission, with a mission to "coordinate the establishment of Trans-European telematic networks between administrations." IDA has developed The IDA Open Source Migration Guidelines to describe how to migrate from proprietary programs to OSS/FS programs. This paper includes a list of suggested OSS/FS programs, emphasizing mature products.
3. The table of equivalents / replacements / analogs of Windows software in Linux lists "equivalent" OSS/FS programs to common proprietary programs. Note that not all OSS/FS programs in this table of equivalents/replacements/analogs are mature, and that not all programs in the table are OSS/FS.

You should certainly run some searches, and there are several different kinds of search systems you should try:

1. Search using specialized sites which try to track OSS/FS programs.

Freshmeat has a lengthy list. Icewalkers maintains a list, but note that it only tracks programs that run on Unix/Linux. The Free Software Foundation's "Free Software Directory" is somewhat smaller, but they work hard to make sure their information is accurate (in particular, they check licenses carefully). If you're searching for a program to run on Microsoft Windows, OSSwin tracks many OSS/FS programs that run on Windows, and the OpenCD project gives away a CD of respected OSS/FS programs that includes a good Windows installer. If you're a software developer looking for particular reusable components, consider using Koders.com, which specifically tracks OSS/FS software components so they can be reused.
2. Search using sites which host or include many OSS/FS projects, such as SourceForge and Savannah.
3. Use a good general-purpose Internet search engine, and search for the kind of product you're looking for. One good search engine is Google; other good search engines include Teoma, Alltheweb, and AltaVista.
4. Use a search engine whose focus or options might aid you. For example, Google's specialized searches for Linux and BSD are more likely to help you find OSS/FS programs, even if you're looking for something to run on Microsoft Windows.
5. Look at Linux distributions and see what they include. Debian includes an especially large set of OSS/FS projects in its distribution, for example; since the distribution is Internet-based, it's easy for them to include a package for nearly any project.

You may also find it helpful to search software documentation for a particular capability, especially if your search criteria are so complex that traditional search systems can't help you. In that case, set up a computer with a big hard drive, install a typical OSS/FS distribution with "everything", and then use the command "man -k" and so on to find plausible programs.

Here are some tips about searching for OSS/FS programs and components:

1. Avoid search engines with obvious conflicts of interest; e.g., a search engine owned by a maker of one product may not help you learn about their competitors. Some search engines (like Google) accept payment but place paid results separately from the unpaid results - this is fine, and the paid articles can certainly help you identify options, but be sure to review the top unpaid articles too.
2. Try several variations of what you're searching for. Identify a few key words that would likely be in a description of what you're looking for. Others may not use the same naming conventions you do, so you'll need to try variations.
3. If you know the name of an existing well-known product, search for that name plus words like "compete", "competitor", or "compatible" to find its competition.
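As a small illustration of tips 2 and 3, here is a sketch that mechanically generates search-query variations from a few key words and a known product's name. The word lists are illustrative assumptions only; substitute terms that fit the kind of product you are actually looking for.

```python
# Illustrative sketch only: generate search-query variations per the tips above
# (try keyword variants, and pair a well-known product's name with words like
# "compete", "competitor", or "compatible").
from itertools import product

keywords = ["web server", "http server", "httpd"]       # assumed variations of what you want
qualifiers = ["open source", "free software", "OSS"]
known_product = "Apache"                                  # an assumed well-known product in the niche

queries = [f"{q} {k}" for q, k in product(qualifiers, keywords)]
queries += [f"{known_product} {word}" for word in ("compete", "competitor", "compatible")]

for query in queries:
    print(query)
```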

4. Once you know the names of several products, search for the combination of names so you can find pages that list the products of that type or contrast the products (hopefully with yet more products). 5. If there's a naming convention for the kind of program you're looking for, exploit that convention while searching. For example, programs that translate one data format into another often follow the naming convention "x2y", where x and y are the filename extensions. Thus, "gif2png" is a likely name for a program to convert the GIF format to the PNG format. Also try "to" instead of "2", and if that doesn't work, search for alternative data formats you can easily convert a format to as an intermediate step (e.g., try "rtf" if "doc" doesn't work). If all else fails, ask others. Find somewhat similar or related programs, and ask for what you're looking for on their mailing lists. Ask only a few of the most relevant lists; no one wants to see the same question in 50 different lists. You can also use general systems to make requests, such as Google answers, where you pay a fee to get an answer. And of course, you can always hire someone to do a more detailed search. 3. Read existing reviews After you've identified your options, read existing evaluations about the alternatives. It's far more efficient to first learn about a program's strengths and weaknesses from a few reviews than to try to discern that information just from project websites. The simplest way to find these reviews is to use a search engine (like Google) and search for an article containing the names of all the candidates you've identified. Also, search for web sites that try to cover that market or functional area (e.g., by searching for the general name of that type of product, as you should have already done), and see if they've published reviews. In the process, you may even identify plausible candidates you missed earlier. I cannot possibly list all reviews here; that's a never-ending task. But here are reviews of OSS/FS programs in especially complicated areas that I happen to be aware of: 1. There are so many OSS/FS Content Management Systems (CMSs) that it can be hard to figure out where to start. "Content Management Problems and Open Source Solutions" by Seth Gottlieb (23 Jan 2006) reviews a number of OSS/FS content management systems (CMS); it only covers a small part of the space, actually, but it's definitely a useful place to start. 2. There are a huge number of very good OSS/FS Software Configuration Management (SCM) programs, too. In that case, take a look at my own review paper, Comments on Open Source Software / Free Software (OSS/FS) Software Configuration Management (SCM) Systems. Both of these areas (CMS and SCM) are fundamentally about using software to help people collaborate. Since OSS/FS projects often involve collaboration of many people, it's not surprising that these areas have a very large and rich set of different OSS/FS products.

It's critical to remember that many evaluations are biased or not particularly relevant to your circumstance. Most magazines are supported by advertising, and they're a lot less likely to bite the hands that feed them. Systems that allow multiple people to comment (like Freshmeat's "rating" value) can be easily biased by someone intent on biasing them. Still, it's worth hearing a few opinions from multiple sources. In particular, evaluations often identify important information about the programs that you might not have noticed otherwise.

An important though indirect "review" of a product is the product's popularity, also known as market share. Generally you should always try to include the most popular products in any evaluation. Products with large market share are likely to be sufficient for many needs, are often easier to support and interoperate with, and so on. OSS/FS projects are easier to sustain once they have many users; many developers are originally users, so if a small percentage of users become developers, having more users often translates into having more developers. Also, developers do not want their work wasted, so they will want to work with projects perceived to be successful. Conversely, a product rapidly losing market share carries a greater risk, because presumably people are leaving it for a reason (be sure to consider whatever its replacement is!).

Market share is extremely hard to measure for most OSS/FS products, because anyone can just download and install them without registering with anyone. However, market share data is available for some common products (such as operating systems and web browsers). This is especially possible with programs that provide Internet services, because programs can be used to sample the Internet to see what's running. Download counts and "popularity" values (e.g., from Freshmeat and SourceForge) can also hint at market share, but again these are easy to bias. Just searching for references to the program name is usually misleading, since many names aren't unique to a particular project. For OSS/FS projects, a partial proxy for market share is how often people link to the project page. Web search engines can often tell you how many links there are to a given project home page (under Google, select Advanced search and then use "find pages that link to the page"). A "link popularity" contest can at least suggest which OSS/FS project is more popular than others. Note that link popularity may only show widespread interest (e.g., it's an interesting project), not that the product is widely used or ready for use.

An interesting indirect measure of a product is whether or not it's included in "picky" Linux distributions. Some distributions, such as Red Hat Linux, intentionally try to keep the number of components low to reduce the number of CD-ROMs in their distribution, and evaluate products first to see which ones to include. Thus, if the product is included, it's likely to be one of the best OSS/FS products available, because its inclusion reflects an evaluation by someone else.

4. Briefly compare the leading programs' attributes to your needs

Once you've read other reviews and identified the leading OSS/FS contenders, you can begin to briefly examine them to see which best meet your needs.

The goal is to winnow down the list of realistic alternatives to a few "most likely" candidates. Note that you need to do this after reading a few reviews, because the reviews may have identified some important attributes you might have forgotten or not realized were important. This doesn't need to be a lengthy process; you can often quickly eliminate all but a few candidates.

The first step is to find the OSS/FS project's web site. Practically every OSS/FS project has a project web site; by this point you should have addresses of those web sites, but if not, a search engine should easily find them. An OSS/FS project's web site doesn't just provide a copy of its OSS/FS program; it also provides a wealth of information that you can use to evaluate the program it's created. For example, project web sites typically host a brief description of the project, a Frequently Asked Questions (FAQ) list, project documentation, web links to related/competing projects, mailing lists for developers and users to discuss the program or project, and so on. The Software Release Practice HOWTO includes guidance to developers on how to create a project web site; the Free Software Project Management HOWTO provides guidance to those who manage such projects. In rare cases there may be a "fork", that is, competing projects whose programs are based on a single original program. This sometimes happens if, for example, there is a major disagreement over technical or project direction. If both projects seem viable, evaluate the forks as separate projects.

Next, you can evaluate the project and its program on a number of important attributes. Important attributes include functionality, cost, market share, support, maintenance, reliability, performance, scaleability, useability, security, flexibility/customizability, interoperability, and legal/license issues. The benefits, drawbacks, and risks of using a program can be determined from examining these attributes. The attributes are the same as with proprietary software, of course, but the way you should evaluate them with OSS/FS is often different. In particular, because the project and code are completely exposed to the world, you can (and should!) take advantage of this information during evaluation. Each of these will be discussed below; if there are other attributes that are important to you, by all means examine those too.

4.1 Functionality
4.2 Cost
4.3 Market Share
4.4 Support
4.5 Maintenance/Longevity
4.6 Reliability
4.7 Performance
4.8 Scaleability
4.9 Useability
4.10 Security
4.11 Flexibility/Customizability

4.12 Interoperability
4.13 Legal/license issues
4.13.1 Warranty/legal recourse
4.13.2 License Audits
4.13.3 License issues unique to OSS/FS
4.13.3.1 Checking if the program is OSS/FS
4.13.3.2 Why different OSS/FS licenses matter
4.13.3.3 Copylefting vs. non-copylefting
4.13.3.4 Computer Libraries
4.13.3.5 Other examples of license impacts
4.13.3.6 Patent defense
4.13.3.7 License summary

In summary, OSS/FS software licenses are important to developers, and they can impact users who may become developers (or pay developers to make a change). However, OSS/FS software licenses primarily cover what developers can and cannot do, not what users who do not change the software can do.

Software development methods/methodologies

OVERVIEW OF SOFTWARE DEVELOPMENT METHODS

A software development method, otherwise known as a software development methodology, refers to the framework that is used to structure, plan, and control the process of developing an information system. A wide variety of such frameworks have evolved over the years, each with its own recognized strengths and weaknesses. It is worth noting that no single system development methodology is necessarily suitable for use by all projects. Each of the available methodologies is best suited to specific kinds of projects, based on various technical, organizational, project, and team considerations. Any framework of a software development methodology consists of:

A software development philosophy, with the approach or approaches of the software development process. This requires having a thorough understanding of the intended method.
Multiple tools, models, and methods to assist in the software development process.

These frameworks are often bound to some kind of organization, which further develops, supports the use of, and promotes the methodology. The methodology is often documented in some kind of formal documentation.

History

One of the oldest software development tools is flowcharting, which has its roots in the 1920s. Software development methodologies did not emerge until the 1960s. According to Elliott (2004) the systems development life cycle (SDLC) can be considered to be the oldest formalized methodology for building information systems. The main idea of the SDLC has been "to pursue the development of information systems in a very deliberate, structured and methodical way, requiring each stage of the life cycle, from inception of the idea to delivery of the final system, to be carried out rigidly and sequentially". The main target of this methodology in the 1960s was "to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines".

Specific software development methodologies

1970s

Structured programming since 1969

1980s

Structured Systems Analysis and Design Methodology (SSADM) from 1980 onwards

1990s

Object-oriented programming (OOP) has been developed since the early 1960s, and emerged as the dominant programming methodology during the mid-1990s.
Rapid application development (RAD), since 1991.
Scrum, since the late 1990s.
Team Software Process, developed by Watts Humphrey at the SEI.

2000s

Extreme Programming, since 1999.
Rational Unified Process (RUP), since 1998.
Agile Unified Process (AUP), since 2005, by Scott Ambler.

Software development approaches Every software development methodology has more or less its own approach to software development. There is a set of more general approaches, which are developed into several specific methodologies. These approaches are:[1]

Waterfall: linear framework type.
Prototyping: iterative framework type.
Incremental: combination of linear and iterative framework types.
Spiral: combination of linear and iterative framework types.
Rapid Application Development (RAD): iterative framework type.

Waterfall model

The waterfall model is a sequential development process, in which development is seen as flowing steadily downwards (like a waterfall) through the phases of requirements analysis, design, implementation, testing (validation), integration, and maintenance. The first formal description of the waterfall model is often cited to be an article published by Winston W. Royce in 1970, although Royce did not use the term "waterfall" in this article. Basic principles of the waterfall model are:

Project is divided into sequential phases, with some overlap and splash back acceptable between phases.
Emphasis is on planning, time schedules, target dates, budgets, and implementation of an entire system at one time.

Tight control is maintained over the life of the project through the use of extensive written documentation, as well as through formal reviews and approval/signoff by the user and information technology management occurring at the end of most phases before beginning the next phase.

Prototyping

Software prototyping is the framework of activities, during software development, of creating prototypes, i.e., incomplete versions of the software program being developed. Basic principles of prototyping are:

Not a standalone, complete development methodology, but rather an approach to handling selected portions of a larger, more traditional development methodology (i.e. Incremental, Spiral, or Rapid Application Development (RAD)).

Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.

User is involved throughout the process, which increases the likelihood of user acceptance of the final implementation. Small-scale mock-ups of the system are developed following an iterative modification process until the prototype evolves to meet the users' requirements.

While most prototypes are developed with the expectation that they will be discarded, it is possible in some cases to evolve from prototype to working system.

A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problem.

Incremental

Various methods are acceptable for combining linear and iterative systems development methodologies, with the primary objective of each being to

reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process. Basic principles of incremental development are:

A series of mini-Waterfalls are performed, where all phases of the Waterfall development model are completed for a small part of the system, before proceeding to the next increment, or

Overall requirements are defined before proceeding to evolutionary, mini-Waterfall development of individual increments of the system, or
The initial software concept, requirements analysis, and design of architecture and system core are defined using the Waterfall approach, followed by iterative Prototyping, which culminates in installation of the final prototype (i.e., working system).

Spiral model

The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts. Basic principles:

Focus is on risk assessment and on minimizing project risk by breaking a project into smaller segments and providing more ease-of-change during the development process, as well as providing the opportunity to evaluate risks and weigh consideration of project continuation throughout the life cycle.

"Each cycle involves a progression through the same sequence of steps, for each portion of the product and for each of its levels of elaboration, from an overall concept-of-operation document down to the coding of each individual program."[4]

Each trip around the spiral traverses four basic quadrants: (1) determine objectives, alternatives, and constraints of the iteration; (2) evaluate alternatives; identify and resolve risks; (3) develop and verify deliverables from the iteration; and (4) plan the next iteration.[5]
Begin each cycle with an identification of stakeholders and their win conditions, and end each cycle with review and commitment.

Rapid Application Development (RAD)

Rapid Application Development (RAD) is a software development methodology which involves iterative development and the construction of prototypes. Rapid application development is a term originally used to describe a software development process introduced by James Martin in 1991. Basic principles:

Key objective is fast development and delivery of a high quality system at a relatively low investment cost.
Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.

Aims to produce high quality systems quickly, primarily through the use of iterative Prototyping (at any stage of development), active user involvement, and computerized development tools. These tools may include Graphical User Interface (GUI) builders, Computer Aided Software Engineering (CASE) tools, Database Management Systems (DBMS), fourth-generation programming languages, code generators, and object-oriented techniques.

Key emphasis is on fulfilling the business need, while technological or engineering excellence is of lesser importance.
Project control involves prioritizing development and defining delivery deadlines or time boxes. If the project starts to slip, emphasis is on reducing requirements to fit the time box, not on increasing the deadline.

Generally includes Joint Application Development (JAD), where users are intensely involved in system design, either through consensus building in structured workshops, or through electronically facilitated interaction.

Active user involvement is imperative.
Iteratively produces production software, as opposed to a throwaway prototype.
Produces documentation necessary to facilitate future development and maintenance.
Standard systems analysis and design techniques can be fitted into this framework.

Other software development approaches

Other method concepts are:

Object oriented development methodologies, such as Grady Booch's Object-oriented design (OOD), also known as object-oriented analysis and design (OOAD). The Booch model includes six diagrams: class, object, state transition, interaction, module, and process.[7]

Top-down programming: evolved in the 1970s from work by IBM researcher Harlan Mills (and Niklaus Wirth) on structured programming.
Unified Process (UP) is an iterative software development methodology approach, based on UML. UP organizes the development of software into four phases, each consisting of one or more executable iterations of the software at that stage of development: Inception, Elaboration, Construction, and Transition. There are a number of tools and products available designed to facilitate UP implementation. One of the more popular versions of UP is the Rational Unified Process (RUP).

Agile software development refers to a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams. The term was coined in the year 2001 when the Agile Manifesto was formulated.

Software prototyping

Software prototyping, an activity carried out during software development, is the creation of prototypes, i.e., incomplete versions of the software program being developed. A prototype typically simulates only a few aspects of the features of the eventual program, and may be completely different from the eventual implementation. The conventional purpose of a prototype is to allow users of the software to evaluate developers' proposals for the design of the eventual product by actually trying them out, rather than having to interpret and evaluate the design based on descriptions. Prototyping can also be used by end users to describe and prove requirements that developers have not considered, so "controlling the prototype" can be a key factor in the commercial relationship between solution providers and their clients.

Prototyping has several benefits. The software designer and implementer can obtain feedback from the users early in the project. The client and the contractor can check whether the software matches the software specification, according to which the software program is built. It also allows the software engineer some insight into the accuracy of initial project estimates and whether the deadlines and milestones proposed can be successfully met. The degree of completeness and the techniques used in prototyping have been in development and debate since its proposal in the early 1970s.[6]

This process is in contrast with the 1960s and 1970s monolithic development cycle of building the entire program first and then working out any inconsistencies between design and implementation, which led to higher software costs and poor estimates of time and cost. The monolithic approach has been dubbed the "Slaying the (software) Dragon" technique, since it assumes that the software designer and developer is a single hero who has to slay the entire dragon alone. Prototyping can also avoid the great expense and difficulty of changing a finished software product.

Overview

The process of prototyping involves the following steps
1. Identify basic requirements

Determine basic requirements including the input and output information desired. Details, such as security, can typically be ignored.

2. Develop Initial Prototype

The initial prototype is developed, including only user interfaces.

3. Review

The customers, including end-users, examine the prototype and provide feedback on additions or changes.

4. Revise and Enhance the Prototype

Using the feedback, both the specifications and the prototype can be improved. Negotiation about what is within the scope of the contract/product may be necessary. If changes are introduced, then a repeat of steps #3 and #4 may be needed.

Types of prototyping

Software prototyping has many variants. However, all the methods are in some way based on two major types of prototyping: Throwaway Prototyping and Evolutionary Prototyping.

Throwaway prototyping

Also called close-ended prototyping. Throwaway or Rapid Prototyping refers to the creation of a model that will eventually be discarded rather than becoming part of the final delivered software. After preliminary requirements gathering is accomplished, a simple working model of the system is constructed to visually show the users what their requirements may look like when they are implemented into a finished system. Rapid Prototyping involves creating a working model of various parts of the system at a very early stage, after a relatively short investigation. The method used in building it is usually quite informal, the most important factor being the speed with which the model is provided. The model then becomes the starting point from which users can re-examine their expectations and clarify their requirements. When this has been achieved, the prototype model is 'thrown away', and the system is formally developed based on the identified requirements.[7]

The most obvious reason for using Throwaway Prototyping is that it can be done quickly. If the users can get quick feedback on their requirements, they may be able to refine them early in the development of the software. Making

changes early in the development lifecycle is extremely cost effective since there is nothing at that point to redo. If a project is changed after considerable work has been done, then small changes could require large efforts to implement since software systems have many dependencies. Speed is crucial in implementing a throwaway prototype, since with a limited budget of time and money little can be expended on a prototype that will be discarded. Another strength of Throwaway Prototyping is its ability to construct interfaces that the users can test. The user interface is what the user sees as the system, and by seeing it in front of them, it is much easier to grasp how the system will work. It is asserted that revolutionary rapid prototyping is a more effective manner in which to deal with user requirements-related issues, and therefore a greater enhancement to software productivity overall. Requirements can be identified, simulated, and tested far more quickly and cheaply when issues of evolvability, maintainability, and software structure are ignored. This, in turn, leads to the accurate specification of requirements, and the subsequent construction of a valid and usable system from the user's perspective via conventional software development models.[8]

Prototypes can be classified according to the fidelity with which they resemble the actual product in terms of appearance, interaction, and timing. One method of creating a low fidelity Throwaway Prototype is Paper Prototyping. The prototype is implemented using paper and pencil, and thus mimics the function of the actual product, but does not look at all like it. Another method to easily build high fidelity Throwaway Prototypes is to use a GUI Builder and create a click dummy, a prototype that looks like the goal system but does not provide any functionality. Not exactly the same as Throwaway Prototyping, but certainly in the same family, is the usage of storyboards, animatics, or drawings. These are non-functional implementations but show how the system will look.

SUMMARY: In this approach the prototype is constructed with the idea that it will be discarded and the final system will be built from scratch. The steps in this approach are:
1. Write preliminary requirements
2. Design the prototype
3. User experiences/uses the prototype, specifies new requirements
4. Write final requirements

Evolutionary prototyping
Evolutionary Prototyping (also known as breadboard prototyping) is quite different from Throwaway Prototyping. The main goal when using Evolutionary Prototyping is to build a very robust prototype in a structured manner and constantly refine it. "The reason for this is that the Evolutionary prototype, when built, forms the heart of the new system, and the improvements and further requirements will be built." When developing a system using Evolutionary Prototyping, the system is continually refined and rebuilt. "Evolutionary prototyping acknowledges that we do not understand all the requirements and builds only those that are well understood."[5] This technique allows the development team to add features, or make changes that could not be conceived during the requirements and design phase. For a system to be useful, it must evolve through use in its intended operational environment. A product is never "done"; it is always maturing as the usage environment changes. We often try to define a system using our most familiar frame of reference: where we are now. We make assumptions about the way business will be conducted and the technology base on which the business will be implemented. A plan is enacted to develop the capability, and, sooner or later, something resembling the envisioned system is delivered.[9] Evolutionary prototypes have an advantage over throwaway prototypes in that they are functional systems. Although they may not have all the features the users have planned, they may be used on an interim basis until the final system is delivered. "It is not unusual within a prototyping environment for the user to put an initial prototype to practical use while waiting for a more developed version... The user may decide that a 'flawed' system is better than no system at all."[7] In Evolutionary Prototyping, developers can focus on developing the parts of the system that they understand instead of working on the whole system.

To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software requirements specification, update the design, recode and retest.[10]
Incremental prototyping
The final product is built as separate prototypes. At the end the separate prototypes are merged in an overall design.
Extreme prototyping
Extreme Prototyping as a development process is used especially for developing web applications. Basically, it breaks down web development into three phases, each one based on the preceding one. The first phase is a static prototype that consists mainly of HTML pages. In the second phase, the screens are programmed and fully functional using a simulated services layer. In the third phase the services are implemented. The process is called Extreme Prototyping to draw attention to the second phase of the process, where a fully functional UI is developed with very little regard to the services other than their contract.
Advantages of prototyping
There are many advantages to using prototyping in software development: some tangible, some abstract.[11]
Reduced time and costs: Prototyping can improve the quality of requirements and specifications provided to developers. Because changes cost exponentially more to implement as they are detected later in development, the early determination of what the user really wants can result in faster and less expensive software.[8]
Improved and increased user involvement: Prototyping requires user involvement and allows them to see and interact with a prototype, allowing them to provide better and more complete feedback and specifications.[7] The presence of the prototype being examined by the user prevents many misunderstandings and miscommunications that occur when each side believes the other understands what they said. Since users know the problem domain better than anyone on the development team does, increased interaction can result in a final product that has greater tangible and intangible quality. The

final product is more likely to satisfy the users' desire for look, feel and performance.
Disadvantages of prototyping
Using, or perhaps misusing, prototyping can also have disadvantages.[11]
Insufficient analysis: The focus on a limited prototype can distract developers from properly analyzing the complete project. This can lead to overlooking better solutions, preparation of incomplete specifications or the conversion of limited prototypes into poorly engineered final projects that are hard to maintain. Further, since a prototype is limited in functionality, it may not scale well if it is used as the basis of a final deliverable, which may not be noticed if developers are too focused on building a prototype as a model.
User confusion of prototype and finished system: Users can begin to think that a prototype, intended to be thrown away, is actually a final system that merely needs to be finished or polished. (They are, for example, often unaware of the effort needed to add error-checking and security features which a prototype may not have.) This can lead them to expect the prototype to accurately model the performance of the final system when this is not the intent of the developers. Users can also become attached to features that were included in a prototype for consideration and then removed from the specification for a final system. If users are able to require that all proposed features be included in the final system, this can lead to conflict.
Developer misunderstanding of user objectives: Developers may assume that users share their objectives (e.g. to deliver core functionality on time and within budget), without understanding wider commercial issues. For example, user representatives attending Enterprise software (e.g. PeopleSoft) events may have seen demonstrations of "transaction auditing" (where changes are logged and displayed in a difference grid view) without being told that this feature demands additional coding and often requires more hardware to handle extra database accesses. Users might believe they can demand auditing on every field, whereas developers might think this is feature creep because they have made assumptions about the extent of user requirements. If the solution provider has committed to delivery before the user requirements were reviewed, developers are between a rock and a hard place, particularly if user management derives some advantage from their failure to implement requirements.
Developer attachment to prototype: Developers can also become attached to prototypes they have spent a great deal of effort producing; this can lead to problems like attempting to convert a limited prototype into a final system when it does not have an appropriate underlying architecture. (This may

suggest that throwaway prototyping, rather than evolutionary prototyping, should be used.)
Excessive development time of the prototype: A key property of prototyping is the fact that it is supposed to be done quickly. If the developers lose sight of this fact, they may well try to develop a prototype that is too complex. When the prototype is thrown away, the precisely developed requirements that it provides may not yield a sufficient increase in productivity to make up for the time spent developing the prototype. Users can become stuck in debates over details of the prototype, holding up the development team and delaying the final product.
Expense of implementing prototyping: The start-up costs for building a development team focused on prototyping may be high. Many companies have development methodologies in place, and changing them can mean retraining, retooling, or both. Many companies tend to just jump into prototyping without bothering to retrain their workers as much as they should. A common problem with adopting prototyping technology is high expectations for productivity with insufficient effort behind the learning curve. In addition to training for the use of a prototyping technique, there is an often overlooked need for developing corporate and project-specific underlying structure to support the technology. When this underlying structure is omitted, lower productivity can often result.[13]
Best projects to use prototyping
It has been argued that prototyping, in some form or another, should be used all the time. However, prototyping is most beneficial in systems that will have many interactions with the users. It has been found that prototyping is very effective in the analysis and design of on-line systems, especially for transaction processing, where the use of screen dialogs is much more in evidence. The greater the interaction between the computer and the user, the greater the benefit that can be obtained from building a quick system and letting the user play with it.[7] Systems with little user interaction, such as batch processing or systems that mostly do calculations, benefit little from prototyping. Sometimes, the coding needed to perform the system functions may be too intensive and the potential gains that prototyping could provide are too small.[7] Prototyping is especially good for designing good human-computer interfaces. "One of the most productive uses of rapid prototyping to date has been as a

tool for iterative user requirements engineering and human-computer interface design."[8]
Methods
There are few formal prototyping methodologies even though most Agile Methods rely heavily upon prototyping techniques.
Dynamic systems development method
Dynamic Systems Development Method (DSDM)[18] is a framework for delivering business solutions that relies heavily upon prototyping as a core technique, and is itself ISO 9001 approved. It expands upon most understood definitions of a prototype. According to DSDM the prototype may be a diagram, a business process, or even a system placed into production. DSDM prototypes are intended to be incremental, evolving from simple forms into more comprehensive ones. DSDM prototypes may be throwaway or evolutionary. Evolutionary prototypes may be evolved horizontally (breadth then depth) or vertically (each section is built in detail with additional iterations detailing subsequent sections). Evolutionary prototypes can eventually evolve into final systems. The four categories of prototypes as recommended by DSDM are:

Business prototypes - used to design and demonstrate the business processes being automated.
Usability prototypes - used to define, refine, and demonstrate user interface design usability, accessibility, look and feel.
Performance and capacity prototypes - used to define, demonstrate, and predict how systems will perform under peak loads as well as to demonstrate and evaluate other non-functional aspects of the system (transaction rates, data storage volume, response time, etc.).
Capability/technique prototypes - used to develop, demonstrate, and evaluate a design approach or concept.

The DSDM lifecycle of a prototype is to:
1. Identify prototype
2. Agree to a plan
3. Create the prototype
4. Review the prototype

Operational prototyping
Operational Prototyping was proposed by Alan Davis as a way to integrate throwaway and evolutionary prototyping with conventional system development. "[It] offers the best of both the quick-and-dirty and conventional-development worlds in a sensible manner. Designers develop only well-understood features in building the evolutionary baseline, while using throwaway prototyping to experiment with the poorly understood features."[5] Davis' belief is that to try to "retrofit quality onto a rapid prototype" is not the correct approach when trying to combine the two approaches. His idea is to engage in an evolutionary prototyping methodology and rapidly prototype the features of the system after each evolution. The specific methodology follows these steps (a sketch of the baseline-plus-throwaway layering follows the list):[5]
1. An evolutionary prototype is constructed and made into a baseline using conventional development strategies, specifying and implementing only the requirements that are well understood.
2. Copies of the baseline are sent to multiple customer sites along with a trained prototyper.
3. At each site, the prototyper watches the user at the system. Whenever the user encounters a problem or thinks of a new feature or requirement, the prototyper logs it. This frees the user from having to record the problem, and allows them to continue working.
4. After the user session is over, the prototyper constructs a throwaway prototype on top of the baseline system.
5. The user now uses the new system and evaluates it. If the new changes are not effective, the prototyper removes them.
6. If the user likes the changes, the prototyper writes feature-enhancement requests and forwards them to the development team.
7. The development team, with the change requests in hand from all the sites, then produces a new evolutionary prototype using conventional methods.
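As a rough, hedged sketch (not Davis's own notation), the layering of a throwaway experiment on top of an evolutionary baseline could be expressed in Python as follows; the class and method names are invented for illustration only:

# Sketch: an evolutionary baseline implementing only well-understood features,
# plus a throwaway wrapper the prototyper can add at a user site and discard.
class BaselineSystem:
    # Well-understood, conventionally developed functionality.
    def list_orders(self):
        return ["order-1", "order-2"]

class ThrowawayExperiment:
    # Quick-and-dirty extension built on top of the baseline during a user
    # session; if users reject it, it is simply removed.
    def __init__(self, baseline):
        self.baseline = baseline
    def list_orders(self):
        # Experimental feature: annotate orders with a (hard-coded) status.
        return [f"{o} [status: open]" for o in self.baseline.list_orders()]

baseline = BaselineSystem()
trial = ThrowawayExperiment(baseline)
print(trial.list_orders())     # user evaluates the experimental behaviour
print(baseline.list_orders())  # the baseline itself stays untouched

If the user rejects the experimental behaviour, the wrapper is deleted and the baseline remains exactly as it was; if the user likes it, the prototyper files an enhancement request rather than keeping the quick-and-dirty code.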

Obviously, a key to this method is to have well trained prototypers available to go to the user sites. The Operational Prototyping methodology has many benefits in systems that are complex and have few known requirements in advance. Evolutionary systems development Evolutionary Systems Development is a class of methodologies that attempt to formally implement Evolutionary Prototyping. One particular type, called

Systemscraft is described by John Crinnion in his book: Evolutionary Systems Development. Systemscraft was designed as a 'prototype' methodology that should be modified and adapted to fit the specific environment in which it was implemented. Systemscraft was not designed as a rigid 'cookbook' approach to the development process. It is now generally recognised[sic] that a good methodology should be flexible enough to be adjustable to suit all kinds of environment and situation[7] The basis of Systemscraft, not unlike Evolutionary Prototyping, is to create a working system from the initial requirements and build upon it in a series of revisions. Systemscraft places heavy emphasis on traditional analysis being used throughout the development of the system. Evolutionary rapid development Evolutionary Rapid Development (ERD)[12] was developed by the Software Productivity Consortium, a technology development and integration agent for the Information Technology Office of the Defense Advanced Research Projects Agency (DARPA). Fundamental to ERD is the concept of composing software systems based on the reuse of components, the use of software templates and on an architectural template. Continuous evolution of system capabilities in rapid response to changing user needs and technology is highlighted by the evolvable architecture, representing a class of solutions. The process focuses on the use of small artisan-based teams integrating software and systems engineering disciplines working multiple, often parallel short-duration timeboxes with frequent customer interaction.

Key to the success of ERD-based projects is parallel exploratory analysis and development of features, infrastructures, and components, with the adoption of leading-edge technologies enabling quick reaction to changes in technologies, the marketplace, or customer requirements.[9] To elicit customer/user input, frequent scheduled and ad hoc/impromptu meetings with the stakeholders are held. Demonstrations of system capabilities are held to solicit feedback before design/implementation decisions are solidified. Frequent releases (e.g., betas) are made available for

use to provide insight into how the system could better support user and customer needs. This assures that the system evolves to satisfy existing user needs. The design framework for the system is based on using existing published or de facto standards. The system is organized to allow for evolving a set of capabilities that includes considerations for performance, capacities, and functionality. The architecture is defined in terms of abstract interfaces that encapsulate the services and their implementation (e.g., COTS applications). The architecture serves as a template to be used for guiding development of more than a single instance of the system. It allows for multiple application components to be used to implement the services. A core set of functionality not likely to change is also identified and established. The ERD process is structured to use demonstrated functionality rather than paper products as a way for stakeholders to communicate their needs and expectations. Central to this goal of rapid delivery is the use of the "time box" method. Time boxes are fixed periods of time in which specific tasks (e.g., developing a set of functionality) must be performed. Rather than allowing time to expand to satisfy some vague set of goals, the time is fixed (both in terms of calendar weeks and person-hours) and a set of goals is defined that realistically can be achieved within these constraints. To keep development from degenerating into a "random walk," long-range plans are defined to guide the iterations. These plans provide a vision for the overall system and set boundaries (e.g., constraints) for the project. Each iteration within the process is conducted in the context of these long-range plans. Once an architecture is established, software is integrated and tested on a daily basis. This allows the team to assess progress objectively and identify potential problems quickly. Since small amounts of the system are integrated at one time, diagnosing and removing defects is rapid. User demonstrations can be held at short notice since the system is generally ready to exercise at all times.
SCRUM
Scrum is an agile method for project management. The approach was first described by Takeuchi and Nonaka in "The New New Product Development Game" (Harvard Business Review, Jan-Feb 1986).
Tools
Efficiently using prototyping requires that an organization have proper tools and a staff trained to use those tools. Tools used in prototyping can vary from individual tools, like 4th generation programming languages used for rapid prototyping, to complex integrated CASE tools. 4th generation programming

languages like Visual Basic and ColdFusion are frequently used since they are cheap, well known and relatively easy and fast to use. CASE tools, like the Requirements Engineering Environment, are often developed or selected by the military or large organizations. Object-oriented tools are also being developed, like LYMB from the GE Research and Development Center. Users may prototype elements of an application themselves in a spreadsheet.
Screen generators, design tools & Software Factories
Also commonly used are screen generating programs that enable prototypers to show users systems that don't function, but show what the screens may look like. Developing Human-Computer Interfaces can sometimes be the critical part of the development effort, since to the users the interface essentially is the system. Software Factories are code generators that allow you to model the domain model and then drag and drop the UI. They also enable you to run the prototype and use basic database functionality. This approach allows you to explore the domain model and make sure it is in sync with the GUI prototype. You can also use the UI controls that will later be used for real development.
Application definition software
A new class of software, called application definition software, enables users to rapidly build lightweight, animated simulations of another computer program without writing code. Application simulation software allows both technical and non-technical users to experience, test, collaborate on and validate the simulated program, and provides reports such as annotations, screenshots and schematics. As a solution specification technique, application simulation falls between low-risk, but limited, text- or drawing-based mock-ups (or wireframes), sometimes called paper-based prototyping, and time-consuming, high-risk code-based prototypes, allowing software professionals to validate requirements and design choices early on, before development begins. In doing so, the risks and costs associated with software implementations can be dramatically reduced.[1] To simulate applications one can also use software which simulates real-world software programs for computer-based training, demonstration, and customer support, such as screencasting software, as those areas are closely related. There are also more specialised tools.[2][3] One of the leading tools in this category is iRise.

Visual Basic
One of the most popular tools for Rapid Prototyping is Visual Basic (VB). Microsoft Access, which includes a Visual Basic extensibility module, is also a widely accepted prototyping tool that is used by many non-technical business analysts. Although VB is a programming language, it has many features that facilitate using it to create prototypes, including:

An interactive/visual user interface design tool.
Easy connection of user interface components to underlying functional behavior.
Easy to learn and use implementation language (i.e. Basic).
Modifications to the resulting software are easy to perform.

Requirements Engineering Environment
"The Requirements Engineering Environment (REE), under development at Rome Laboratory since 1985, provides an integrated toolset for rapidly representing, building, and executing models of critical aspects of complex systems."[15] The Requirements Engineering Environment is currently used by the Air Force to develop systems. It is: an integrated set of tools that allows systems analysts to rapidly build functional, user interface, and performance prototype models of system components. These modeling activities are performed to gain a greater understanding of complex systems and lessen the impact that inaccurate requirement specifications have on cost and scheduling during the system development process. Models can be constructed easily, and at varying levels of abstraction or granularity, depending on the specific behavioral aspects of the model being exercised.[15] REE is composed of three parts. The first, called proto, is a CASE tool specifically designed to support rapid prototyping. The second part is called the Rapid Interface Prototyping System, or RIP, which is a collection of tools that facilitate the creation of user interfaces. The third part of REE is a user interface to RIP and proto that is graphical and intended to be easy to use. Rome Laboratory, the developer of REE, intended it to support their internal requirements gathering methodology. Their method has three main parts: [One:] Elicitation from various sources (users, interfaces to other systems), specification, and consistency checking; [Two:] Analysis that the needs of diverse users taken together do not conflict and are technically and economically feasible; [and Three:] Validation that requirements so derived are an accurate reflection of user needs.[15]

In 1996, Rome Labs contracted Software Productivity Solutions (SPS) to further enhance REE to create "a commercial quality REE that supports requirements specification, simulation, user interface prototyping, mapping of requirements to hardware architectures, and code generation".[16] This system is named the Advanced Requirements Engineering Workstation, or AREW.
LYMB
LYMB[17] is an object-oriented development environment aimed at developing applications that require combining graphics-based user interfaces, visualization, and rapid prototyping.
Non-relational environments
Non-relational definition of data (e.g. using Caché or associative models) can help make end-user prototyping more productive by delaying or avoiding the need to normalize data at every iteration of a simulation. This may yield earlier/greater clarity of business requirements, though it does not specifically confirm that requirements are technically and economically feasible in the target production system.
PSDL
PSDL is a prototype description language used to describe real-time software.
Software Reuse
Software reuse has more traditionally been categorised as the use of external technical objects, like print managers and libraries of routines, purchased by the developer. However, there is now growing interest in the internal development of reusable modules. This has led to research into the development, documentation and cataloguing of reusable modules. One such research project is SCI.
The EUREKA project "Software Components for the Industry" (SCI)
The EUREKA project "Software Components for the Industry" (Ref. EU1135) has been established to show the European IT industry ways to increase their productivity and the quality of their products, by adopting a component policy in their software manufacturing process, stimulating reuse of existing software. One track within the project focuses on the provision of technology for automation and reuse. Techniques and tools are required, for instance, to support component registration and maintenance, component search, selection, and retrieval, and finally to support assembly into applications. The other main track of the project concerns component evaluation and

certification, regarded to be a key success factor of a component policy for software manufacturing. 1. Introduction Software plays a more and more significant role in society, and in the industry in particular. Software products are continuously applied more widely and intensively. This trend will continue for the next decades. In the meantime, IT organisations must deal with increasingly complex products and with growing technical and commercial competition, while their customers have grown more emancipated and demanding. Software producers that will be able to master this complexity are likely to earn a leading role in industrial competition. IT organisations will have to address their productivity to face the challenge of international competition. At this point, production principles that are common to other branches of industry are expected to be helpful. One approach is increasing productivity by means of automation or assembling predefined components. Another approach is increasing product quality by using specialised manpower, by automating testing and validation, and favouring reuse of existing components. The application of a combination of these two approaches has proven to be successful in other branches of industry (e.g. the electronics industry). Software component quality will be a key issue in the resulting software component policy, since it contributes directly to the pursued product quality improvement and indirectly to productivity improvement, by its promotion of component reuse. The pursued component policy implies the establishment of a preference for (re)use of previously produced software components to build new software systems. The success of the component policy will to a large extent be determined by the confidence in the components (and in fact in the complete reuse process), which should be reinforced by experiences. This, which is our starting point of product quality improvement, makes component quality a key factor to the success of any component policy implementation. Adequate component evaluation will have to ensure a sufficient level of quality, and component certification (which may well be company internal, first party certification) may demonstrate that a component satisfies relevant requirements, and may confidently be used to construct new applications. Here it should be noticed that evaluation has a more formal nature than testing, and intends to prove that the object satisfies appropriate requirements in order to demonstrate its fitness for intended use, where the objective of testing should be to locate any faults in the object.

Subsequently, the central question is then: what is software component quality, and how should components be evaluated? Which requirements should be fulfilled by any component, which component properties should be tested to indicate the fulfilment of these requirements, and which criteria should be adopted to decide on requirements satisfaction?

2. Reuse theory: concepts, but what about implementation?
For our component policy, we define a software component as: a software item with a discrete structure, for which a separate specification is available. It has a defined, precise behaviour, it is a black box with a defined and documented service interface, and it may have a specific usage context (domain). A software component does not have to be minimal: it may as well be an aggregate of basic components or a framework. Software components may be used by various mechanisms, e.g.:
Cut and paste code; possible subsequent modification affects component.
Subroutine libraries; textual importation, linking or calling of components, usually without subsequent modification (black-box usage).
Object-Oriented technology (inheritance!); usage of objects, classes and frameworks.
Obviously, future software development technology (e.g. automatic application generation) and growing organisational maturity will increase and enhance reuse possibilities, but current industrial practice indicates that widely feasible reuse requires a tangible reuse substance, such as software (code) components, while systematically organized (planned) reuse has the highest benefit potential.
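A small illustration of this definition, assuming Python and an invented tax-calculation domain, shows a component with a documented service interface that the application uses strictly as a black box:

# Illustrative software component: a separately specified item with a
# documented service interface, used as a black box by the application.
class TaxCalculator:
    """Service interface (the component's specification):
    net_to_gross(net: float, rate: float) -> float
    Returns the gross amount for a net amount and a tax rate in [0, 1]."""
    def net_to_gross(self, net, rate):
        # Internal realisation is hidden from the component user.
        return round(net * (1.0 + rate), 2)

# Black-box (compositional) reuse: the application calls only the published
# interface and never depends on the component's internals.
component = TaxCalculator()
print(component.net_to_gross(100.0, 0.21))   # -> 121.0

The application depends only on the published signature; the realisation behind it can change without affecting the component user.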

The concept of reuse is largely established in society and industry: proven solutions for types of problems become accepted, generalized and possibly standardized. Also in software development reuse is actually common practice, but usually practised informally on the individual level, and therefore not to the full benefit. The software industry will achieve large benefits if reuse can be formalized and practised systematically. But how to implement and organize software reuse? Currently available literature on software reuse

theory provides sound concepts for the determination of a reuse policy (such as the taxonomy for reuse by Rubén Prieto-Díaz, showing several perspectives for a classification of reuse), but generally fails to provide a consistent approach for the actual implementation of such a policy. This is probably also the reason why systematic software reuse still fails to occur widely in IT industry, while its potential is commonly recognized. Although providing promising concepts, reuse studies are often incomplete with respect to the implementation of the reuse process (e.g. process descriptions) and the consequences for the organisation. In terms of the mentioned reuse taxonomy, our component policy, aiming to formalize reuse to be systematically and widely used in an organisation as a (currently) practically feasible solution, may be characterized as planned, black-box and compositional reuse of software components. In our component policy, software product quality (and component quality in particular) is considered to be crucial to success. This promotes component evaluation and certification as key activities in the processes to implement such a policy.

We will now discuss the consequences for software testing of adopting a reuse policy. We will identify differences in test object and testing approach, between components and applications. Using these consequences as a reference, we will subsequently discuss the importance and the meaning of software component evaluation and certification. 3. Implementation of component reuse, and the consequences for testing To implement a software component policy, the software development process must be adapted. The software development process methodology and technology determine the possibilities for software reuse and the nature of the used components, but also the availability and adequacy of documents for component evaluation and testing. So the principles of software development methodology are important to our component policy, e.g. the breakdown of the process in separate phases and their relationship with consequent testing. 3.1. The shift from V-model to X-model

The V-model is a well known model for software development that explicitly

addresses the issue of testing, and which can be implemented through various system development methods. Like most common models for software development, the V-model is typically project-oriented. A component policy requires a shift from a project-culture (pursuing short-term project goals) to a component-culture. There are various ways to implement and bootstrap a component policy. The effectiveness and efficiency of the resulting software development process will be determined by the chosen way, which should depend on the organisation's culture and practices.

To express the development and control of reusable components, SCI adopts Hodgson's X-model for component-oriented software development. The X-model expresses two activity cycles:

(1) the production of an application, according to the V-model (representing current software liabilities);

(2) the reverse activity to acquire systems and their artefacts for cataloguing components of the completed work for potential reuse (representing fixed software assets). These notions will be further addressed in section 3.2, where the X-model implementation is discussed. A component-culture requires an asset-based attitude towards software engineering, and the nature and purport of the software development phases shall be different; the organisation and infrastructure of software development shall be adapted. Literature shows that there are several key factors for reuse. A distinction can be made between technical aspects and organisational aspects, such as management and culture. The component policy needs continuous management commitment, and should be driven by business considerations

(i.e. reduction of time-to-market, improvement of productivity in general or improvement of product quality). The possibilities and chances of success are clearly determined by the organisation itself and its maturity. More mature organisations will have an easier task in implementing a component policy. In any case, the implementation requires careful planning and execution, giving proper attention to the cultural aspect: the policy ultimately has to be made operational by the individuals in the organisation. This is also the reason why the quality of the components, establishing positive experiences in using the components, is crucial to the success. Technical key aspects are domain analysis and component evaluation. This paper intends to provide some clarification with respect to the key aspect of quality evaluation of software components. We will discuss the distinction between application development and component development by describing the development processes. We will point to the differences between the objects of testing (i.e. application vs. component) and their essential characteristics. We will describe the differences between testing of components and testing of applications. Finally we will, in accordance with the discussed testing principles, point to the importance of evaluation and certification of software components.

3.2. Consequences for development and testing

We will now have a closer look at this shift from the V-model to the X-model, and at the role of testing in these models.

The X-model for component-based software development recognizes different concurrent lifecycles for components and applications. Consequently, the software development process can be structured into two distinct subprocesses with an interface in between:
(1) component development (producing the components)
(2) application development (using the components)
(3) the system of libraries interfacing the subprocesses (defining and containing the components).

The interface is implemented by the technology that supports the reuse policy, and contains the information base and component base. The implementation of the component policy in application development must be supported by adequate library and distribution systems, to facilitate easy-to-do component search, retrieval and validation. But also the mechanisms and techniques for integration of components into an application should be effectively supported by methods and tools. And since reusability of components is ultimately demonstrated by their actual usage, experience data on actual usage should be registered and provided during application development.
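As a toy sketch only (the structure is an assumption for illustration, not part of SCI or any particular library system), the registration, search and usage-recording functions just described might be outlined in Python like this:

# Toy component library: registration, keyword search/retrieval, and
# recording of usage experience gathered during application development.
class ComponentLibrary:
    def __init__(self):
        self.entries = {}          # name -> {"keywords": [...], "uses": int}

    def register(self, name, keywords):
        self.entries[name] = {"keywords": list(keywords), "uses": 0}

    def search(self, keyword):
        return [n for n, e in self.entries.items() if keyword in e["keywords"]]

    def record_usage(self, name):
        self.entries[name]["uses"] += 1   # experience data on actual usage

lib = ComponentLibrary()
lib.register("TaxCalculator", ["finance", "tax"])
print(lib.search("tax"))        # -> ['TaxCalculator']
lib.record_usage("TaxCalculator")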

In component development, component quality should be a key consideration. As we will see, quality expectations regarding components will be higher, as they should be: once in the component base (library), a component should be appropriate (i.e. reusable with justified confidence) to be used over and over without concern. This makes component evaluation and testing key activities in the implementation of a component policy. In the next section we will discuss both subprocesses of software development, and we will point out some differences regarding testing.

3.2.1. Application development and testing in the V-model

Application development is typically project-oriented, both with respect to institution and execution, and usually pursues short-term project goals. The primary goal is to deliver a product (i.e. the application) that satisfies the customer's needs. But (almost) equally important to customer satisfaction is delivery within time and budget constraints, which seems to have obtained more attention than the aspect of software product quality. Testing savants usually explain the distinction between validation and verification as the questions of developing the right product and developing the product right. One could conclude that generally, we have been more occupied with developing our products right. The management of the software process has been given far more attention than the resulting quality of the software product, both in software development methodology and in quality assurance initiatives (e.g. ISO 9001, CMM). This is probably one of the reasons that testing still has not grown into a mature activity in most software-producing organisations.

Testing as a part of traditional software development is structured according to the concept of hierarchical decomposition. This concept recognizes the notion of units or modules, that should NOT be confused with our notion of components! With the waterfall model, testing as a part of application development used to be regarded as the remainder of a software development project and was consequently used as a safety net to limit exceeding the time and budget constraints. The V-model intends to give testing a more prominent place in software development, by clearly distinguishing different testing activities and preparing these testing activities in an early stage, when the constraints in budget and time are not yet in sight. This certainly has improved the chance of reasonable allocation of testing effort, necessary to find a certain share of the faults that exist in the product before delivery. But still testing is not provided with much guidance on effectiveness.

Software development according to the V-model distinguishes the following testing activities:

(1) unit testing: during global design, the application is decomposed into

units, which are consequently specified in detail and realised (coded) more or less separately. Produced units are also tested separately, usually by glass box testing against the detailed specification.
(2) integration testing: the produced units have to interact properly with

surrounding units and the environment (of hardware and other interfaces), which has to be tested after integration into (part of) the application. This is usually conducted mainly by glass box testing, according to the design (architecture) specification.
(3) functional testing (or system testing): when the complete software system is available, its environment can be established or simulated to test its complete functionality. This is usually conducted by black box testing, against the contract and the requirement specification. Functional testing or system testing in the V-model should not be confused with acceptance testing: functional testing is the responsibility of the software developer, to ensure (or in practice usually improve) satisfaction of user needs by the product's functionality, behaviour and characteristics, while acceptance testing is a responsibility of the customer, to discharge the application developers. In practice not all types of testing activities are always executed during a development process, and reasons for not testing seem to be easily found.
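A minimal sketch using Python's unittest module can make the distinction between these test levels concrete; the discount function under test is invented purely for illustration:

# Sketch of V-model test levels using Python's unittest.
import unittest

def discount(price, customer_type):
    # Unit under test: its detailed specification says "GOLD customers get 10% off".
    return price * 0.9 if customer_type == "GOLD" else price

class UnitLevelTest(unittest.TestCase):
    # Unit test: written against the unit's detailed specification,
    # typically with knowledge of its internal branches (glass box).
    def test_gold_branch(self):
        self.assertAlmostEqual(discount(100.0, "GOLD"), 90.0)
    def test_default_branch(self):
        self.assertAlmostEqual(discount(100.0, "NEW"), 100.0)

class SystemLevelTest(unittest.TestCase):
    # Functional/system-level test: only externally visible behaviour, as
    # stated in the requirements, without reference to the implementation.
    def test_gold_customer_pays_less(self):
        self.assertLess(discount(100.0, "GOLD"), discount(100.0, "NEW"))

if __name__ == "__main__":
    unittest.main()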

3.2.2 Component development and testing in the X-model

The production of reusable components in a component-culture requires other priorities and capabilities. The main principle for libraries of software components is consistency: all components should ideally depart from an overall domain analysis and global design, obeying one set of systematic, explicit and uniform conventions. Effective component development requires an overall, domain-oriented view and a generalizing approach. The component development process is represented by the following figure:

The preceding figure shows a process that represents component development as a part of an X-model implementation. Domain analysis provides the basis for reuse, i.e. a complete domain description, including domain-related component requirements. Subsequent service design provides component specifications, based on the domain description and on requests from application development. Component production manufactures components according to specifications, either by purchase, by

adaptation of existing software (generalization!) or by development from scratch. Component certification is based on a structured evaluation, which includes verification of adequate component testing (i.e. mainly unit testing).

We will now point to some essential differences between the testing of components and application testing. These differences are primarily caused by the completely different nature of the testing object and its intended usage. Components are building blocks that are used indirectly; the application developer (our customer) uses them to construct an application, but does not use the components' functionality directly. The end-user (the actual customer of our organisation) will not be aware of the components' existence: only the complete picture will be perceived. And this complete picture, or rather pictures, is not known at the time of component testing. This is why dynamic testing of components does not make much sense: the component's complete environment (e.g. its execution profile) is principally not known. Summarizing, component testing should be primarily performed by static analysis and glass box testing, while application testing is mainly dynamic, black box testing. The differences between component and application testing can be clarified further by referring to the following classification of test activities (see section 3.2.1).
1) Unit testing (or module testing) generally will be part of component development. Unit testing during component development should be based on relevant requirements and conventions that are provided by domain analysis.
2) Integration testing: If a component consists of more than one unit, their interaction should obviously be tested as a part of component testing. But the interaction with other components in the component base (library) should also be part of component testing. For this purpose, domain analysis should also provide a clear execution model of components in the domain. Furthermore, during application development integration testing should be conducted regarding units that do not originate from component development.
3) Functional testing is not relevant for software components. The environment and the requirements specifications (and contracts) that are necessary to test the complete functionality are not known.
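In that spirit, a hedged sketch of a static, specification-level check (rather than a dynamic test) might look like the following in Python; the specification format is an invented convention used only for this example:

# Sketch: static, specification-level check of a component's service interface
# (no execution profile of a particular application is needed).
import inspect

SPEC = {"net_to_gross": ["net", "rate"]}   # required operations and parameters

class TaxCalculator:
    def net_to_gross(self, net, rate):
        return round(net * (1.0 + rate), 2)

def conforms(component_cls, spec):
    # Verify that every specified operation exists with the specified parameters.
    for operation, params in spec.items():
        fn = getattr(component_cls, operation, None)
        if fn is None:
            return False
        actual = [p for p in inspect.signature(fn).parameters if p != "self"]
        if actual != params:
            return False
    return True

print(conforms(TaxCalculator, SPEC))   # -> True

Such a check can be run the moment a component enters the library, without knowing anything about the applications that will later use it.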

The principles of testing and the practical experiences in the field of testing can only be applied partly in component development. This means that software testing has to be redefined within the software development process when a component policy is implemented. In the next sections we will redefine component testing as component evaluation.
4. Component evaluation: reusability and quality requirements
Component evaluation is an important addition to the testing of software components. Component evaluation should complete and ensure sufficient and adequate testing of a component, for reuse in a specific software development environment. Component evaluation should give answers to questions such as: are all the necessary properties, characteristics and behaviour of a software component clearly specified and correspondingly demonstrated? But also: are all necessary tests conducted to justify the required confidence in the component? And: did the conducted tests provide meaningful and relevant results? Were these results satisfactory (i.e. were the right criteria applied and satisfied)? Subsequently, certification, based on a component evaluation, can be carried out. The certified status implies that the component satisfies the organisation's requirements for reuse, that the component's specification is correct, complete and trustworthy, and that confidence in the component's operation is justified. Component evaluation and certification should be the gateway (condition) for inclusion of a component in the component base, and as such verify whether the component is potentially suitable to be used for application construction in a specific (defined) domain. It should be noted that component evaluation does not address the ease-of-use of a component, which is largely determined by the techniques and tools that support the component policy, and hardly by the component's software itself. Component evaluation primarily addresses qualities that are determined by the component itself. But which qualities are relevant to the component's potential to be used for application development?
4.1. Component requirements for reusability

The requirements for evaluation and certification of software components in the context of a component policy particularly originate from the objective to stimulate the reuse of the software components. The needs of the user of our components (i.e. the application developer) have to be satisfied by the software component's characteristics and behaviour. Component (quality) evaluation should optimally support the complete component policy, but

particularly demonstrate that the component may be confidently (re)used for application development, i.e. that the component is reusable. Therefore, we refer to our component evaluation as reusability evaluation.

It is important now to establish a detailed description of the required reusability, in terms of quality requirements that can be translated to measurable component attributes. Which questions should be answered by the component reusability evaluation? As we argued before, component evaluation can improve unit testing by providing a more meaningful aim, and unit testing should be adjusted to contribute to the answers of the questions to be posed. Still too often we seem to test what is easy to test or even what we always test. But testing should give answers to relevant questions, concentrating on the main concerns. So how can we define the required component quality for reuse, i.e. the reusability?

Software developers generally associate the reuse objective with higher quality demands on the software. A software reuse policy is expected to induce automatic improvement of software quality! Software components for reuse would need more development effort, both with respect to the generalization and abstraction process, and with respect to testing and documenting. A software component policy is expected to improve quality, uniformity and maintainability of the software. These expectations should not be confounded! But the main concern of the application developer will be that the component can be reused with confidence and without the occurrence of unpleasant surprises of any nature. The needs of software developers with respect to reusable components are represented by the following global requirements:

the function and behaviour of the component must be perfectly clear,
the operation of the component must be dependable, and sufficiently verified,
the component's behaviour should not be a burden to performance and resource utilisation.
Subsequently, our objective should be to elaborate these rather vague requirements into a complete and consistent set of detailed, concrete and

measurable requirements for component testing. We can attain this by establishing a quality profile for reusable components, which specifies the required characteristics and properties in a structured way, and which may subsequently be used to determine a strategy for component testing. A quality profile, in this context perhaps better called reusability profile, should be based on a software quality model. For this purpose we have adopted the ISO 9126 standard, providing a software quality model that hierarchically decomposes software quality into characteristics and subcharacteristics.
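As a minimal sketch of what such a reusability profile might look like in machine-readable form (the characteristics follow ISO 9126, but the numeric levels and the acceptance rule are illustrative assumptions only):

# Sketch of a reusability profile: required levels per ISO 9126 characteristic
# and a trivial check of a component's assessed levels against the profile.
REQUIRED_PROFILE = {
    "functionality": 3, "reliability": 3, "usability": 1,
    "efficiency": 2, "maintainability": 3, "portability": 2,
}

def meets_profile(assessed, required=REQUIRED_PROFILE):
    # A component is accepted for the library only if every characteristic
    # reaches at least the required level (an invented acceptance rule).
    return all(assessed.get(ch, 0) >= level for ch, level in required.items())

assessed_component = {"functionality": 3, "reliability": 3, "usability": 2,
                      "efficiency": 2, "maintainability": 3, "portability": 3}
print(meets_profile(assessed_component))   # -> True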

When using this standard to address the quality of software components, one should be aware of the particularities of a component as a software product: as argued before, the immediate usage and user of a component in our component policy are of a completely different nature than those of an application. Therefore, the ISO 9126 quality model should be applied with caution: some software quality (sub)characteristics typically refer to the application level, and are not directly applicable to individual software components (at least not generally) in the intended sense. However, most of the quality characteristics will be relevant to software components, some of them directly from the reusability perspective, and some of them depending on the requirements at the application level. When discussing the importance of the various quality (sub)characteristics, the following possibilities may be distinguished:

(1) a characteristic applies directly to components, and may be generally assessed against a priori stated component properties and associated criteria (originating from company policy, domain analysis and other fixed regulations and conventions); (2) a characteristic applies directly to components, but corresponding component requirements can only be derived from the requirements at application level; for these characteristics it must be ensured that appropriate information is available when reuse of the component is considered: corresponding component properties should be required to be appropriately documented (in a uniform and thus comparable way) and should be validated against those claims;
(3) a characteristic applies typically to the application level and is usually

determined by the functionality of one or more specific components in an

application, but does not generally apply to components; these characteristics (for instance usability!) have a totally different meaning when interpreted at the component level; evaluation at component level (apart from functionality assessment) seems artificial.
Already we can recognize that the component's specification takes a large share in the component's quality for reuse. We should bear in mind that at component evaluation time, we do not have much more available than just the component itself, consisting of its specification and code. Although very important, even a very thorough domain analysis will not provide sufficient information (criteria) for complete a priori testing and evaluation. To a large extent the actual criteria for component properties will only be known at the time of the reuse decision, derived from the application requirements. Therefore, component evaluation and testing should primarily ensure the availability of adequate information to facilitate a justified reuse decision and verify the validity of this information, where conformity assessment against generally applicable criteria is not possible.
5. Conclusions
Software component quality is of crucial importance to the success of a software component policy and its implementation. The customer of the components, the application developer, needs justified confidence in the offered components. Software component evaluation and subsequent certification can provide that confidence.

This paper provided some directions for the specification and evaluation of software components, in particular from the perspective of component quality and testing. Starting from concepts of reuse theory, the differences between testing of applications and testing of components have been presented. It has been discussed that, for software components, testing has some serious restrictions. Software component testing should be enhanced by following the principles of component evaluation. Structured evaluation may be expected to make component testing more effective.

SSADM

Short for Structured Systems Analysis and Design Method, a set of standards developed in the early 1980s for systems analysis and application design, widely used for government computing projects in the United Kingdom. SSADM uses a combination of text and diagrams throughout the whole life cycle of a system design, from the initial design idea to the actual physical design of the application. SSADM uses a combination of three techniques:

Logical Data Modeling -- the process of identifying, modeling and documenting the data requirements of the system being designed. The data is separated into entities (things about which a business needs to record information) and relationships (the associations between the entities). A small sketch follows this list.
Data Flow Modeling -- the process of identifying, modeling and documenting how data moves around an information system. Data Flow Modeling examines processes (activities that transform data from one form to another), data stores (the holding areas for data), external entities (what sends data into a system or receives data from a system), and data flows (routes by which data can flow).
Entity Behavior Modeling -- the process of identifying, modeling and documenting the events that affect each entity and the sequence in which these events occur.
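As a small, hedged illustration of the first technique, the entities and relationships of a logical data model can be written down as simple Python data structures; Customer and Order are invented example entities, not part of SSADM itself:

# Illustration of Logical Data Modeling: entities (things the business records
# information about) and a relationship between them.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    attributes: list

@dataclass
class Relationship:
    name: str
    from_entity: str
    to_entity: str
    cardinality: str

customer = Entity("Customer", ["customer_id", "name", "address"])
order = Entity("Order", ["order_id", "customer_id", "order_date"])
places = Relationship("places", "Customer", "Order", "1:M")

print(f"{places.from_entity} {places.name} {places.to_entity} ({places.cardinality})")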

Each of these three system models provides a different viewpoint of the same system, and each viewpoint is required to form a complete model of the system being designed. The three techniques are cross-referenced against each other to ensure the completeness and accuracy of the whole application. SSADM application development projects are divided into five modules that are further broken down into a hierarchy of stages, steps and tasks:
1. Feasibility Study -- the business area is analyzed to determine whether a system can cost effectively support the business requirements.
2. Requirements Analysis -- the requirements of the system to be developed are identified and the current business environment is modeled in terms of the processes carried out and the data structures involved.
3. Requirements Specification -- detailed functional and non-functional requirements are identified and new techniques are introduced to define the required processing and data structures.
4. Logical System Specification -- technical systems options are produced, together with the logical design of update and enquiry processing and system dialogues.
5. Physical Design -- a physical database design and a set of program specifications are created using the logical system specification and technical system specification.

Unlike rapid application development, which conducts steps in parallel, SSADM builds each step on the work that was prescribed in the previous step, with no deviation from the model. Because of the rigid structure of the methodology, SSADM is praised for its control over projects and its ability to develop better quality systems.

SDLC
(1) Acronym for synchronous data link control, a protocol used in IBM's SNA networks. SDLC is similar to HDLC, an ISO standard.
(2) Acronym for system development life cycle. SDLC is the process of developing information systems through investigation, analysis, design, implementation and maintenance. SDLC is also known as information systems development or application development. SDLC is a systems approach to problem solving and is made up of several phases, each comprising multiple steps:

The software concept - identifies and defines a need for the new system
A requirements analysis - analyzes the information needs of the end users
The architectural design - creates a blueprint for the design with the necessary specifications for the hardware, software, people and data resources
Coding and debugging - creates and programs the final system

System testing - evaluates the system's actual functionality in relation to expected or intended functionality 2.1 Structure and contents of the requirements definition The sectional organization of the requirements definition (ANSI/IEEE guide to Software Requirement Specification [ANSI 1984]) are: 1. Initial situation and goals 2. System application and system environment 3. User interfaces 4. Functional requirements 5. Nonfunctional requirements 6. Exception handling 7. Documentation requirements 8. Acceptance criteria 9. Glossary and index 1. Initial situation and goals Contents: A general description of The initial situation with reference to the requirements analysis,

- the project goals, and
- a delimitation of these goals with respect to the system environment.

2. System application and system environment
Contents: Description of the prerequisites that must apply for the system to be used; specification of the number of users, the frequency of use, and the jobs of the users.
Note: Describe all information that is necessary for the employment of the system but is not part of the implementation.

3. User interfaces
Contents: The human-machine interface.
Notes: This section is one of the most important parts of the requirements definition, documenting how the user communicates with the system. The quality of this section largely determines the acceptance of the software product.

4. Functional requirements
Contents: Definition of the system functionality expected by the user, and all necessary specifications about the type, amount and expected precision of the data associated with each system function.
Notes: Good specifications of system functionality contain only the necessary information about these functions. Any additional specification, such as the solution algorithm for a function, distracts from the actual specification task and restricts the flexibility of the subsequent system design. Only an exact determination of value ranges for data permits a plausibility check to detect input errors.

5. Nonfunctional requirements
Contents: Requirements of a nonfunctional nature: reliability, portability, response and processing times, etc.
Note: For the purpose of the feasibility study, it is necessary to weight these requirements and to provide detailed justification.

6. Exception handling

Contents: Description of the effects of various kinds of errors and the required system behavior upon occurrence of an error.
Note: Developing a reliable system means considering possible errors in each phase of development and providing appropriate measures to prevent or diminish their effects.

7. Documentation requirements
Contents: Establish the scope and nature of the documentation.
Note: The documentation of a system provides the basis both for the correct utilization of the software product and for system maintenance.

8. Acceptance criteria
Contents: Establishing the conditions for inspection of the system by the client.
Notes: The criteria refer to both functional and nonfunctional requirements. Acceptance criteria must be established for each individual system requirement. If no acceptance criteria can be found for a given requirement, then we can assume that the client is unclear about the purpose and value of that requirement.

9. Glossary and index
Contents: A glossary of terms and an extensive index.
Notes: The requirements definition constitutes a document that provides the basis for all phases of a software project and contains preliminary considerations about the entire software life cycle. The specification is normally not read sequentially but serves as a reference for lookup purposes.

2.2 Quality criteria for requirements definition
- It must be correct and complete.
- It must be consistent and unambiguous.
- It should be minimal.
- It should be readable and comprehensible.
- It must be readily modifiable.

2.3 Fundamental problems in defining requirements
The fundamental problems that arise during system specification are [Keller 1989]:

- the goal/means conflict,
- the determination and description of functional requirements, and
- the representation of the user interfaces.

The goal/means conflict in system specification. The primary task of the specification process is to establish the goal of system development rather than to describe the means for achieving the goal. The requirements definition describes what a system must do, but not how the individual functions are to be realized.

Determining and describing the functional requirements. Describing functional requirements in the form of text is extremely difficult and leads to very lengthy specifications. A system model on the user interface level, serving as an executable prototype, supports the exploration of functional, nonfunctional and interaction-related requirements. It simplifies the determination of dependencies between system functions and abbreviates the requirements definition. A prototype that represents the most important functional aspects of a software system represents this system significantly better than a verbal description could.

Designing the user interfaces. User interfaces represent a user-oriented abstraction of the functionality of a system. The graphical design of screen layouts requires particular effort and only affects one aspect of the user interface, its appearance. The much more important aspect, the dynamics behind a user interface, can hardly be depicted in purely verbal specifications. Therefore the user interface components of the requirements definition should always be realized as an executable prototype.

2.4 Algebraic specification
Algebraic specification [Guttag 1977] is a technique whereby an object is specified in terms of the relationships between the operations that act on that object. A specification is presented in four parts (Figure 2.1):
1. Introduction part, where the sort of the entity being specified is introduced and the names of any other specifications which are required are set out
2. Informal description of the sort and its operations
3. Signature, where the names of the operations on that object and the sorts of their parameters are defined
4. Axioms, where the relationships between the sort operations are defined.

<SPECIFICATION NAME> (<Generic Parameter>)
sort <name>
imports <LIST OF SPECIFICATION NAMES>
<Informal description of the sort and its operations>
<operation signatures setting out the names and the types of the parameters to the operations defined over the sort>
<Axioms defining the operations over the sort>

Figure 2.1 The format of an algebraic specification.

Note: The introduction part of a specification also includes an imports part which names the other specifications which are required.
Description part: Formal text with an informal description.
Signature part: Names of the operations which are defined over the sort, the number and sorts of their parameters, and the sort of the result of evaluating each operation.
Axioms part: Defines the operations in terms of their relationships with each other.
Two classes of operations:
- Constructor operations: operations that create or modify entities of the sort defined in the specification.
- Inspection operations: operations that evaluate attributes of the sort defined in the specification.

Example (Figure 2.2)
Sort: Coord.
Operations: creating a coordinate, testing coordinates for equality, and accessing the X and Y components.
Imports: two specifications, BOOLEAN and INTEGER.
Note: In the specification of the Eq operation the operator = is overloaded.

COORD
sort Coord
imports INTEGER, BOOLEAN

This specification defines a sort called Coord representing a Cartesian coordinate. The operations defined on Coord are X and Y, which evaluate the X and Y attributes of an entity of this sort, and Eq, which compares two entities of sort Coord for equality.

Create(Integer, Integer) → Coord
X(Coord) → Integer
Y(Coord) → Integer
Eq(Coord, Coord) → Boolean

X(Create(x, y)) = x
Y(Create(x, y)) = y
Eq(Create(x1, y1), Create(x2, y2)) = ((x1 = x2) and (y1 = y2))

Figure 2.2 The specification of Coord.
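The relationship between the signature and the axioms can be illustrated in code. The following Java sketch is not part of the specification notation; it simply renders the COORD specification with invented class and method names and prints the values the axioms constrain.

// Illustrative Java rendering of the COORD algebraic specification.
public final class Coord {
    private final int x;
    private final int y;

    // Constructor operation: Create(Integer, Integer) -> Coord
    public Coord(int x, int y) {
        this.x = x;
        this.y = y;
    }

    // Inspection operations: X(Coord) -> Integer and Y(Coord) -> Integer
    public int x() { return x; }
    public int y() { return y; }

    // Eq(Coord, Coord) -> Boolean
    public boolean eq(Coord other) {
        return this.x == other.x && this.y == other.y;
    }

    public static void main(String[] args) {
        Coord c1 = new Coord(3, 4);
        Coord c2 = new Coord(3, 4);
        System.out.println("X(Create(3,4)) = " + c1.x()); // axiom: X(Create(x, y)) = x
        System.out.println("Y(Create(3,4)) = " + c1.y()); // axiom: Y(Create(x, y)) = y
        System.out.println("Eq = " + c1.eq(c2));          // axiom: Eq(...) = ((x1 = x2) and (y1 = y2))
    }
}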

2.5 Model-based specification
Specification languages: Z ([Abrial 1980], [Hayes 1987]), VDM ([Jones 1980], [Jones 1986]), RAISE ([RAISE 1992]).
Note: Z is based on typed set theory because sets are mathematical entities whose semantics are formally defined.
Advantages:
- Model-based specification is a technique that relies on formulating a model of the system using well-understood mathematical entities such as sets and functions.
- System operations are specified by defining how they affect the overall system model.
- By contrast with the algebraic approach, the state of the system is exposed and a richer variety of mathematical operations is available.
- State changes are straightforward to define.
- All of the specification for each operation is grouped together.
- The model is more concise than corresponding algebraic specifications.
A specification in Z is presented as a collection of schemas, where a schema introduces some specification entities and sets out relationships between these entities.
Schema form (Figure 2.3):
- Schema name (the top line)

- Schema signature: sets out the names and types of the entities introduced in the schema.
- Schema predicate (the bottom part of the specification): sets out the relationships between the entities in the signature by defining a predicate over the signature entities which must always hold.
<Schema name>
<Schema signature>

<Schema predicate>

Figure 2.3 Schema form

Example 2.1 A specification of a container which can be filled with things (Figure 2.4).
Container
contents: N
capacity: N
contents ≤ capacity

Figure 2.4 The specification of a container

Schema name: Container
Schema signature: contents: a natural number; capacity: a natural number
Schema predicate: contents ≤ capacity (the contents cannot exceed the capacity of the container).

Example 2.2 A specification of an indicator (Figure 2.5).
Schema name: Indicator
Schema signature: light: {off, on}; reading: a natural number; danger: a natural number
Schema predicate: light = on ⇔ reading ≤ danger

Notes: Light is modeled by the values off and on, reading is modeled as a natural number, and danger is modeled as a natural number. The light should be switched on if and only if the reading drops to some dangerous value.
Indicator
light: {off, on}
reading: N
danger: N
light = on ⇔ reading ≤ danger

Figure 2.5 The specification of an indicator

Example 2.3 Given the specification of an indicator and a container, they can be combined to define a hopper, which is a type of container (Figure 2.6).
Schema name: Hopper
Schema signature: Container; Indicator
Schema predicate: reading = contents; capacity = 5000; danger = 50
Notes: The hopper has a capacity of 5000 things. The light comes on when the contents drop to 1% full. We need not specify what is held in the hopper.

Hopper
Container
Indicator
reading = contents
capacity = 5000
danger = 50

Figure 2.6 The specification of a hopper

The effect of combining specifications is to make a new specification which inherits the signatures and the predicates of the included specifications. Thus, hopper inherits the signatures of Container and Indicator and their predicates. These are combined with any new signatures and predicates which are

introduced in the specification. In Figure 2.7, three new predicates are introduced.
Hopper
contents: N
capacity: N
reading: N
danger: N
light: {off, on}
contents ≤ capacity
light = on ⇔ reading ≤ danger
reading = contents
capacity = 5000
danger = 50

Figure 2.7 The expanded specification of a hopper

Example 2.4 Operation FillHopper (Figure 2.8)
The fill operation adds a specified number of entities to the hopper.

FillHopper
ΔHopper
amount?: N
contents' = contents + amount?

Figure 2.8 The specification of the hopper filling operation

New notions: delta schemas (Figure 2.9) and inputs.
The ? symbol: ? is part of the name; names whose final character is a ? are taken to indicate inputs.
Predicate: contents' = contents + amount? (the contents after completion of the operation, referred to as contents', should equal the sum of the contents before the operation and the amount added to the hopper).
ΔHopper
Hopper
Hopper'

Figure 2.9 A delta schema

New notions: Xi schemas (Figure 2.10). Some operations do not result in a change of value, but it is still useful to reference the values before and after the operation. Figure 2.10 (a Xi schema) shows a schema which includes the delta schema and a predicate which states explicitly that the values are unchanged.
ΞHopper
ΔHopper
capacity' = capacity
contents' = contents
reading' = reading
light' = light
danger' = danger

Figure 2.10 A Xi schema

One of the most commonly used techniques in model-based specification is to use functions or mappings in writing specifications. In programming languages, a function is an abstraction over an expression. When provided with an input, it computes an output value based on the value of the input. A partial function is a function where not all possible inputs have a defined output. The domain of a function is the set of inputs over which the function has a defined result. The range of a function is the set of results which the function can produce. If ordering is important, sequences can be used as a specification mechanism. Sequences may be modeled as functions where the domain is the natural numbers greater than zero and the range is the set of entities which may be held in the sequence.
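The state-plus-predicate style of these schemas can be approximated in a programming language. The sketch below is illustrative only: it assumes a Java rendering in which the Container and Indicator predicates become an invariant check and FillHopper becomes a method; class and method names are invented for the example.

// Illustrative Java sketch of the Hopper state schema and the FillHopper operation.
public final class Hopper {
    private static final int CAPACITY = 5000; // capacity = 5000
    private static final int DANGER = 50;     // danger = 50

    private int contents = 0;                 // contents: N (reading = contents for a hopper)
    private boolean lightOn = true;           // light: {off, on}

    // Schema predicates: contents <= capacity and light = on <=> reading <= danger.
    private void checkInvariant() {
        if (contents < 0 || contents > CAPACITY) {
            throw new IllegalStateException("contents must lie between 0 and capacity");
        }
        if (lightOn != (contents <= DANGER)) {
            throw new IllegalStateException("light must be on exactly when reading <= danger");
        }
    }

    // FillHopper: contents' = contents + amount?
    public void fill(int amount) {
        contents = contents + amount;
        lightOn = contents <= DANGER;   // re-establish the Indicator predicate
        checkInvariant();
    }

    public static void main(String[] args) {
        Hopper h = new Hopper();
        h.fill(40);   // light stays on: 40 <= 50
        h.fill(200);  // light goes off: 240 > 50
        System.out.println("Invariant held after both fills.");
    }
}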

Object-oriented design

Figure 1: An object

Object-oriented design is part of OO methodology and it forces programmers to think in terms of objects, rather than procedures, when they plan their code. An object contains encapsulated data and procedures grouped together to represent an entity. The 'object interface', that is, how the object can be interacted with, is also defined. An object-oriented program is described by the interaction of these objects. Object-oriented design is the discipline of defining the objects and their interactions to solve a problem that was identified and documented during object-oriented analysis. From a business perspective, object-oriented design refers to the objects that make up that business. For example, a business object can consist of people, data files, equipment, vehicles, etc. These are the elements which comprise the company and should be taken into consideration whenever analyzing the needs of any business.

Input (sources) for object-oriented design

Conceptual model (must have): The conceptual model is the result of object-oriented analysis; it captures concepts in the problem domain. The conceptual model is explicitly chosen to be independent of implementation details, such as concurrency or data storage.
Use case (must have): A use case is a description of sequences of events that, taken together, lead to a system doing something useful. Each use case provides one or more scenarios that convey how the system should interact with the users, called actors, to achieve a specific business goal or function. Use case actors may be end users or other systems. In many circumstances use cases are further elaborated into use case diagrams. Use case diagrams are used to identify the actors (users or other systems) and the processes they perform.
System Sequence Diagram (should have): A System Sequence Diagram (SSD) is a picture that shows, for a particular scenario of a use case, the events that external actors generate, their order, and possible inter-system events.
User interface documentation (if applicable): Documents that show and describe the look and feel of the end product's user interface. It is not mandatory to have this, but it helps to visualize the end product and therefore helps the designer.
Relational data model (if applicable): A data model is an abstract model that describes how data is represented and used. If an object database is not used, the relational data model should usually be created before

the design can start. How the relational-to-object mapping is done is included in the OO design.

Object-oriented concepts supported by an OO language
The five basic concepts of object-oriented design are the implementation-level features that are built into the programming language. These features are often referred to by these common names:

Object/Class: A tight coupling or association of data structures with the methods or functions that act on the data. This is called a class, or object (an object is created based on a class). Each object serves a separate function. It is defined by its properties: what it is and what it can do. An object can be part of a class, which is a set of objects that are similar.
Information hiding: The ability to protect some components of the object from external entities. This is realized by language keywords that enable a variable to be declared as private or protected to the owning class.
Inheritance: The ability for a class to extend or override functionality of another class. The so-called subclass inherits everything defined in the superclass and then adds its own set of functions and data.
Interface: The ability to defer the implementation of a method; the ability to define the signatures of functions or methods without implementing them.
Polymorphism: The ability to replace an object with its subobjects; the ability of an object variable to contain not only that object but also all of its subobjects.
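A short Java sketch can show all five concepts at once. The class names (Drawable, Shape, Circle) are invented for illustration and are not prescribed by the text.

interface Drawable {                      // Interface: a method signature without implementation
    void draw();
}

class Shape implements Drawable {         // Object/Class: data plus the methods that act on it
    private String name;                  // Information hiding: the field is private to the class

    Shape(String name) { this.name = name; }

    String getName() { return name; }     // controlled access to the hidden field

    public void draw() {
        System.out.println("Drawing a " + name);
    }
}

class Circle extends Shape {              // Inheritance: Circle extends and overrides Shape
    private double radius;

    Circle(double radius) {
        super("circle");
        this.radius = radius;
    }

    @Override
    public void draw() {
        System.out.println("Drawing a circle of radius " + radius);
    }
}

public class OoConcepts {
    public static void main(String[] args) {
        Drawable d = new Circle(2.0);     // Polymorphism: a Circle stands in for a Drawable
        d.draw();                         // the Circle implementation is selected at run time
    }
}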

Designing concepts

Defining objects, creating class diagram from conceptual diagram: Usually map each entity to a class and identify its attributes.
Use design patterns (if applicable): A design pattern is not a finished design; it is a description of a solution to a common problem, in a context[1]. The main advantage of using a design pattern is that it can be reused in multiple applications. It can also be thought of as a template for how to solve a problem that can be used in many different situations and/or applications (a minimal sketch of one such pattern appears at the end of this list). Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved.
Define application framework (if applicable): Application framework is a term usually used to refer to a set of libraries or classes that are used to

implement the standard structure of an application for a specific operating system. By bundling a large amount of reusable code into a framework, much time is saved for the developer, since he/she is saved the task of rewriting large amounts of standard code for each new application that is developed.

Identify persistent objects/data (if applicable): Identify objects that have to last longer than a single runtime of the application. If a relational database is used, design the object-relational mapping.
Identify and define remote objects (if applicable).
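As an illustration of reusing a design pattern, the following sketch shows one common pattern (Strategy, chosen here purely as an example; the text does not prescribe a particular pattern). The pattern fixes the relationships between an interface and interchangeable implementations without dictating the final application classes; all names are invented.

import java.util.List;

// Strategy pattern: an interchangeable algorithm behind a common interface.
interface PricingStrategy {
    double priceFor(double baseAmount);
}

class RegularPricing implements PricingStrategy {
    public double priceFor(double baseAmount) { return baseAmount; }
}

class DiscountPricing implements PricingStrategy {
    public double priceFor(double baseAmount) { return baseAmount * 0.9; } // 10% off
}

class Order {
    private final double baseAmount;
    private final PricingStrategy pricing;   // the order is configured with a strategy

    Order(double baseAmount, PricingStrategy pricing) {
        this.baseAmount = baseAmount;
        this.pricing = pricing;
    }

    double total() { return pricing.priceFor(baseAmount); }
}

public class StrategyDemo {
    public static void main(String[] args) {
        List<Order> orders = List.of(
            new Order(100.0, new RegularPricing()),
            new Order(100.0, new DiscountPricing()));
        orders.forEach(o -> System.out.println("Order total: " + o.total()));
    }
}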

Output (deliverables) of object-oriented design

Class diagram: A class diagram is a type of static structure UML diagram that describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes. Sequence Diagram: Extends the System Sequence Diagram to add specific objects that handle the system events. These are usually created for important and complex system events, not for simple or trivial ones. A sequence diagram shows, as parallel vertical lines, different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order in which they occur.

Programming concepts

Aspect-oriented programming: One view of aspect-oriented programming (AOP) is that every major feature of the program, core concern (business logic), or cross-cutting concern (additional features), is an aspect, and by weaving them together (also called composition) you finally produce a whole out of the separate aspects.
Dependency injection: The basic idea is that if an object depends upon having an instance of some other object, then the needed object is "injected" into the dependent object; for example, being passed a database connection as an argument to the constructor instead of creating one internally (a minimal sketch follows this list).
Acyclic dependencies principle: The dependency graph of packages or components should have no cycles. This is also referred to as having a directed acyclic graph.[2] For example, package C depends on package B, which depends on package A. If package A also depended on package C, then you would have a cycle.

Composite reuse principle: Favor polymorphic composition of objects over inheritance.[1]
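A minimal constructor-injection sketch, with invented names and no framework assumed, makes the dependency injection idea concrete: the service receives its collaborator instead of constructing it.

// Dependency injection: the collaborator is passed in rather than created internally.
interface MessageStore {
    void save(String message);
}

class InMemoryStore implements MessageStore {
    public void save(String message) {
        System.out.println("Stored: " + message);
    }
}

class GreetingService {
    private final MessageStore store;      // the injected dependency

    GreetingService(MessageStore store) {  // constructor injection
        this.store = store;
    }

    void greet(String name) {
        store.save("Hello, " + name);
    }
}

public class InjectionDemo {
    public static void main(String[] args) {
        // The caller (or a container) decides which implementation to inject.
        GreetingService service = new GreetingService(new InMemoryStore());
        service.greet("world");
    }
}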

Software maintenance
Software maintenance is one of the activities in software engineering, and is the process of enhancing and optimizing deployed software (software release), as well as remedying defects. Software maintenance is also one of the phases in the System Development Life Cycle (SDLC), as it applies to software development. The maintenance phase is the phase which comes after deployment of the software into the field. The software maintenance phase involves changes to the software in order to correct defects and deficiencies found during field usage as well as the addition of new functionality to improve the software's usability and applicability. Software maintenance involves a number of specific techniques. One technique is static slicing, which is used to identify all the program code that can modify some variable. It is generally useful in refactoring program code and was specifically useful in assuring Y2K compliance. The software maintenance phase is an explicit part of the waterfall model of the software development process, which was developed during the structured programming movement of computer programming. The other major model, the spiral model, developed during the object-oriented movement of software engineering, makes no explicit mention of a maintenance phase. Nevertheless, this activity is notable, considering the fact that two-thirds of a software system's lifetime cost involves maintenance (Page-Jones pg 31). In a formal software development environment, the developing organization or team will have some mechanisms to document and track defects and deficiencies. Software, just like most other products, is typically released with a known set of defects and deficiencies. The software is released with the issues because the development organization decides the utility and value of the software at a particular level of quality outweighs the impact of the known defects and deficiencies. The known issues are normally documented in a letter of operational considerations or release notes so that the users of the software will be able to work around the known issues and will know when the use of the software would be inappropriate for particular tasks. With the release of the software, other, undocumented defects and deficiencies will be discovered by the users of the software. As these issues are reported into the development organization, they will be entered into the defect tracking system. The people involved in the software maintenance phase are expected to work on these known issues, address them, and prepare for a new release of the

software, known as a maintenance release, which will address the documented issues. Capability Maturity Model Capability Maturity Model (CMM) is a collection of instructions an organization can follow with the purpose to gain better control over its Software development process. The CMM ranks software development organizations in a hierarchy of five levels, each with a progressively greater capability of producing quality software. Each level is described as a level of maturity. Those 5 levels are equipped with different number of instructions to follow. If an organization is on level 1 (currently an estimated 75% of software development organizations exist at this level, which can be best described as chaotic [source as of May 10, 1998]), it only follows few of the instructions in CMM, if on level 5 it follows everything from CMM. The CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in Pittsburgh. It has been used extensively for avionics software and for government projects since it was created in the mid1980s. Maturity model A maturity model is a structured collection of elements that describe characteristics of effective processes. A maturity model provides: a place to start the benefit of a communitys prior experiences a common language and a shared vision a framework for prioritizing actions a way to define what improvement means for your organization A maturity model can be used as a benchmark for assessing different organizations for equivalent comparison. The SEI has subsequently released a revised version known as the Capability Maturity Model Integration (CMMI). History Like best practices, the Capability Maturity Model was initially funded by military research, but its method of process improvement could not be more different. Where the best practices approach is "bottom up" and quite informal, the Capability Maturity Model is rigid, "top down", and prescriptive. The United States Air Force funded a study at the Carnegie-Mellon Software Engineering Institute to create a model for the military to use as an objective evaluation of software subcontractors. The result was the Capability Maturity Model, published as Managing the Software Process in 1989. The CMM has since been revised and updated; version 1.1 is now in print and the entire text is available on-line at the SEI's Web site. Context The term software originates from the idea that software is easy to change ("soft") in comparison to hardware, which was more difficult to change ("hard"). Another theory: software is soft in the sense that it is not tangible, unlike hardware, which we can replace and touch. In the 1970s, the field of

software development saw significant growth as more organizations began to move to computerized information systems. With this significant growth, two events began unfolding. The first event was that computerized information systems became commonplace and improved computer hardware allowed for more ambitious information system projects. Along with the improved computer hardware, new technologies and manufacturing processes resulted in cheaper, more reliable, and more flexible computer platforms and peripherals, which in turn encouraged the use of information systems in more diverse applications. The second event was the need for many more people to develop the software needed for the computers created by the explosion in the number of computer information systems due to the increased application of computers to organizational problems. This in turn meant that people with little experience in the art of developing computer software moved into that area of work. Not only was there increased demand for people to design and write computer software, there was also increased demand for people to manage these projects. Many software projects failed due to inadequate processes and project management. This was primarily due to two causes. The first was that software development, both the design and writing of computer software as well as the management of software development projects, did not have a large body of published work discussing software development, and what work existed was not used by industry to any great extent. The second cause was that, as information systems became more commonplace and people became more ambitious in the application of computer systems to organizational problems, the projects attempted moved from well-known areas such as accounting systems or inventory systems, which involved primarily numbers and the embedding of an abstract model into a computing platform with software, to applications which involved the movement of physical objects in the real world. In addition, software development teams ran into the problem of attempting to model complex systems, such as the complete information flows of an enterprise, within information systems. The sheer complexity of the problem led to project failure. During the 1970s there were a number of proponents for a more scientific and professional practice. People such as Edward Yourdon, Larry Constantine, Gerald Weinberg, Tom DeMarco, and David Parnas published articles and books with research results in an attempt to professionalize the software development community. Watts Humphrey's Capability Maturity Model (CMM) was described in the book Managing the Software Process (1989). The CMM as conceived by Watts Humphrey was based on the earlier work of Phil Crosby. Active development of the model by the SEI (US Dept. of Defense Software Engineering Institute) began in 1986. The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. Though it

comes from the area of software development, it can be, has been and continues to be widely applied as a general model of the maturity of processes (e.g., ITIL service management processes) in IS/IT (and other) organisations. The model identifies five levels of process maturity for an organization: 1. Initial (chaotic, ad hoc, heroic) the starting point for use of a new process. 2. Repeatable (project management, process discipline) the process is used repeatedly. 3. Defined (institutionalized) the process is defined/confirmed as a standard business process. 4. Managed (quantified) process management and measurement takes place. 5. Optimizing (process improvement) process management includes deliberate process optimization/improvement. Within each of these maturity levels are KPAs (Key Process Areas) which characterize that level, and for each KPA there are five definitions identified: 1. Goals 2. Commitment 3. Ability 4. Measurement 5. Verification The KPAs are not necessarily unique to CMM, representing - as they do - the stages that organizations must go through on the way to becoming mature. The SEI has defined a rigorous process assessment method to appraise how well a software development organization meets the criteria for each level. The assessment is supposed to be led by an authorized lead assessor. One way in which companies are supposed to use the model is first to assess their maturity level and then form a specific plan to get to the next level. Skipping levels is not allowed. NB: The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. It may be suited for that purpose. When it became a general model for software process improvement, there were many critics. Shrinkwrap companies, which have also been called commercial offtheshelf firms or software package firms, included Borland, Claris, Apple, Symantec, Microsoft, and Lotus, amongst others. Many such companies rarely if ever managed their requirements documents as formally as the CMM described. This is a requirement to achieve level 2, and so all of these companies would probably fall into level 1 of the model. Origins The United States Air Force funded a study at the SEI to create a model for the military to use as an objective evaluation of software subcontractors. In 1989, the Capability Maturity Model was published as Managing the Software Process. Current state Although these models have proved useful to many organizations, the use of multiple models has been problematic. Further, applying multiple models that are not integrated within and across an organization is costly in terms of training, appraisals, and improvement activities. The CMM Integration project was formed to sort out the problem of using multiple CMMs. The CMMI Product Team's mission was to combine three source models: 1. The Capability Maturity Model for Software (SW-CMM) v2.0 draft C 2. The Systems Engineering Capability Model (SECM)

3. The Integrated Product Development Capability Maturity Model (IPDCMM) v0.98 4. Supplier sourcing CMMI is the designated successor of the three source models. The SEI has released a policy to sunset the Software CMM. The same can be said for the SECM and the IPD-CMM. These models are expected to be succeeded by CMMI. Levels of the CMM There are five levels of the CMM. According to the SEI, "Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief." Level 1 - Initial At maturity level 1, processes are usually ad hoc and the organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization and not on the use of proven processes. In spite of this ad hoc, chaotic environment, maturity level 1 organizations often produce products and services that work; however, they frequently exceed the budget and schedule of their projects. Maturity level 1 organizations are characterized by a tendency to over commit, abandon processes in the time of crisis, and not be able to repeat their past successes again. Level 2 - Repeatable At maturity level 2, software development successes are repeatable. The organization may use some basic project management to track cost and schedule. Process discipline helps ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans. Project status and the delivery of services are visible to management at defined points (for example, at major milestones and at the completion of major tasks). Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications. Level 3 - Defined At maturity level 3, processes are well characterized and understood, and are described in standards, procedures, tools, and methods. The organizations set of standard processes, which is the basis for level 3, is established and improved over time. These standard processes are used to establish consistency across the organization. Projects establish their defined processes by the organizations set of standard processes according to tailoring guidelines.

The organizations management establishes process objectives based on the organizations set of standard processes and ensures that these objectives are appropriately addressed. A critical distinction between level 2 and level 3 is the scope of standards, process descriptions, and procedures. At level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on a particular project). At level 3, the standards, process descriptions, and procedures for a project are tailored from the organizations set of standard processes to suit a particular project or organizational unit. Level 4 - Managed Using precise measurements, management can effectively control the software development effort. In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Sub processes are selected that significantly contribute to overall process performance. These selected sub processes are controlled using statistical and other quantitative techniques. A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable. Level 5 - Optimizing Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements. Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement. The effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. Both the defined processes and the organizations set of standard processes are targets of measurable improvement activities. Process improvements to address common causes of process variation and measurably improve the organizations processes are identified, evaluated, and deployed. Optimizing processes that are nimble, adaptable and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organization. The organizations ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate and share learning. A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At maturity level 4, processes are concerned with addressing special causes of process variation and providing statistical predictability of the results. Though processes may produce predictable results, the results may be insufficient to achieve the established objectives.

At maturity level 5, processes are concerned with addressing common causes of process variation and changing the process (that is, shifting the mean of the process performance) to improve process performance (while maintaining statistical probability) to achieve the established quantitative processimprovement objectives. Extensions Recent versions of CMMI from SEI indicate a "level 0", characterized as "Incomplete". Many observers leave this level out as redundant or unimportant, but Pressman and others make note of it. See page 18 of the August 2002 edition of CMMI from SEI (Note: PDF file). Anthony Finkelstein[1] extrapolated that negative levels are necessary to represent environments that are not only indifferent, but actively counterproductive, and this was refined by Tom Schorsch[2] as the Capability Immaturity Model: Process areas The CMMI contains several key process areas indicating the aspects of product development that are to be covered by company processes. The software industry is diverse and volatile. All methodologies for creating software have supporters and critics, and the CMM is no exception. Praise The CMM was developed to give Defense organizations a yardstick to assess and describe the capability of software contractors to provide software on time, within budget, and to acceptable standards. It has arguably been successful in this role, even reputedly causing some software sales people to clamour for their organizations' software engineers/developers to "implement CMM." The CMM is intended to enable an assessment of an organization's maturity for software development. It is an important tool for outsourcing and exporting software development work. Economic development agencies in India, Ireland, Egypt, and elsewhere have praised the CMM for enabling them to be able to compete for US outsourcing contracts on an even footing. The CMM provides a good framework for organizational improvement. It allows companies to prioritize their process improvement initiatives. Criticism CMM has failed to take over the world. It's hard to tell exactly how wide spread it is as the SEI only publishes the names and achieved levels of compliance of companies that have requested this information to be listed. The most current Maturity Profile for CMMI is available online. CMM is well suited for bureaucratic organizations such as government agencies, large corporations and regulated monopolies. If the organizations deploying CMM are large enough, they may employ a team of CMM auditors reporting their results directly to the executive level. (A practice encouraged by SEI.) The use of auditors and executive reports may influence the entire IT organization to focus on perfectly completed forms rather than application development, client needs or


the marketplace. If the project is driven by a due date, CMM's intensive reliance on process and forms may become a hindrance to meeting the due date in cases where time to market with some kind of product is more important than achieving high quality and functionality of the product. Suggestions of scientifically managing the software process with metrics only occur beyond the Fourth level. There is little validation of the process's cost savings to business other than a vague reference to empirical evidence. It is expected that a large body of evidence would show that adding all the business overhead demanded by CMM somehow reduces IT headcount, business cost, and time to market without sacrificing client needs. No external body actually certifies a software development center as being CMM compliant. It is supposed to be an honest self-assessment ([5] and [6]). The CMM does not describe how to create an effective software development organization. The CMM contains behaviors or best practices that successful projects have demonstrated. Being CMM compliant is not a guarantee that a project will be successful; however, being compliant can increase a project's chances of being successful. The CMM can seem to be overly bureaucratic, promoting process over substance, for example by emphasizing predictability over service provided to end users. More commercially successful methodologies (for example, the Rational Unified Process) have focused not on the capability of the organization to produce software to satisfy some other organization or a collectively-produced specification, but on the capability of organizations to satisfy specific end user "use cases" as per the Object Management Group's UML (Unified Modeling Language) approach[7].

The most beneficial elements of CMM Level 2 and 3
- Creation of Software Specifications, stating what it is that is going to be developed, combined with formal sign-off, an executive sponsor and an approval mechanism. This is NOT a living document, but additions are placed in a deferred or out-of-scope section for later incorporation into the next cycle of software development.
- A Technical Specification, stating precisely how the thing specified in the Software Specifications is to be developed. This is a living document.
- Peer Review of Code (Code Review) with metrics that allow developers to walk through an implementation and to suggest improvements or changes. Note: this is problematic because the code has already been developed and a bad design cannot be fixed by "tweaking"; the Code Review gives complete code a formal approval mechanism.
- Version Control: a very large number of organizations have no formal revision control mechanism or release mechanism in place.

- The idea that there is a "right way" to build software: that it is a scientific process involving engineering design, and that groups of developers are not there to simply work on the problem du jour.

Software reuse
In most engineering disciplines, systems are designed by composing existing components that have been used in other systems. Software engineering has been more focused on original development, but it is now recognised that to achieve better software, more quickly and at lower cost, we need to adopt a design process that is based on systematic software reuse.

Reuse-based software engineering
- Application system reuse: The whole of an application system may be reused either by incorporating it without change into other systems (COTS reuse) or by developing application families.
- Component reuse: Components of an application, from sub-systems to single objects, may be reused. Covered in Chapter 19.
- Object and function reuse: Software components that implement a single well-defined object or function may be reused.

Reuse benefits 1
Increased dependability: Reused software, that has been tried and tested in working systems, should be more dependable than new software. The initial use of the software reveals any design and implementation faults. These are then fixed, thus reducing the number of failures when the software is reused.
Reduced process risk: If software exists, there is less uncertainty in the costs of reusing that software than in the costs of development. This is an important factor for project management as it reduces the margin of error in project cost estimation. This is particularly true when relatively large software components such as sub-systems are reused.
Effective use of specialists: Instead of application specialists doing the same work on different projects, these specialists can develop reusable software that encapsulates their knowledge.

Reuse benefits 2
Standards compliance: Some standards, such as user interface standards, can be implemented as a set of standard reusable components. For example, if menus in a user interface are implemented using reusable components, all applications present the same menu formats to users. The use of standard user interfaces improves dependability as users are less likely to make mistakes when presented with a familiar interface.
Accelerated development: Bringing a system to market as early as possible is often more important than overall development costs. Reusing software can speed up system production because both development and validation time should be reduced.

Reuse problems 1
Increased maintenance costs: If the source code of a reused software system or component is not available, then maintenance costs may be increased as the reused elements of the system may become increasingly incompatible with system changes.
Lack of tool support: CASE toolsets may not support development with reuse. It may be difficult or impossible to integrate these tools with a component library system. The software process assumed by these tools may not take reuse into account.
Not-invented-here syndrome: Some software engineers prefer to re-write components as they believe that they can improve on the reusable component. This is partly to do with trust and partly to do with the fact that writing original software is seen as more challenging than reusing other people's software.

Reuse problems 2
Creating and maintaining a component library: Populating a reusable component library and ensuring the software developers can use this library can be expensive. Our current techniques for classifying, cataloguing and retrieving software components are immature.
Finding, understanding and adapting reusable components: Software components have to be discovered in a library, understood and, sometimes, adapted to work in a new environment. Engineers must be reasonably confident of finding a component in the library before they will routinely include a component search as part of their normal development process.

The reuse landscape
Although reuse is often simply thought of as the reuse of system components, there are many different approaches to reuse that may be used. Reuse is possible at a range of levels from simple functions to complete application systems. The reuse landscape covers the range of possible reuse techniques.

The reuse landscape includes: design patterns, component frameworks, component-based development, application product lines, COTS integration, configurable vertical applications, program libraries, aspect-oriented software development, program generators, legacy system wrapping, and service-oriented systems.

Reuse approaches 1
Design patterns: Generic abstractions that occur across applications are represented as design patterns that show abstract and concrete objects and interactions.
Component-based development: Systems are developed by integrating components (collections of objects) that conform to component-model standards. This is covered in Chapter 19.
Application frameworks: Collections of abstract and concrete classes that can be adapted and extended to create application systems.
Legacy system wrapping: Legacy systems (see Chapter 2) can be wrapped by defining a set of interfaces and providing access to these legacy systems through these interfaces.
Service-oriented systems: Systems are developed by linking shared services that may be externally provided.

Reuse approaches 2
Application product lines: An application type is generalised around a common architecture so that it can be adapted in different ways for different customers.
COTS integration: Systems are developed by integrating existing application systems.
Configurable vertical applications: A generic system is designed so that it can be configured to the needs of specific system customers.
Program libraries: Class and function libraries implementing commonly-used abstractions are available for reuse.
Program generators: A generator system embeds knowledge of a particular type of application and can generate systems or system fragments in that domain.
Aspect-oriented software development: Shared components are woven into an application at different places when the program is compiled.

Reuse planning factors
- The development schedule for the software.
- The expected software lifetime.
- The background, skills and experience of the development team.
- The criticality of the software and its non-functional requirements.
- The application domain.
- The execution platform for the software.

Concept reuse
When you reuse program or design components, you have to follow the design decisions made by the original developer of the component. This may limit the opportunities for reuse. However, a more abstract form of reuse is concept reuse, where a particular approach is described in an implementation-independent way and an implementation is then developed. The two main approaches to concept reuse are design patterns and generative programming.

Design patterns
A design pattern is a way of reusing abstract knowledge about a problem and its solution. A pattern is a description of the problem and the essence of its solution. It should be sufficiently abstract to be reused in different settings. Patterns often rely on object characteristics such as inheritance and polymorphism.

Pattern elements
- Name: a meaningful pattern identifier.
- Problem description.
- Solution description: not a concrete design but a template for a design solution that can be instantiated in different ways.
- Consequences: the results and trade-offs of applying the pattern.

Generator-based reuse
Program generators involve the reuse of standard patterns and algorithms. These are embedded in the generator and parameterised by user commands. A program is then automatically generated. Generator-based reuse is possible when domain abstractions and their mapping to executable code can be identified. A domain-specific language is used to compose and control these abstractions.

Types of program generator
- Application generators for business data processing.
- Parser and lexical analyser generators for language processing.
- Code generators in CASE tools.
Generator-based reuse is very cost-effective, but its applicability is limited to a relatively small number of application domains. It is easier for end-users to develop programs using generators compared to other component-based approaches to reuse (a small illustrative generator sketch follows the figure below).

Reuse through program generation

(Figure: reuse through program generation. An application description, together with application domain knowledge, is fed to a program generator, which produces a generated program that uses a database.)
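A toy generator makes the idea concrete. The sketch below is illustrative only: the "application description" is reduced to an entity name and a list of fields, and the generator emits Java source text following a fixed pattern; all names are invented.

import java.util.List;

// Minimal sketch of generator-based reuse: a standard class pattern is embedded
// in the generator and parameterised by the user's description.
public class ClassGenerator {
    // Generate a simple Java class with private fields and getters for each field.
    static String generate(String className, List<String> fields) {
        StringBuilder src = new StringBuilder("public class " + className + " {\n");
        for (String f : fields) {
            src.append("    private String ").append(f).append(";\n");
        }
        for (String f : fields) {
            String cap = Character.toUpperCase(f.charAt(0)) + f.substring(1);
            src.append("    public String get").append(cap)
               .append("() { return ").append(f).append("; }\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        // The "application description" supplied by the user.
        System.out.println(generate("Customer", List.of("name", "address")));
    }
}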

Aspect-oriented development
Aspect-oriented development addresses a major software engineering problem: the separation of concerns. Concerns are often not simply associated with application functionality but are cross-cutting, e.g. all components may monitor their own operation, all components may have to maintain security, etc. Cross-cutting concerns are implemented as aspects and are dynamically woven into a program. The concern code is reused and the new system is generated by the aspect weaver (an illustrative sketch follows the figure below).

Aspect-oriented development
(Figure: aspect-oriented development. Input source code containing join points, together with Aspect 1 and Aspect 2, is processed by the aspect weaver; the generated code has the aspect code woven in at the join points.)
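The weaving step can be imitated by hand to show what an aspect weaver automates. The sketch below assumes no AOP tool: it wraps a core operation with two cross-cutting concerns at an explicit join point, which is what a weaver would insert automatically across a whole program; all names are invented.

import java.util.function.Consumer;

public class WeavingDemo {
    // Core concern (the "input source code"): plain business logic.
    static void placeOrder(String item) {
        System.out.println("Order placed for " + item);
    }

    // Cross-cutting concern 1: logging, inserted around the join point.
    static Consumer<String> withLogging(Consumer<String> core) {
        return item -> {
            System.out.println("[log] entering placeOrder");
            core.accept(item);
            System.out.println("[log] leaving placeOrder");
        };
    }

    // Cross-cutting concern 2: simple timing.
    static Consumer<String> withTiming(Consumer<String> core) {
        return item -> {
            long start = System.nanoTime();
            core.accept(item);
            System.out.println("[time] took " + (System.nanoTime() - start) + " ns");
        };
    }

    public static void main(String[] args) {
        // "Weave" the two aspects around the core concern, then run the composed code.
        Consumer<String> woven = withTiming(withLogging(WeavingDemo::placeOrder));
        woven.accept("printer paper");
    }
}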

Application frameworks Frameworks are a sub-system design made up of a collection of abstract and concrete classes and the interfaces between them. The sub-system is implemented by adding components to fill in parts of the design and by instantiating the abstract classes in the framework. Frameworks are moderately large entities that can be reused.

Framework classes
- System infrastructure frameworks: support the development of system infrastructures such as communications, user interfaces and compilers.
- Middleware integration frameworks: standards and classes that support component communication and information exchange.
- Enterprise application frameworks: support the development of specific types of application such as telecommunications or financial systems.

Extending frameworks
Frameworks are generic and are extended to create a more specific application or sub-system. Extending the framework involves:
- adding concrete classes that inherit operations from abstract classes in the framework;
- adding methods that are called in response to events that are recognised by the framework.
A problem with frameworks is their complexity, which means that it takes a long time to use them effectively.

Application system reuse
Involves the reuse of entire application systems either by configuring a system for an environment or by integrating two or more systems to create a new application. Two approaches are covered here: COTS product integration and product line development.

COTS product reuse
COTS stands for Commercial Off-The-Shelf systems. COTS systems are usually complete application systems that offer an API (Application Programming Interface). Building large systems by integrating COTS systems is now a viable development strategy for some types of system, such as E-commerce systems. The key benefit is faster application development and, usually, lower development costs.

COTS design choices
- Which COTS products offer the most appropriate functionality? There may be several similar products that may be used.
- How will data be exchanged? Individual products use their own data structures and formats.
- What features of the product will actually be used? Most products have more functionality than is needed. You should try to deny access to unused functionality.

E-procurement system

(Figure: e-procurement system. Client side: web browser and e-mail system. Server side: e-commerce system connected through adaptors to an ordering and invoicing system and an e-mail system.)

COTS products reused
On the client, standard e-mail and web browsing programs are used. On the server, an e-commerce platform has to be integrated with an existing ordering system. This involves writing an adaptor so that they can exchange data. An e-mail system is also integrated to generate e-mail for clients. This also requires an adaptor to receive data from the ordering and invoicing system.

COTS system integration problems
- Lack of control over functionality and performance: COTS systems may be less effective than they appear.
- Problems with COTS system inter-operability: different COTS systems may make different assumptions, which means integration is difficult.
- No control over system evolution: COTS vendors, not system users, control evolution.
- Support from COTS vendors: COTS vendors may not offer support over the lifetime of the product.

Software product lines
Software product lines or application families are applications with generic functionality that can be adapted and configured for use in a specific context. Adaptation may involve:
- component and system configuration;
- adding new components to the system;
- selecting from a library of existing components;
- modifying components to meet new requirements.

COTS product specialisation
- Platform specialisation: different versions of the application are developed for different platforms.
- Environment specialisation: different versions of the application are created to handle different operating environments, e.g. different types of communication equipment.
- Functional specialisation: different versions of the application are created for customers with different requirements.
- Process specialisation: different versions of the application are created to support different business processes.

COTS configuration
- Deployment time configuration: a generic system is configured by embedding knowledge of the customer's requirements and business processes. The software itself is not changed.
- Design time configuration: a common generic code is adapted and changed according to the requirements of particular customers.

ERP system organization

(Figure: ERP system organization. A configuration planning tool configures the generic ERP system via a configuration database; the ERP system uses a system database.)
ERP systems
An Enterprise Resource Planning (ERP) system is a generic system that supports common business processes such as ordering and invoicing, manufacturing, etc. These are very widely used in large companies - they represent probably the most common form of software reuse. The generic core is adapted by including modules and by incorporating knowledge of business processes and rules.

Design time configuration
Software product lines that are configured at design time are instantiations of generic application architectures, as discussed in Chapter 13. Generic products usually emerge after experience with specific products.

Product line architectures
Architectures must be structured in such a way as to separate different sub-systems and to allow them to be modified. The architecture should also separate entities and their descriptions, and the higher levels in the system should access entities through descriptions rather than directly.

A resource management system

A resource management system
[Figure: layered architecture — user interface; user authentication, resource delivery and query management; resource management, resource policy control and resource allocation; transaction management; resource database.]


Vehicle despatching

A specialised resource management system where the aim is to allocate resources (vehicles) to handle incidents. Adaptations include:
At the UI level, components for operator display and communications;
At the I/O management level, components that handle authentication, reporting and route planning;
At the resource management level, components for vehicle location and despatch, managing vehicle status and incident logging;
At the database level, equipment, vehicle and map databases.

A despatching system

[Figure: layered architecture — user interface; comms system interface; report generator, map and route planner, operator authentication; query manager; vehicle status manager and incident logger; equipment manager, vehicle despatcher and vehicle locator; transaction management; equipment database, incident log, vehicle database and map database.]

Product instance development
[Figure: process — elicit stakeholder requirements, choose closest-fit family member, renegotiate requirements, adapt existing system, deliver new family member.]

Product instance development

Elicit stakeholder requirements: use an existing family member as a prototype.
Choose closest-fit family member: find the family member that best meets the requirements.
Re-negotiate requirements: adapt the requirements as necessary to the capabilities of the software.
Adapt existing system: develop new modules and make changes for the family member.
Deliver new family member: document its key features for further member development.

Key points

Advantages of reuse are lower costs, faster software development and lower risks.
Design patterns are high-level abstractions that document successful design solutions.
Program generators are also concerned with software reuse: the reusable concepts are embedded in a generator system.
Application frameworks are collections of concrete and abstract objects that are designed for reuse through specialisation.
COTS product reuse is concerned with the reuse of large, off-the-shelf systems.
Problems with COTS reuse include lack of control over functionality, performance and evolution, and problems with inter-operation.
ERP systems are created by configuring a generic system with information about a customer's business.
Software product lines are related applications developed around a common core of shared functionality.

Principles of user-centred design

The key principles of user-centred design were developed from the design of the OMS (Gould, 1987):
Focus early in the design process on users and their tasks.
Measure users' reactions and performance empirically: their use of scenarios, manuals, simulations and prototypes is observed, recorded and analysed.
Design iteratively: when problems are found in user testing, fix them and carry out more tests.
All usability factors must emerge together and be under the responsibility of one control group.
Gould commented that of 450 system designers and developers who were asked to write down the steps they recommend in the design of an office system, 26 percent mentioned none of the principles, and another 35 percent mentioned only one. Clearly, the principles of user-centred design were far from obvious to designers at that time.

10.9 Debugging

Characteristics of Bugs
The symptom and the cause may be geographically remote.
The symptom may disappear (temporarily) when another error is corrected.
The symptom may actually be caused by non-errors (e.g. round-off inaccuracies).
The symptom may be caused by a human error that is not easily traced.
The symptom may be the result of timing problems rather than processing problems.
It may be difficult to accurately reproduce input conditions (e.g. a real-time application in which input ordering is indeterminate).
The symptom may be intermittent. This is particularly common in embedded systems that couple hardware and software inextricably.
The symptom may be due to causes that are distributed across a number of tasks running on different processors.

Debugging Approaches

Brute force: probably the most common and least efficient method for isolating the cause of a software error. The program is loaded with run-time traces and WRITE statements, in the hope that some of the output will provide a clue to the cause of the error.
Backtracking: fairly common in small programs. Starting from where the symptom has been uncovered, backtrack manually until the site of the cause is found. Unfortunately, as the number of source code lines increases, the number of potential backward paths may become unmanageably large.
Cause elimination: data related to the error occurrence are organized to isolate potential causes. A "cause hypothesis" is devised and the data are used to prove or disprove the hypothesis. Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each one. If the initial tests indicate that a particular cause hypothesis shows promise, the data are refined in an attempt to isolate the bug.

Debugging Tools
Debugging compilers
Dynamic debugging aids ("tracers")
Automatic test case generators
Memory dumps
Cross-reference maps
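As a minimal sketch of the brute-force and cause-elimination approaches described above, consider a hypothetical function whose reported symptom is that invoice totals are occasionally one cent short; the function, the data and the hypothesis are invented for the example:

class DebugExample {
    // Suspect function: the symptom is that totals are occasionally one cent short.
    static long totalInCents(double[] pricesInDollars) {
        long cents = 0;
        for (double price : pricesInDollars) {
            // Brute force: run-time trace of every intermediate value.
            System.err.println("trace: price=" + price + " adds " + (long) (price * 100));
            cents += (long) (price * 100);   // truncation, not rounding
        }
        return cents;
    }

    public static void main(String[] args) {
        // Cause elimination: hypothesise that the cent is lost by truncating price * 100,
        // then test the hypothesis on a value that is inexact in binary floating point.
        double price = 0.29;
        System.err.println("0.29 * 100   = " + (price * 100));        // 28.999999999999996
        System.err.println("truncated to = " + (long) (price * 100)); // 28: hypothesis confirmed
        System.out.println("total = " + totalInCents(new double[] { 0.29, 1.10 })); // 138, not 139
        // The fix, once the cause is isolated, is Math.round(price * 100) instead of the cast.
    }
}

Note that this is also an example of a symptom caused by a non-error (round-off behaviour of floating-point arithmetic), as listed under the characteristics of bugs above.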
