
First Edition, 2009

ISBN 978 81 907188 8 2

© All rights reserved.

Published by:

Global Media
1819, Bhagirath Palace,
Chandni Chowk, Delhi-110 006
Email: globalmedia@dkpd.com
Table of Contents

1. Introduction

2. Chapter 1 - History of software engineering & Software Engineering as a Profession

3. Chapter 2 - Software design & Modeling languages for software design

4. Chapter 3 - Software development & Software Testing

5. Chapter 4 - Software development process, Computer-aided software engineering


Software quality
Introduction
Software engineering

The new Airbus A380 uses a substantial amount of software to create a "paperless" cockpit. Software engineering successfully maps and plans the millions of lines of code comprising the plane's software.

Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software.

The term software engineering first appeared at the 1968 NATO Software Engineering Conference and was meant to provoke thought regarding the "software crisis" of the time. Since then, it has continued as a profession and field of study dedicated to creating software that is of higher quality, more affordable, maintainable, and quicker to build. Since the field is still relatively young compared to its sister fields of engineering, there is still much debate about what software engineering actually is, and whether it conforms to the classical definition of engineering. It has grown organically out of the limitations of viewing software as just programming. "Software development" is a much-used term in industry that is more generic and does not necessarily subsume the engineering paradigm. Although it is questionable what impact the field has had on actual software development over the more than 40 years since its founding, its future looks bright according to Money Magazine and Salary.com, which rated "software engineering" as the best job in America in 2006.
Chapter 1
History of software engineering & Software Engineering as a Profession
History of software engineering

Software engineering has evolved steadily from its founding days in the 1940s until today in the 2000s. Applications have evolved continuously, and the ongoing goal of improving technologies and practices seeks to improve the productivity of practitioners and the quality of applications delivered to users.

Overview
There are a number of areas where the evolution of software engineering is notable:

 Emergence as a profession: By the early 1980s, software engineering had already emerged as a bona fide profession, to stand beside computer science and traditional engineering. See also software engineering professionalism.
 Role of women: In the 1940s, 1950s, and 1960s, men often filled the more prestigious and better-paying hardware engineering roles, but often delegated the writing of software to women. Grace Hopper, Jamie Fenton, and many other unsung women filled many programming jobs during the first several decades of software engineering. Today, far fewer women work in software engineering than in other professions, a situation whose cause is not clearly identified; it is often attributed to sexual discrimination, cyberculture, or bias in education. Many academic and professional organizations are trying to redress this imbalance.
 Processes: Processes have become a big part of software engineering and are
hailed for their potential to improve software and sharply criticized for their
potential to constrict programmers.
 Cost of hardware: The relative cost of software versus hardware has changed
substantially over the last 50 years. When mainframes were expensive and
required large support staffs, the few organizations buying them also had the
resources to fund large, expensive custom software engineering projects.
Computers are now much more numerous and much more powerful, which has
several effects on software. The larger market can support large projects to create
commercial off-the-shelf software, as done by companies such as Microsoft. The
cheap machines allow each programmer to have a terminal capable of fairly rapid
compilation. The programs in question can use techniques such as garbage
collection, which make them easier and faster for the programmer to write. On the
other hand, many fewer organizations are interested in employing programmers
for large custom software projects, instead using commercial off-the-shelf
software as much as possible.

The Pioneering Era


The most important development was that new computers were coming out almost every
year or two, rendering existing ones obsolete. Software people had to rewrite all their
programs to run on these new machines. Programmers did not have computers on their
desks and had to go to the "machine room". Jobs were run by signing up for machine time or by operational staff: punched cards for input were put into the machine's card reader, and results came back on the printer.

The field was so new that the idea of management by schedule was non-existent. Making
predictions of a project's completion date was almost impossible. Computer hardware
was application-specific. Scientific and business tasks needed different machines. Due to
the need to frequently translate old software to meet the needs of new machines, high-
order languages like FORTRAN, COBOL, and ALGOL were developed. Hardware
vendors gave away systems software for free as hardware could not be sold without
software. A few companies sold the service of building custom software but no software
companies were selling packaged software.

The notion of reuse flourished. As software was free, user organizations commonly gave
it away. Groups like IBM's scientific user group SHARE offered catalogs of reusable
components. Academia did not yet teach the principles of computer science. Modular
programming and data abstraction were already being used in programming.

1945 to 1965: The origins


The term software engineering first appeared in the late 1950s and early 1960s.
Programmers have always known about civil, electrical, and computer engineering and
debated what engineering might mean for software.
The NATO Science Committee sponsored two conferences on software engineering in
1968 and 1969, which gave the field its initial boost. Many believe these conferences
marked the official start of the profession of software engineering.

1965 to 1985: The software crisis


Software engineering was spurred by the so-called software crisis of the 1960s, 1970s,
and 1980s, which identified many of the problems of software development. Many
software projects ran over budget and schedule. Some projects caused property damage.
A few projects caused loss of life. The software crisis was originally defined in terms of
productivity, but evolved to emphasize quality. Some used the term software crisis to
refer to their inability to hire enough qualified programmers.

 Cost and Budget Overruns: The OS/360 operating system was a classic example.
This decade-long project from the 1960s eventually produced one of the most
complex software systems at the time. OS/360 was one of the first large (1000
programmers) software projects. Fred Brooks claims in The Mythical Man Month
that he made a multi-million dollar mistake of not developing a coherent
architecture before starting development.
 Property Damage: Software defects can cause property damage. Poor software
security allows hackers to steal identities, costing time, money, and reputations.
 Life and Death: Software defects can kill. Some embedded systems used in
radiotherapy machines failed so catastrophically that they administered lethal
doses of radiation to patients. The most famous of these failures is the Therac 25
incident.

Peter G. Neumann has kept a contemporary list of software problems and disasters. The software crisis has been slowly fizzling out, because it is unrealistic to remain in crisis mode for more than 20 years. Software engineers are accepting that the problems of software engineering are truly difficult and that only hard work over many decades can solve them.

1985 to 1989: No silver bullet


For decades, solving the software crisis was paramount to researchers and companies
producing software tools. Seemingly, they trumpeted every new technology and practice
from the 1970s to the 1990s as a silver bullet to solve the software crisis. Tools,
discipline, formal methods, process, and professionalism were touted as silver bullets:

 Tools: Especially emphasized were tools: structured programming, object-oriented programming, CASE tools, Ada, Java, documentation, standards, and the Unified Modeling Language were all touted as silver bullets.
 Discipline: Some pundits argued that the software crisis was due to the lack of
discipline of programmers.
 Formal methods: Some believed that if formal engineering methodologies were applied to software development, then production of software would become as predictable an industry as other branches of engineering. They advocated proving all programs correct.
 Process: Many advocated the use of defined processes and methodologies like the
Capability Maturity Model.
 Professionalism: This led to work on a code of ethics, licenses, and
professionalism.

In 1986, Fred Brooks published the No Silver Bullet article, arguing that no individual
technology or practice would ever make a 10-fold improvement in productivity within 10
years.

Debate about silver bullets raged over the following decade. Advocates for Ada,
components, and processes continued arguing for years that their favorite technology
would be a silver bullet. Skeptics disagreed. Eventually, almost everyone accepted that
no silver bullet would ever be found. Yet, claims about silver bullets pop up now and
again, even today.

Some interpret no silver bullet to mean that software engineering failed. The search for a
single key to success never worked. All known technologies and practices have only
made incremental improvements to productivity and quality. Yet, there are no silver
bullets for any other profession, either. Others interpret no silver bullet as proof that
software engineering has finally matured and recognized that projects succeed due to
hard work.

However, it could also be said that there are, in fact, a range of silver bullets today,
including lightweight methodologies (see "Project management"), spreadsheet
calculators, customized browsers, in-site search engines, database report generators,
integrated design-test coding-editors with memory/differences/undo, and specialty shops
that generate niche software, such as information websites, at a fraction of the cost of
totally customized website development. Nevertheless, the field of software engineering
appears too complex and diverse for a single "silver bullet" to improve most issues, and
each issue accounts for only a small portion of all software problems.

1990 to 1999: Prominence of the Internet


The rise of the Internet led to very rapid growth in the demand for international
information display/e-mail systems on the World Wide Web. Programmers were required
to handle illustrations, maps, photographs, and other images, plus simple animation, at a
rate never before seen, with few well-known methods to optimize image display/storage
(such as the use of thumbnail images).

The growth of browser usage, running on HTML, changed the way in which information display and retrieval was organized. The widespread network connections led to the growth of international computer viruses on MS Windows computers and to efforts to prevent them, and the vast proliferation of spam e-mail became a major design issue in e-mail systems, flooding communication channels and requiring semi-automated pre-screening. Keyword-search systems evolved into web-based search engines, and many software systems had to be re-designed for international searching, depending on Search Engine Optimization (SEO) techniques. Human natural-language translation systems were needed to attempt to translate the information flow in multiple foreign languages, with many software systems being designed for multi-language usage, based on design concepts from human translators. Typical computer-user bases went from hundreds or thousands of users to, often, many millions of international users.

2000 to Present: Lightweight Methodologies


With the expanding demand for software in many smaller organizations, the need for inexpensive software solutions led to the growth of simpler, faster methodologies that could take running software from requirements to deployment more quickly and easily. The use of rapid prototyping evolved into entire lightweight methodologies, such as Extreme Programming (XP), which attempted to simplify many areas of software engineering, including requirements gathering and reliability testing, for the growing number of small software systems. Very large software systems still used heavily documented methodologies, with many volumes in the documentation set; however, smaller systems had a simpler, faster alternative approach to managing the development and maintenance of software calculations and algorithms, information storage/retrieval, and display.

Current trends in software engineering

Software engineering is a young discipline, and is still developing. The directions in which software engineering is developing include:

Aspects
Aspects help software engineers deal with quality attributes by providing tools to add or remove boilerplate code from many areas in the source code. Aspects describe how all objects or functions should behave in particular circumstances. For example, aspects can add debugging, logging, or locking control into all objects of particular types (a small illustrative sketch in Java follows this list). Researchers are currently working to understand how to use aspects to design general-purpose code. Related concepts include generative programming and templates.
Agile
Agile software development guides software development projects that evolve
rapidly with changing expectations and competitive markets. Proponents of this
method believe that heavy, document-driven processes (like TickIT, CMM and
ISO 9000) are fading in importance. Some people believe that companies and
agencies export many of the jobs that can be guided by heavy-weight processes.
Related concepts include Extreme Programming, Scrum, and Lean software
development.
Experimental
Experimental software engineering is a branch of software engineering interested
in devising experiments on software, in collecting data from the experiments, and
in devising laws and theories from this data. Proponents of this method advocate that the nature of software is such that we can advance our knowledge of software only through experiments.
Model-driven
Model-driven design develops textual and graphical models as primary design artifacts. Development tools are available that use model transformation and code generation to generate well-organized code fragments that serve as a basis for producing complete applications (a toy code-generation sketch also follows this list).
Software Product Lines
Software Product Lines is a systematic way to produce families of software
systems, instead of creating a succession of completely individual products. This
method emphasizes extensive, systematic, formal code reuse, to try to
industrialize the software development process.
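To make the aspect idea above a little more concrete, here is a minimal sketch in plain Java that uses a dynamic proxy to add logging around every call on an object without touching its source. The AccountService interface and its implementation are hypothetical names invented for this illustration; real aspect-oriented tools such as AspectJ provide far more general mechanisms.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical business interface used only for illustration.
interface AccountService {
    void deposit(double amount);
}

class AccountServiceImpl implements AccountService {
    public void deposit(double amount) {
        System.out.println("depositing " + amount);   // ordinary business logic
    }
}

public class LoggingAspectDemo {
    // Wraps any object implementing the given interface so that every call
    // is logged - a crude stand-in for a logging aspect.
    @SuppressWarnings("unchecked")
    static <T> T withLogging(T target, Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("entering " + method.getName());
            Object result = method.invoke(target, args);
            System.out.println("leaving " + method.getName());
            return result;
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, handler);
    }

    public static void main(String[] args) {
        AccountService service =
                withLogging(new AccountServiceImpl(), AccountService.class);
        service.deposit(100.0);   // logging happens without modifying AccountServiceImpl
    }
}

The cross-cutting concern (logging) is kept in one place instead of being scattered as boilerplate through every business class, which is the kind of separation aspects aim at.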
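In the same spirit, the model-driven entry above can be illustrated with a toy code generator: a tiny in-memory "model" (an entity name plus field names and types) is transformed into Java source text. The model content and class names are invented purely for illustration; real model-driven tools start from richer models such as UML and generate far more complete artifacts.

import java.util.LinkedHashMap;
import java.util.Map;

// A toy model-to-code transformation: the "model" is just a map of
// field names to types, and the "generated artifact" is Java source text.
public class TinyCodeGenerator {
    public static String generate(String entity, Map<String, String> fields) {
        StringBuilder src = new StringBuilder("public class " + entity + " {\n");
        fields.forEach((name, type) ->
                src.append("    private ").append(type).append(' ')
                   .append(name).append(";\n"));
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();  // preserves field order
        fields.put("name", "String");
        fields.put("age", "int");
        System.out.println(generate("Person", fields));      // prints a Java class skeleton
    }
}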

The Future of Software Engineering conference (FOSE), held at ICSE 2000, documented
the state of the art of SE in 2000 and listed many problems to be solved over the next
decade. The FOSE tracks at the ICSE 2000 and the ICSE 2007 conferences also help
identify the state of the art in software engineering.

Software engineering today

The profession is trying to define its boundary and content. The Software Engineering Body of Knowledge (SWEBOK) was tabled as an ISO standard in 2006 (ISO/IEC TR 19759).

In 2006, Money Magazine and Salary.com rated software engineering as the best job in
America in terms of growth, pay, stress levels, flexibility in hours and working
environment, creativity, and how easy it is to enter and advance in the field.

Profession
Software engineer
A software engineer is a person who applies the principles of software engineering to
the design, development, testing, and evaluation of the software and systems that make
computers or anything containing software, such as chips, work.

Overview
Prior to the mid-1990s, most software practitioners called themselves programmers or developers, regardless of their actual jobs. Many people prefer to call themselves software developer or programmer, because the meaning of those terms is widely agreed upon, while the meaning of software engineer is still being debated.

The term programmer has often been used as a pejorative term to refer to those without
the tools, skills, education, or ethics to write good quality software. In response, many
practitioners called themselves software engineers to escape the stigma attached to the
word programmer. In many companies, the titles programmer and software developer
were changed to software engineer, for many categories of programmers.

These terms cause confusion, because some deny any difference (arguing that everyone does essentially the same thing with software) while others use the terms to create a distinction (because the terms mean completely different jobs).

A state of the art

In 2004, the U.S. Bureau of Labor Statistics counted 760,840 software engineers holding jobs in the U.S.; in the same time period there were some 1.4 million practitioners employed in the U.S. in all other engineering disciplines combined. The title software engineer is used very liberally in the corporate world. Very few of the practicing
software engineers actually hold Engineering degrees from accredited universities. In
fact, according to the Association for Computing Machinery, "most people who now
function in the U.S. as serious software engineers have degrees in computer science, not
in software engineering". See also Debates within software engineering and
Controversies over the term Engineer.

Regulatory classification

The U.S. Bureau of Labor Statistics classifies computer software engineers as a subcategory of "computer specialists", along with occupations such as computer scientist, programmer, and network administrator. The BLS classifies all other engineering disciplines, including computer hardware engineers, as "engineers".

The U.K. has seen the alignment of the Information Technology Professional and the
Engineering Professionals.

Software engineering in Canada has seen some contests in the courts over the use of the title "Software Engineer". The Canadian Council of Professional Engineers (C.C.P.E. or
"Engineers Canada") will not grant a "Professional Engineer" status/license to anyone
who has not completed a recognized academic engineering program. Engineers qualified
outside Canada are similarly unable to obtain a "Professional Engineer" license. Since
2001, the Canadian Engineering Accreditation Board has accredited several university
programs in software engineering, allowing graduates to apply for a professional
engineering licence once the other prerequisites are obtained, although this does nothing
to help IT professionals using the title with degrees in other fields (such as computer
science).
Some U.S. states regulate the use of terms such as "computer engineer" and even "software engineer"; these states include at least Texas and Florida.
Texas even goes so far as to ban anyone from writing any real-time code without an
engineering license.

Education
About half of all practitioners today have computer science degrees. A small, but
growing, number of practitioners have software engineering degrees. In 1987, Imperial College London introduced the first three-year software engineering Bachelor's degree in the UK and the world; the following year, the University of Sheffield established a similar programme. In 1996, Rochester Institute of Technology established the first software engineering Bachelor's degree program in the United States; however, it did not obtain ABET accreditation until 2003, at the same time as Clarkson University, Milwaukee School of Engineering, and Mississippi State University.

Since then, software engineering undergraduate degrees have been established at many
universities. A standard international curriculum for undergraduate software engineering
degrees was recently defined by the CCSE. As of 2004, in the U.S., about 50 universities
offer software engineering degrees, which teach both computer science and engineering
principles and practices. The first software engineering Master's degree was established at
Seattle University in 1979. Since then graduate software engineering degrees have been
made available from many more universities. Likewise in Canada, the Canadian
Engineering Accreditation Board (CEAB) of the Canadian Council of Professional
Engineers has recognized several software engineering programs.

In 1998, the US Naval Postgraduate School (NPS) established the first doctorate program
in Software Engineering in the world. Additionally, many online advanced degrees in
Software Engineering have appeared such as the Master of Science in Software
Engineering (MSE) degree offered through the Computer Science and Engineering
Department at California State University, Fullerton. Steve McConnell opines that
because most universities teach computer science rather than software engineering, there
is a shortage of true software engineers. ETS University and UQAM were mandated by
IEEE to develop the Software Engineering Body of Knowledge (SWEBOK), which has
become an ISO standard describing the body of knowledge covered by a software
engineer.

Other degrees

In business, some software engineering practitioners have MIS degrees. In embedded systems, some have electrical or computer engineering degrees, because embedded
software often requires a detailed understanding of hardware. In medical software,
practitioners may have medical informatics, general medical, or biology degrees.
Some practitioners have mathematics, science, engineering, or technology degrees. Some
have philosophy (logic in particular) or other non-technical degrees. And, others have no
degrees. For instance, Barry Boehm earned degrees in mathematics.


Prizes

There are several prizes in the field of software engineering:

 The CODiE Awards are yearly awards issued by the Software and Information Industry Association for excellence in software development within the software industry.
 Jolt Awards are awards in the software industry.
 Stevens Award is a software engineering award given in memory of Wayne
Stevens.

Debates within software engineering

Controversies over the term Engineer

Some people believe that software engineering implies a certain level of academic training, professional discipline, and adherence to formal processes that often are not applied in cases of software development. A common analogy is that working in construction does not make one a civil engineer, and so writing code does not make one a software engineer. It is disputed by some, in particular by the Canadian body Professional Engineers Ontario (PEO), whether the field is mature enough to warrant the title "engineering". The PEO's position was that "software engineering" was not an appropriate name for the field since those who practiced in the field and called themselves "software engineers" were not properly licensed professional engineers, and that they should therefore not be allowed to use the name.

The status of software engineering

The word engineering within the term software engineering causes a lot of confusion.
The wrangling over the status of software engineering (between traditional engineers and
computer scientists) can be interpreted as a fight over control of the word engineering.
Traditional engineers question whether software engineers can legally use the term.

Traditional engineers (especially civil engineers and the NSPE) claim that they have
special rights over the term engineering, and for anyone else to use it requires their
approval. In the mid-1990s, the NSPE sued to prevent anyone from using the job title
software engineering. The NSPE won their lawsuit in 48 states. However, SE
practitioners, educators, and researchers ignored the lawsuits and called themselves
software engineers anyway. The U.S. Bureau of Labor Statistics uses the term software
engineer, too. The term engineering is much older than any regulatory body, so many believe that traditional engineers have few rights to control the term. As of 2007, however, even the NSPE appears to have softened its stance towards software engineering and, following on the heels of several overseas precedents, is investigating the possibility of licensing software engineers in consultation with the IEEE, NCEES, and other groups "for the protection of the public health, safety, and welfare".

In Canada, the use of the words 'engineer' and 'engineering' is controlled in each province by self-regulating professional engineering organizations, often aligned with geologists and geophysicists, and tasked with enforcement of the governing legislation.
The intent is that any individual holding themselves out as an engineer (or geologist or
geophysicist) has been verified to have been educated to a certain accredited level, and
their professional practice is subject to a code of ethics and peer scrutiny. This system
was originally designed for the practise of engineering where public safety is a concern,
but extends to other branches of engineering as well, including electronics and software.

In New Zealand, IPENZ, the professional engineering organization entrusted by the New Zealand government with legal power to license and regulate chartered engineers (CPEng), recognizes software engineering as a legitimate branch of professional engineering and accepts applications from software engineers to obtain chartered status, provided they have a tertiary degree in approved subjects. Software engineering is included, but computer science is normally not.

Employment

Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and as a result most software engineers hold computer science degrees.

Most software engineers work as employees or contractors. Software engineers work with businesses, government agencies (civilian or military), and non-profit organizations.
Some software engineers work for themselves as freelancers. Some organizations have
specialists to perform each of the tasks in the software development process. Other
organizations require software engineers to do many or all of them. In large projects,
people may specialize in only one role. In small projects, people may fill several or all
roles at the same time. Specializations include: in industry (analysts, architects,
developers, testers, technical support, managers) and in academia (educators,
researchers).

There is considerable debate over the future employment prospects for software
engineers and other IT professionals. For example, an online futures market called the
"ITJOBS Future of IT Jobs in America" attempts to answer whether there will be more IT
jobs, including software engineers, in 2012 than there were in 2002.

Certification

Professional certification of software engineers is a contentious issue. Some see it as a tool to improve professional practice; "The only purpose of licensing software engineers is to protect the public".

The ACM had a professional certification program in the early 1980s, which was
discontinued due to lack of interest. The ACM examined the possibility of professional
certification of software engineers in the late 1990s, but eventually decided that such
certification was inappropriate for the professional industrial practice of software
engineering. As of 2006, the IEEE had certified over 575 software professionals. In the
U.K. the British Computer Society has developed a legally recognized professional
certification called Chartered IT Professional (CITP), available to fully qualified
Members (MBCS). In Canada the Canadian Information Processing Society has
developed a legally recognized professional certification called Information Systems
Professional (ISP). The Software Engineering Institute offers certification on specific topics such as security, process improvement, and software architecture.

Most certification programs in the IT industry are oriented toward specific technologies,
and are managed by the vendors of these technologies. These certification programs are
tailored to the institutions that would employ people who use these technologies.

Impact of globalization

Many students in the developed world have avoided degrees related to software
engineering because of the fear of offshore outsourcing (importing software products or
services from other countries) and of being displaced by foreign visa workers. Although
government statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected. Often one is
expected to start out as a computer programmer before being promoted to software
engineer. Thus, the career path to software engineering may be rough, especially during
recessions.

Some career counselors suggest a student also focus on "people skills" and business skills
rather than purely technical skills because such "soft skills" are allegedly more difficult to
offshore. It is the quasi-management aspects of software engineering that appear to be
what has kept it from being impacted by globalization.

Education
A knowledge of programming is the main prerequisite to becoming a software engineer,
but it is not sufficient. Many software engineers have degrees in Computer Science due to
the lack of software engineering programs in higher education. However, this has started
to change with the introduction of new software engineering degrees, especially in post-
graduate education. A standard international curriculum for undergraduate software
engineering degrees was defined by the CCSE.

Steve McConnell opines that because most universities teach computer science rather
than software engineering, there is a shortage of true software engineers. In 2004 the
IEEE Computer Society produced the SWEBOK, which has become an ISO standard
describing the body of knowledge covered by a software engineer.

The European Commission, within the Erasmus Mundus Programme, offers a European master's degree called the European Master on Software Engineering for students from Europe and elsewhere. This is a joint program (double degree) involving four universities in Europe.
Chapter 2
Software design & Modeling languages for software design

Software design

Software design is a process of problem-solving and planning for a software solution. After the purpose and specifications of software are determined, software developers will design or employ designers to develop a plan for a solution. It includes low-level component and algorithm implementation issues as well as the architectural view.

Overview
The software requirements analysis (SRA) step of a software development process yields
specifications that are used in software engineering. If the software is "semiautomated" or
user-centered, software design may involve user experience design yielding a storyboard
to help determine those specifications. If the software is completely automated (meaning
no user or user interface), a software design may be as simple as a flow chart or text
describing a planned sequence of events. There are also semi-standard methods like
Unified Modeling Language and Fundamental modeling concepts. In either case some
documentation of the plan is usually the product of the design.

A software design may be platform-independent or platform-specific, depending on the availability of the technology called for by the design.

Software design topics


Design Concepts

The design concepts provide the software designer with a foundation from which more
sophisticated methods can be applied. A set of fundamental design concepts has evolved.
They are:

 1. Abstraction - Abstraction is the process or result of generalization by reducing the information content of a concept or an observable phenomenon, typically in order to retain only information which is relevant for a particular purpose.
 2. Refinement - It is the process of elaboration. A hierarchy is developed by decomposing a macroscopic statement of function in a stepwise fashion until programming language statements are reached. In each step, one or several instructions of a given program are decomposed into more detailed instructions. Abstraction and refinement are complementary concepts.
 3. Modularity - Software architecture is divided into components called modules.
 4. Software Architecture - It refers to the overall structure of the software and the ways in which that structure provides conceptual integrity for a system. A software architecture is the development work product that gives the highest return on investment with respect to quality, schedule, and cost.
 5. Control Hierarchy - A program structure that represents the organization of program components and implies a hierarchy of control.
 6. Structural Partitioning - The program structure can be divided both horizontally and vertically. Horizontal partitions define separate branches of the modular hierarchy for each major program function. Vertical partitioning suggests that control and work should be distributed top-down in the program structure.
 7. Data Structure - It is a representation of the logical relationship among individual elements of data.
 8. Software Procedure - It focuses on the processing of each module individually.
 9. Information Hiding - Modules should be specified and designed so that information contained within a module is inaccessible to other modules that have no need for such information (a short code sketch illustrating this appears after this list).
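As a small illustration of the modularity and information hiding concepts above, the following Java sketch keeps a module's internal state private and exposes only a narrow public interface. The Counter class is a made-up example, not something taken from the text.

// Information hiding: other modules can call the public methods but cannot
// see or depend on the private representation, so that representation can
// change later without affecting any client module.
public class Counter {
    private int value;            // hidden state, invisible outside this module

    public void increment() {     // part of the public interface
        value++;
    }

    public int current() {        // part of the public interface
        return value;
    }
}

Because callers depend only on increment() and current(), the internal representation could later be replaced (for example, by a long or a thread-safe counter) without changing any code outside the module.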

Design considerations

There are many aspects to consider in the design of a piece of software. The importance
of each should reflect the goals the software is trying to achieve. Some of these aspects
are:

 Compatibility - The software is able to operate with other products that are designed for interoperability with it. For example, a piece of software may be backward-compatible with an older version of itself.
 Extensibility - New capabilities can be added to the software without major
changes to the underlying architecture.
 Fault-tolerance - The software is resistant to and able to recover from component
failure.
 Maintainability - The software can be restored to a specified condition within a
specified period of time. For example, antivirus software may include the ability
to periodically receive virus definition updates in order to maintain the software's
effectiveness.
 Modularity - the resulting software comprises well defined, independent
components. That leads to better maintainability. The components could be then
implemented and tested in isolation before being integrated to form a desired
software system. This allows division of work in a software development project.
 Packaging - Printed material such as the box and manuals should match the style
designated for the target market and should enhance usability. All compatibility
information should be visible on the outside of the package. All components
required for use should be included in the package or specified as a requirement
on the outside of the package.
 Reliability - The software is able to perform a required function under stated
conditions for a specified period of time.
 Reusability - the modular components designed should capture the essence of the
functionality expected out of them and no more or less. This single-minded
purpose renders the components reusable wherever there are similar needs in
other designs.
 Robustness - The software is able to operate under stress or tolerate unpredictable or invalid input (a small code sketch follows this list). For example, it can be designed with a resilience to low memory conditions.
 Security - The software is able to withstand hostile acts and influences.
 Usability - The software user interface must be intuitive (and often aesthetically
pleasing) to its target user/audience. Default values for the parameters must be
chosen so that they are a good choice for the majority of the users. In many cases,
online help should be included and also carefully designed.
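To illustrate the robustness consideration above in code, here is a small, hypothetical Java sketch of defensive input handling: rather than crashing on malformed or out-of-range input, the method reports failure through an empty Optional. The SafeParser name and the port-number example are invented for illustration only.

import java.util.Optional;

// Robustness: invalid input is tolerated and reported, never allowed to crash the program.
public class SafeParser {
    public static Optional<Integer> parsePort(String text) {
        if (text == null) {
            return Optional.empty();                 // missing input handled gracefully
        }
        try {
            int port = Integer.parseInt(text.trim());
            if (port < 1 || port > 65535) {
                return Optional.empty();             // out of range: reject, do not crash
            }
            return Optional.of(port);
        } catch (NumberFormatException e) {
            return Optional.empty();                 // malformed input handled gracefully
        }
    }

    public static void main(String[] args) {
        System.out.println(parsePort("8080"));       // Optional[8080]
        System.out.println(parsePort("not a port")); // Optional.empty
    }
}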

Modeling language
A modeling language is any artificial language that can be used to express information or
knowledge or systems in a structure that is defined by a consistent set of rules. The rules
are used for interpretation of the meaning of components in the structure. A modeling
language can be graphical or textual. Examples of graphical modelling languages for
software design are:

 Business Process Modeling Notation (BPMN, and the XML form BPML) is an
example of a Process Modeling language.
 EXPRESS and EXPRESS-G (ISO 10303-11) is an international standard general-
purpose data modeling language.
 Extended Enterprise Modeling Language (EEML) is commonly used for business
process modeling across a number of layers.
 Flowchart is a schematic representation of an algorithm or a stepwise process.
 Fundamental Modeling Concepts (FMC) is a modeling language for software-intensive systems.
 IDEF is a family of modeling languages, the most notable of which include
IDEF0 for functional modeling, IDEF1X for information modeling, and IDEF5
for modeling ontologies.
 Jackson Structured Programming (JSP) is a method for structured programming based on correspondences between data stream structure and program structure.
 LePUS3 is an object-oriented visual Design Description Language and a formal
specification language that is suitable primarily for modelling large object-
oriented (Java, C++, C#) programs and design patterns.
 Unified Modeling Language (UML) is a general modeling language to describe software both structurally and behaviorally. It has a graphical notation and allows for extension with a Profile (UML).
 Alloy (specification language) is a general purpose specification language for
expressing complex structural constraints and behavior in a software system. It
provides a concise language based on first-order relational logic.

Business Process Modeling Notation

Example of a Business Process Modelling Notation of a process with a normal flow.

Business Process Modelling Notation (BPMN) is a graphical representation for specifying business processes in a workflow.

BPMN was developed by the Business Process Management Initiative (BPMI) and has been maintained by the Object Management Group since the two organizations merged in 2005. As of January 2009, the current version of BPMN is 1.2, with a major revision process for BPMN 2.0 in progress.

Overview
The Business Process Modeling Notation (BPMN) is a standard for business process modeling, and provides a graphical notation for specifying business processes in a Business Process Diagram (BPD), based on a flowcharting technique very similar to activity diagrams from the Unified Modeling Language (UML). The objective of BPMN is to support business process management for both technical users and business users by providing a notation that is intuitive to business users yet able to represent complex process semantics. The BPMN specification also provides a mapping between the graphics of the notation and the underlying constructs of execution languages, particularly the Business Process Execution Language.

The primary goal of BPMN is to provide a standard notation that is readily understandable by all business stakeholders. These business stakeholders include the business analysts who create and refine the processes, the technical developers responsible for implementing the processes, and the business managers who monitor and manage the processes. Consequently, BPMN is intended to serve as a common language to bridge the communication gap that frequently occurs between business process design and implementation.

Currently there are several competing standards for business process modeling languages
used by modeling tools and processes. Widespread adoption of the BPMN will help unify
the expression of basic business process concepts (e.g., public and private processes,
choreographies), as well as advanced process concepts (e.g., exception handling,
transaction compensation).

BPMN topics
Scope

BPMN will be constrained to support only the concepts of modeling that are applicable to
business processes. This means that other types of modeling done by organizations for
non-business purposes will be out of scope for BPMN. For example, the modeling of the
following will not be a part of BPMN:

 Organizational structures
 Functional breakdowns
 Data models

In addition, while BPMN will show the flow of data (messages), and the association of
data artifacts to activities, it is not a data flow diagram.

Elements

The modeling in BPMN is done with simple diagrams built from a small set of graphical elements. This should make it easy for business users as well as developers to understand the flow and the process. The four basic categories of elements are as follows:

Flow Objects
Events, Activities, Gateways
Connecting Objects
Sequence Flow, Message Flow, Association
Swimlanes
Pool, Lane
Artifacts (Artefacts)
Data Object, Group, Annotation

These four categories of elements make it possible to create simple business process diagrams (BPDs). A BPD also permits modelers to define their own types of Flow Objects or Artifacts to make the diagram more understandable.

Flow objects and connecting objects


Flow objects are the main describing elements within BPMN, and consist of three core
elements (Events, Activities, and Gateways):

Event
An Event is represented with a circle and denotes something that happens (rather
than Activities which are something that is done). Icons within the circle denote
the type of event (e.g. envelope for message, clock for time). Events are also
classified as Catching (as in, they might catch an incoming message to Start the
process) or Throwing (as in, they might throw a message at the End of the
process).
Start event
Acts as a trigger for the process; indicated by a single narrow border; and can
only be Catch, so is shown with an open (outline) icon.
End event
Represents the result of a process; indicated by a single thick or bold border; and
can only Throw, so is shown with a solid icon.
Intermediate event
Represents something that happens between the start and end events; is indicated
by a tramline border; and can Throw or Catch (using solid or open icons as
appropriate) - for example, a task could flow to an event that throws a message
across to another pool and a subsequent event waits to catch the response before
continuing.
Activity
An Activity is represented with a rounded-corner rectangle and describes the kind
of work which must be done.
Task
A task represents a single unit of work that is not or cannot be broken down to a further level of business process detail without diagramming the steps in a procedure (which is not the purpose of BPMN).
Sub-process
Used to hide or reveal additional levels of business process detail - when
collapsed a sub-process is indicated by a plus sign against the bottom line of the
rectangle; when expanded the rounded rectangle expands to show all flow objects,
connecting objects, and artefacts.
Has its own self-contained start and end events, and sequence flows from the
parent process must not cross the boundary.
Transaction
A form of sub-process in which all contained activities must be treated as a
whole, i.e., they must all be completed to meet an objective, and if any one of
them fails they must all be compensated (undone). Transactions are differentiated
from expanded sub-processes by being surrounded by a tramline border.
Gateway
A Gateway is represented with a diamond shape and will determine forking and
merging of paths depending on the conditions expressed.

Flow objects are connected to each other using Connecting objects, which consist of
three types (Sequences, Messages, and Associations):

Sequence Flow
A Sequence Flow is represented with a solid line and arrowhead and shows in which order the activities will be performed. The sequence flow may also have a symbol at its start: a small diamond indicates one of a number of conditional flows from an activity, while a diagonal slash indicates the default flow from a decision or activity with conditional flows.
Message Flow
A Message Flow is represented with a dashed line, an open circle at the start, and
an open arrowhead at the end. It tells us what messages flow across organisational
boundaries (i.e., between pools). A message flow can never be used to connect
activities or events within the same pool.
Association
An Association is represented with a dotted line. It is used to associate an Artifact
or text to a Flow Object, and can indicate some directionality using an open
arrowhead (toward the artifact to represent a result, from the artifact to represent
an input, and both to indicate it is read and updated). No directionality would be
used when the Artifact or text is associated with a sequence or message flow (as
that flow already shows the direction).

Swimlanes and artifacts

Swim lanes are a visual mechanism of organising and categorising activities, based on cross-functional flowcharting, and in BPMN consist of two types:

Pool
Represents major participants in a process, typically separating different
organisations. A pool contains one or more lanes (like a real swimming pool). A
pool can be open (i.e., showing internal detail) when it is depicted as a large
rectangle showing one or more lanes, or collapsed (i.e., hiding internal detail)
when it is depicted as an empty rectangle stretching the width or height of the
diagram.
Lane
Used to organise and categorise activities within a pool according to function or
role, and depicted as a rectangle stretching the width or height of the pool. A lane
contains the Flow Objects, Connecting Objects and Artifacts.

Artifacts allow developers to bring some more information into the model/diagram. In
this way the model/diagram becomes more readable. There are three pre-defined Artifacts
and they are:

Data Objects
Data Objects show the reader which data is required or produced in an activity.
Group
A Group is represented with a rounded-corner rectangle and dashed lines. The
Group is used to group different activities but does not affect the flow in the
diagram.
Annotation
An Annotation is used to give the reader of the model/diagram additional explanatory text.

Types of Business Process Diagram

Example diagrams: A Process with Normal Flow; Discussion Cycle; E-Mail Voting Process; Collect Votes.

Within and between these three BPMN sub-models, many types of Diagrams can be
created. The following are the types of business processes that can be modeled with
BPMN (those with asterisks may not map to an executable language):

 High-level private process activities (not functional breakdown)*


 Detailed private business process
 As-is or old business process*
 To-be or new business process
 Detailed private business process with interactions to one or more external entities
(or “Black Box” processes)
 Two or more detailed private business processes interacting
 Detailed private business process relationship to Abstract Process
 Detailed private business process relationship to Collaboration Process
 Two or more Abstract Processes*
 Abstract Process relationship to Collaboration Process*
 Collaboration Process only (e.g., ebXML BPSS or RosettaNet)*
 Two or more detailed private business processes interacting through their Abstract
Processes
 Two or more detailed private business processes interacting through a
Collaboration Process
 Two or more detailed private business processes interacting through their Abstract
Processes and a Collaboration Process

BPMN is designed to allow all the above types of Diagrams. However, it should be
cautioned that if too many types of sub-models are combined, such as three or more
private processes with message flow between each of them, then the Diagram may
become too hard for someone to understand. Thus, we recommend that the modeler pick
a focused purpose for the BPD, such as a private process, or a collaboration process.

BPMN 2.0

The Business Process Model and Notation is the name of the working proposal for BPMN 2.0. The vision of BPMN 2.0 is to have one single specification for a new Business Process Model and Notation that defines the notation, metamodel, and interchange format, but with a modified name that still preserves the "BPMN" brand. The proposed features include:

 Aligning BPMN with the Business Process Definition Metamodel (BPDM) to form a single consistent language
 Enabling the exchange of business process models and their diagram layouts among process modeling tools to preserve semantic integrity
 Expanding BPMN to allow model orchestrations and choreographies as stand-alone or integrated models
 Supporting the display and interchange of different perspectives on a model that allow a user to focus on specific concerns
 Serializing BPMN and providing XML schemas for model transformation and to extend BPMN towards business modeling and executive decision support

A beta version of the specification is expected to be released in September 2009, with the final release scheduled for June 2010.
Uses of BPMN
Business process modeling is used to communicate a wide variety of information to a wide variety of audiences. BPMN is designed to cover this wide range of usage and allows modeling of end-to-end business processes, so that the viewer of a Diagram can easily differentiate between the sections of a BPMN Diagram. There are three basic types of sub-models within an end-to-end BPMN model: Private (internal) business processes, Abstract (public) processes, and Collaboration (global) processes:

Private (internal) business processes
Private business processes are those internal to a specific organization and are the
type of processes that have been generally called workflow or BPM processes. If
swim lanes are used then a private business process will be contained within a
single Pool. The Sequence Flow of the Process is therefore contained within the
Pool and cannot cross the boundaries of the Pool. Message Flow can cross the
Pool boundary to show the interactions that exist between separate private
business processes.
Abstract (public) processes
This represents the interactions between a private business process and another
process or participant. Only those activities that communicate outside the private
business process are included in the abstract process. All other “internal”
activities of the private business process are not shown in the abstract process.
Thus, the abstract process shows to the outside world the sequence of messages
that are required to interact with that business process. Abstract processes are
contained within a Pool and can be modeled separately or within a larger BPMN
Diagram to show the Message Flow between the abstract process activities and
other entities. If the abstract process is in the same Diagram as its corresponding
private business process, then the activities that are common to both processes can
be associated.
Collaboration (global) processes
A collaboration process depicts the interactions between two or more business
entities. These interactions are defined as a sequence of activities that represent
the message exchange patterns between the entities involved. Collaboration
processes may be contained within a Pool and the different participant business
interactions are shown as Lanes within the Pool. In this situation, each Lane
would represent two participants and a direction of travel between them. They
may also be shown as two or more Abstract Processes interacting through
Message Flow (as described in the previous section). These processes can be
modeled separately or within a larger BPMN Diagram to show the Associations
between the collaboration process activities and other entities. If the collaboration
process is in the same Diagram as one of its corresponding private business
process, then the activities that are common to both processes can be associated.

Weaknesses of BPMN
The weaknesses of BPMN could relate to:

 ambiguity and confusion in sharing BPMN models
 support for routine work
 support for knowledge work, and
 converting BPMN models to executable environments

EXPRESS (data modeling language)

Fig 1. Requirements of a database for an audio compact disc (CD) collection, presented
in EXPRESS-G notation.

EXPRESS is a standard data modeling language for product data. EXPRESS is formalized
in the ISO Standard for the Exchange of Product model data, STEP (ISO 10303), and
standardized as ISO 10303-11.

Overview
Data models formally define data objects and relationships among data objects for a
domain of interest. Some typical applications of data models include supporting the
development of databases and enabling the exchange of data for a particular area of
interest. Data models are specified in a data modeling language. EXPRESS is a data
modeling language defined in ISO 10303-11, the EXPRESS Language Reference
Manual.

An EXPRESS data model can be defined in two ways, textually and graphically. For
formal verification and as input for tools such as SDAI the textual representation within
an ASCII file is the most important one. The graphical representation on the other hand is
often more suitable for human use such as explanation and tutorials. The graphical
representation, called EXPRESS-G, is not able to represent all details that can be
formulated in the textual form.

EXPRESS is similar to programming languages such as Pascal. Within a SCHEMA various
datatypes can be defined, together with structural constraints and algorithmic rules. A main
feature of EXPRESS is the possibility to formally validate a population of datatypes, that is,
to check that all the structural and algorithmic rules are satisfied.

EXPRESS-G

EXPRESS-G is a standard graphical notation for information models. It is a useful
companion to the EXPRESS language for displaying entity and type definitions,
relationships and cardinality. This graphical notation supports a subset of the EXPRESS
language. One of the advantages of using EXPRESS-G over EXPRESS is that the
structure of a data model can be presented in a more understandable manner. A
disadvantage of EXPRESS-G is that complex constraints cannot be formally specified.
Figure 1 is an example: the data model presented in the figure could be used to specify
the requirements of a database for an audio compact disc (CD) collection.

Simple example
SCHEMA Family;

ENTITY Person
   ABSTRACT SUPERTYPE OF (ONEOF (Male, Female));
   name: STRING;
   mother: OPTIONAL Female;
   father: OPTIONAL Male;
END_ENTITY;

ENTITY Female
   SUBTYPE OF (Person);
END_ENTITY;

ENTITY Male
   SUBTYPE OF (Person);
END_ENTITY;

END_SCHEMA;

The data model is enclosed within the EXPRESS schema Family. It contains a supertype
entity Person with the two subtypes Male and Female. Since Person is declared to be
ABSTRACT, only occurrences of either (ONEOF) the subtype Male or Female can exist.
Every occurrence of a person has a mandatory name attribute and optional attributes
mother and father. There is a fixed style of reading for attributes of some entity type:

 a Female can play the role of mother for a Person
 a Male can play the role of father for a Person
EXPRESS Building blocks
Datatypes

EXPRESS offers a series of datatypes, with specific data type symbols of the EXPRESS-
G notation:

 Entity data type: This is the most important datatype in EXPRESS. It is covered
in more detail below. Entity datatypes can be related in two ways, in a sub-
supertype tree and/or by attributes.

 Enumeration data type: Enumeration values are simple strings such as red, green,
and blue for an rgb-enumeration. In the case that an enumeration type is declared
to be extensible it can be extended in other schemas.

 Defined data type: A defined data type further specializes another datatype. For
example, it is possible to define a datatype positive, which is of type integer with a
value greater than 0.

 Select data type: Selects define a choice or an alternative between different
options. Most commonly used are selects between different entity types; selects
that include defined types are rarer. In the case that a select type is declared to be
extensible it can be extended in other schemas.

 Simple data type


o String: This is the most often used simple type. EXPRESS strings can be
of any length and can contain any character (ISO 10646/Unicode). In
practice, however, implementations commonly restrict this.
o Binary: This data type is only very rarely used. It covers a number of bits
(not bytes). For some implementations the size is limited to 32 bit.
o Logical: Similar to the boolean datatype a logical has the possible values
TRUE and FALSE and in addition UNKNOWN.
o Boolean: With the boolean values TRUE and FALSE.
o Number: The number data type is a supertype of both integer and real.
Most implementations use a double type to represent a real_type,
even if the actual value is an integer.
o Integer: EXPRESS integers can in principle have any length, but most
implementations restrict them to a signed 32-bit value.
o Real: Ideally an EXPRESS real value is unlimited in accuracy and size.
But in practice a real value is represented by a floating point value of type
double.

 Aggregation data type: The possible kinds of aggregation_types are SET, BAG,
LIST and ARRAY. While SET and BAG are unordered, LIST and ARRAY are
ordered. A BAG may contain a particular value more than once; this is not
allowed for SET. An ARRAY is the only aggregate which may contain unset
members; this is not possible for SET, LIST and BAG. The members of an aggregate
may be of any other data type (several of these datatypes are shown in the schema sketch below).
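
As a sketch only, the following hypothetical schema fragment (the names example_types,
rgb, positive, colour_or_code and sample_point are invented purely for illustration and are
not taken from any STEP schema) shows how several of the datatypes listed above can be
declared:

SCHEMA example_types;

TYPE rgb = ENUMERATION OF (red, green, blue);   -- enumeration data type
END_TYPE;

TYPE positive = INTEGER;                        -- defined data type specializing INTEGER
WHERE
   WR1: SELF > 0;                               -- value must be greater than 0
END_TYPE;

TYPE colour_or_code = SELECT (rgb, positive);   -- select data type: a choice between types
END_TYPE;

ENTITY sample_point;                            -- entity data type
   label      : OPTIONAL STRING;                -- simple data type
   readings   : LIST [1:?] OF REAL;             -- aggregation data type (an ordered list)
   display_as : colour_or_code;
END_ENTITY;

END_SCHEMA;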

A few general points about datatypes are worth mentioning:

 Constructed datatypes can be defined within an EXPRESS schema. They are
mainly used to define entities, and to specify the type of entity attributes and
aggregate members.
 Datatypes can be used in a recursive way to build up more and more complex data
types. E.g. it is possible to define a LIST of an ARRAY of a SELECT of either
some entities or other datatypes. Whether it makes sense to define such datatypes is a
different question.
 EXPRESS defines a set of rules for how a datatype can be further specialized.
This is important for re-declared attributes of entities.
 GENERIC data types can be used for procedures, functions and abstract entities.

Entity-Attribute

Entity attributes allow "properties" to be added to entities and allow one entity to be related
to another in a specific role. The name of the attribute specifies the role. Most datatypes can
directly serve as the type of an attribute. This includes aggregations as well.

There are three different kinds of attributes: explicit, derived and inverse attributes. All of
them can be re-declared in a subtype. In addition, an explicit attribute can be re-declared
as derived in a subtype. No other change of the kind of an attribute is possible.

 Explicit attributes are those which have direct values visible in a STEP-File.
 Derived attributes get their values from an expression. In most cases the
expression refers to other attributes of THIS instance. The expression may also
use EXPRESS functions.
 Inverse attributes do not add "information" to an entity, but only name and
constrain an explicit attribute to an entity from the other end (all three kinds of attribute are illustrated in the sketch below).
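
As a minimal, hypothetical sketch of these three kinds of attributes (the entities point and
circle and their attributes are invented for illustration only), an explicit attribute holds a
stored value, a derived attribute is computed by an expression, and an inverse attribute
names the relationship from the other end:

ENTITY point;
   x, y : REAL;                                      -- explicit attributes
INVERSE
   used_as_centre : SET [0:?] OF circle FOR centre;  -- inverse attribute: the circles centred on this point
END_ENTITY;

ENTITY circle;
   centre : point;                                   -- explicit attribute relating circle to point
   radius : REAL;                                    -- explicit attribute
DERIVE
   area : REAL := PI * radius ** 2;                  -- derived attribute, computed from radius
END_ENTITY;

Note that the inverse attribute adds no new information; it only exposes, from the side of
point, the relationship already established by the explicit centre attribute of circle.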
Specific attribute symbols of the EXPRESS-G notation:

Supertypes and subtypes

An entity can be defined to be a subtype of one or several other entities (multiple
inheritance is allowed). A supertype can have any number of subtypes. It is very
common practice in STEP to build very complex sub-supertype graphs; some graphs
relate 100 or more entities to each other.

An entity instance can be constructed for either a single entity (if not abstract) or for a
complex combination of entities in such a sub-supertype graph. For big graphs the
number of possible combinations can grow to astronomical ranges. To restrict the
possible combinations, special supertype constraints such as ONEOF and TOTALOVER
were introduced. Furthermore, an entity can be declared to be abstract to enforce that no
instance can be constructed of just this entity, but only of a combination that contains a
non-abstract subtype.

Algorithmic constraints

Entities and defined data types may be further constrained with WHERE rules. WHERE
rules are also part of global rules. A WHERE rule is an expression which must evaluate
to TRUE; otherwise a population of an EXPRESS schema is not valid. Like derived
attributes, these expressions may invoke EXPRESS functions, which may further invoke
EXPRESS procedures. The functions and procedures allow complex statements to be
formulated with local variables, parameters and constants, very similar to a programming
language.

The EXPRESS language can describe local and global rules. For example:

ENTITY area_unit
   SUBTYPE OF (named_unit);
WHERE
   WR1: (SELF\named_unit.dimensions.length_exponent = 2) AND
        (SELF\named_unit.dimensions.mass_exponent = 0) AND
        (SELF\named_unit.dimensions.time_exponent = 0) AND
        (SELF\named_unit.dimensions.electric_current_exponent = 0) AND
        (SELF\named_unit.dimensions.thermodynamic_temperature_exponent = 0) AND
        (SELF\named_unit.dimensions.amount_of_substance_exponent = 0) AND
        (SELF\named_unit.dimensions.luminous_intensity_exponent = 0);
END_ENTITY; -- area_unit

This example specifies that an area_unit entity must represent the square of a length: the
attribute dimensions.length_exponent must equal 2 and all other exponents of the basic SI
units must be 0.

Another example:

TYPE day_in_week_number = INTEGER;
WHERE
   WR1: (1 <= SELF) AND (SELF <= 7);
END_TYPE; -- day_in_week_number

That is, a day-in-week value must lie between 1 and 7.
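
As noted above, WHERE rules may also invoke EXPRESS functions. The following purely
illustrative sketch (the names weekend_day and is_weekend are hypothetical) shows a
defined type whose WHERE rule calls a function:

FUNCTION is_weekend(d : day_in_week_number) : BOOLEAN;
   RETURN ((d = 6) OR (d = 7));    -- assumes days 6 and 7 denote the weekend
END_FUNCTION;

TYPE weekend_day = day_in_week_number;
WHERE
   WR1: is_weekend(SELF);          -- the rule is evaluated for every value of the type
END_TYPE; -- weekend_day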

In this way, rules can be attached to entities and types. More details on the given examples
can be found in ISO 10303-41.

Extended Enterprise Modeling Language

Example of EEML Goal modeling and process modeling.


Extended Enterprise Modeling Language (EEML) in software engineering is a
modelling language used for Enterprise modelling across a number of layers.

Overview
Extended Enterprise Modeling Language (EEML) is a modelling language which
combines structural modeling, business process modeling, goal modeling with goal
hierarchies, and resource modeling. It is used in practice to bridge the type of goal
modeling used in common requirements engineering to other modeling approaches. The
process logic in EEML is mainly expressed through nested structures of tasks and
decision points. The sequencing of tasks is expressed by the flow relation between
decision points. Each task has an input port and an output port, which are decision points
for modeling process logic.

EEML is intended to be a simple language, which makes it easy to update models. In
addition to capturing the various tasks (which can consist of several sub-tasks) and their
interdependencies, models show which roles perform each task, and the tools, services
and information they apply.

History
Extended Enterprise Modeling Language (EEML) was developed at the end of the 1990s in
the EU project EXTERNAL as an extension of the Action Port Model (APM) of S. Carlsen
(1998). The EXTERNAL project aimed to facilitate inter-organisational cooperation in
knowledge intensive industries. The hypothesis of the project was that interactive process
models form a suitable framework for tools and methodologies for dynamically
networked organisations. In the project, EEML (Extended Enterprise Modelling
Language) was first constructed as a common metamodel, designed to enable syntactic
and semantic interoperability.

It has been further developed in the EU project Unified Enterprise Modelling Language
(UEML) from 2002 to 2003 and in the ongoing ATHENA project. The objective of the
UEML Working Group has been to define, validate and disseminate a set of core language
constructs to support a Unified Language for Enterprise Modelling, named UEML, to
serve as a basis for interoperability within a smart organisation or a network of
enterprises.

EEML Topics
Modeling domains

The EEML language is divided into four sub-languages, with well-defined links across these
languages:

 Process modeling
 Data modeling
 Resource modeling
 Goal modeling

Process modeling supports the modeling of process logic, which is mainly expressed
through nested structures of tasks and decision points. The sequencing of the tasks is
expressed by the flow relation between decision points. Each task has at minimum an input
port and an output port, which are decision points for modeling process logic. Resource roles
are used to connect resources of various kinds (persons, organizations, information,
material objects, software tools and manual tools) to the tasks. In addition, data modeling
(using UML class diagrams), goal modeling and competency modeling (skill
requirements and skills possessed) can be integrated with the process models.

EEML Layers

EEML has four layers of interest:

 Generic Task Type: This layer identifies the constituent tasks of generic,
repetitive processes and the logical dependencies between these tasks.
 Specific Task Type: In this layer process models are expanded, concretised,
decomposed and specialised to facilitate business solutions.
 Manage Task Instances: Here, more detailed decisions are taken regarding work
in the actual work environment with its organisational, information, and tool
resources.
 Perform Task Instances: This layer covers the actual execution of tasks.

Goal Modelling

Goal Modelling is one of the four EEML modeling domains. A goal expresses the
wanted (or unwanted) state of affairs (either current or future) in a certain context.
An example of a goal model is depicted below. It shows goals and relationships between
them. It is possible to model advanced goal-relationships in EEML by using goal
connectors. A goal connector is used when one needs to link several goals.

Goal modeling in EEML

Connecting relationships


Goal modeling and process modeling

In goal modeling, to fulfil Goal1 one must achieve two other goals: both Goal2 and Goal3
(a goal connector with "and" as the logical relation going out). If Goal2 and Goal3 are two
different ways of achieving Goal1, then the logical relationship should be "xor". The
opposite situation can occur when both Goal2 and Goal3 need to be fulfilled and, to achieve
them, one must fulfil Goal1. In this case Goal2 and Goal3 are linked to a goal connector, and
this goal connector has a link to Goal1 with an "and" logical relationship.

The table indicates the different types of connecting relationships in EEML goal modeling.
A goal model can also be interlinked with a process model.

Goal modelling principles

Within requirements engineering (RE), the notion of goal has increasingly been used.
Goals generally describe objectives which a system should achieve through the cooperation
of actors in the intended software and in the environment. Goals are central in some RE
frameworks, and can play a supporting role in others. Goal-oriented techniques may be
particularly useful in early-phase RE. Early-phase requirements consider, for example, how
the intended system meets organizational goals, why the system is needed and how the
stakeholders' interests may be addressed.

 Expresses the relationships between systems and their environments: Earlier,
requirements engineering focused only on what the system is supposed to do.
Over the past years, a more or less mutual understanding has developed that it is
also very important to understand and characterize the interaction between the
intended system and its environment. Relationships between systems and their
environments are often expressed as goal-based relationships. The motivation for
this is "partly today's more dynamic business and organizational environments,
where systems are increasingly used to fundamentally change business processes
rather than to automate long-established practices". Goals can also be useful when
modelling contexts.
 Clarifies requirements: Specifying goals leads to asking "why", "how" and "how
else". Requirements of the stakeholders are often revealed in this process. The
stakeholders are more likely to become aware of potential alternatives for fulfilling
their goals, and thereby less likely to over-specify their requirements. Requirements
from clients and stakeholders may often be unclear, especially the non-functional
ones. A goal-oriented approach allows the requirements to be refined and clarified
through an incremental process, by analyzing requirements in terms of goal
decomposition.
 Deals with conflicts: Goals may provide a useful way of dealing with conflicts,
such as tradeoffs between cost, performance, flexibility, etc., and the divergent
interests of the stakeholders. Goals can deal with conflicts because meeting one
goal can interfere with meeting others. Different opinions on how to meet a
goal have led to different ways of handling conflicts.
 Decides requirements completeness: Requirements can be considered complete if
they fulfil explicit goals in the requirement model.
 Connects requirements to design: Goals can be used in order to connect the
requirements to the design. For some, goals are an important mechanism in this
matter. (The Non-Functional Requirements (NFR) framework uses goals to guide
the design process.)
Goal-oriented Requirements Language

GRL Notation

Goal-oriented Requirements Language (GRL) is a language designed to support
goal-oriented modeling and reasoning about requirements, especially non-functional
requirements. It allows conflicts between goals to be expressed and helps to make decisions
that resolve those conflicts. There are three main categories of concepts in GRL: intentional
elements, intentional relationships and actors. They are called intentional because
they are used in models that are primarily concerned with answering the "why" questions of
requirements (for example, why certain choices for behavior or structure were made, what
alternatives exist and what the reasons are for choosing a certain alternative).

Flowchart
A flowchart is a common type of diagram that represents an algorithm or process, showing
the steps as boxes of various kinds and their order by connecting them with arrows.
Flowcharts are used in analyzing, designing, documenting or managing a process or
program in various fields.

History
The first structured method for documenting process flow, the "flow process chart", was
introduced by Frank Gilbreth to members of ASME in 1921 as the presentation “Process
Charts—First Steps in Finding the One Best Way”. Gilbreth's tools quickly found their
way into industrial engineering curricula. In the early 1930s, an industrial engineer, Allan
H. Mogensen began training business people in the use of some of the tools of industrial
engineering at his Work Simplification Conferences in Lake Placid, New York.

A 1944 graduate of Mogensen's class, Art Spinanger, took the tools back to Procter and
Gamble where he developed their Deliberate Methods Change Program. Another 1944
graduate, Ben S. Graham, Director of Formcraft Engineering at Standard Register
Corporation, adapted the flow process chart to information processing with his
development of the multi-flow process chart to display multiple documents and their
relationships. In 1947, ASME adopted a symbol set derived from Gilbreth's original work
as the ASME Standard for Process Charts.
Douglas Hartree explains that Herman Goldstine and John von Neumann developed the
flow chart (originally, diagram) to plan computer programs. His contemporary account is
endorsed by IBM engineers and by Goldstine's personal recollections. The original
programming flow charts of Goldstine and von Neumann can be seen in their
unpublished report, "Planning and coding of problems for an electronic computing
instrument, Part II, Volume 1," 1947, which is reproduced in von Neumann's collected
works.

Flowcharts used to be a popular means for describing computer algorithms and are still
used for this purpose. Modern techniques such as UML activity diagrams can be
considered to be extensions of the flowchart. However, their popularity decreased when,
in the 1970s, interactive computer terminals and third-generation programming languages
became the common tools of the trade, since algorithms can be expressed much more
concisely and readably as source code in such a language. Often, pseudo-code is used,
which uses the common idioms of such languages without strictly adhering to the details
of a particular one.

Flowchart building blocks


Symbols

A typical flowchart from older Computer Science textbooks may have the following
kinds of symbols:

Start and end symbols
Represented as circles, ovals or rounded rectangles, usually containing the word
"Start" or "End", or another phrase signaling the start or end of a process, such as
"submit enquiry" or "receive product".
Arrows
Showing what's called "flow of control" in computer science. An arrow coming
from one symbol and ending at another symbol represents that control passes to
the symbol the arrow points to.
Processing steps
Represented as rectangles. Examples: "Add 1 to X"; "replace identified part";
"save changes" or similar.
Input/Output
Represented as a parallelogram. Examples: Get X from the user; display X.
Conditional or decision
Represented as a diamond (rhombus). These typically contain a Yes/No question
or True/False test. This symbol is unique in that it has two arrows coming out of
it, usually from the bottom point and right point, one corresponding to Yes or
True, and one corresponding to No or False. The arrows should always be labeled.
More than two arrows can be used, but this is normally a clear indicator that a
complex decision is being taken, in which case it may need to be broken down
further, or replaced with the "pre-defined process" symbol.
A number of other symbols have less universal currency, such as:

 A Document represented as a rectangle with a wavy base;
 A Manual input represented by a parallelogram, with the top irregularly sloping up
from left to right. An example would be to signify data-entry from a form;
 A Manual operation represented by a trapezoid with the longest parallel side at
the top, to represent an operation or adjustment to process that can only be made
manually;
 A Data File represented by a cylinder.

Flowcharts may contain other symbols, such as connectors, usually represented as circles,
to represent converging paths in the flowchart. Circles will have more than one arrow
coming into them but only one going out. Some flowcharts may just have an arrow point
to another arrow instead. These are useful to represent an iterative process (what in
Computer Science is called a loop). A loop may, for example, consist of a connector
where control first enters, processing steps, a conditional with one arrow exiting the loop,
and one going back to the connector. Off-page connectors are often used to signify a
connection to a (part of another) process held on another sheet or screen. It is important
to keep these connections in a logical order. All processes should flow from top to bottom
and left to right.

Examples

A flowchart for computing the factorial of N (N!), where N! = (1 * 2 * 3 * ... * N). This
flowchart represents a "loop and a half", a situation discussed in introductory
programming textbooks that requires either the duplication of a component (to be both
inside and outside the loop) or the component to be put inside a branch in the loop.
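
As a rough illustration of this "loop and a half" structure, the following minimal Java
sketch (the method and variable names are hypothetical, and it assumes N >= 1) puts the
multiplication in the first half of the loop body and the exit test in the middle, so that no
component has to be duplicated outside the loop:

static int factorial(int n) {        // assumes n >= 1
    int f = 1;                       // running product
    int m = 1;                       // current multiplier
    while (true) {
        f = f * m;                   // first half of the loop body: do the work
        if (m == n) {                // exit test in the middle of the loop
            break;
        }
        m = m + 1;                   // second half: advance to the next value
    }
    return f;
}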

Types of flowcharts
There are many different types of flowcharts. On the one hand there are different types
for different users, such as analysts, designers, engineers, managers, or programmers. On
the other hand those flowcharts can represent different types of objects. Sterneckert
(2003) identifies four more general types of flowcharts:

 Document flowcharts, showing a document flow through a system
 Data flowcharts, showing data flows in a system
 System flowcharts, showing controls at a physical or resource level
 Program flowcharts, showing the controls in a program within a system

However, there are several other classifications. For example, Andrew Veronis (1978)
named three basic types of flowcharts: the system flowchart, the general flowchart, and
the detailed flowchart. That same year Marilyn Bohl (1978) stated that "in practice, two
kinds of flowcharts are used in solution planning: system flowcharts and program
flowcharts...". More recently, Mark A. Fryman (2001) identified more variations: decision
flowcharts, logic flowcharts, systems flowcharts, product flowcharts, and process
flowcharts are "just a few of the different types of flowcharts that are used in business
and government".

Software
Manual

Any vector-based drawing program can be used to create flowchart diagrams, but these
will have no underlying data model to share data with databases or other programs such
as project management systems or spreadsheets. Some tools offer special support for
flowchart drawing, e.g., ConceptDraw, SmartDraw, Visio, and OmniGraffle.

Automatic

Many software packages exist that can create flowcharts automatically, either directly
from source code, or from a flowchart description language. For example, Graph::Easy, a
Perl package, takes a textual description of the graph, and uses the description to generate
various output formats including HTML, ASCII or SVG.

Web-based

Recently, online flowchart solutions have become available, e.g., DrawAnywhere. These are
easy to use and flexible, but do not match the power of offline software like Visio or
SmartDraw.


Fundamental modeling concepts


Fundamental Modeling Concepts (FMC) provide a framework for describing software-
intensive systems. FMC strongly emphasizes communication about software-intensive
systems by using a semi-formal graphical notation that can easily be understood.

Introduction
FMC distinguishes three perspectives to look at a software system:

 Structure of the system


 Processes in the system
 Value domains of the system

FMC defines a dedicated diagram type for each perspective. FMC diagrams use a simple
and lean notation. The purpose of FMC diagrams is to facilitate communication about
a software system, not only between technical experts but also between technical experts
and business or domain experts. The comprehensibility of FMC diagrams has made them
famous among their supporters.

The common approach when working with FMC is to start with a high-level diagram of
the compositional structure of a system. This “big picture” diagram serves as a reference
in the communication with all involved stakeholders of the project. Later on, the high-
level diagram is iteratively refined to model technical details of the system.
Complementary diagrams for processes observed in the system or value domains found in
the system are introduced as needed.

Diagram Types
FMC uses three diagram types to model different aspects of a system:

 Compositional Structure Diagram depicts the static structure of a system. This
diagram type is also known as FMC Block Diagram.
 Dynamic Structure Diagram depicts processes that can be observed in a system.
This diagram type is also known as FMC Petri-net.
 Value Range Structure Diagram depicts structures of values found in the
system. This diagram type is also known as FMC E/R Diagram.

All FMC diagrams are bipartite graphs. A bipartite graph consists of two disjoint sets
of vertices, with the condition that no vertex is connected to another vertex of the same
set. In FMC diagrams, members of one set are represented by angular shapes, and
members of the other set are represented by curved shapes. Each element in an FMC
diagram can be refined by another diagram of the same type, provided that the combined
graph is also bipartite. This mechanism allows all relevant layers of abstraction to be
modeled with the same notation.

Compositional Structure Diagram

Compositional structure diagrams depict the static structure of a system, and the
relationships between system components. System components can be active or passive.
Agents are active system components. They perform activities in the system. Storages
and channels are passive components which store or transmit information.

The image to the right is an example of a compositional structure diagram. It contains the
agents Order Processor, Supplier Manager, Supplier, Online Shop and an unnamed
human agent. Agents are represented by rectangles. The dots and the shadow of the
agent Supplier indicate that this agent has multiple instances, i.e. the Supplier Manager
communicates with one or many suppliers. The so-called human agent represents a user
interacting with the system.

The diagram contains the storages Orders, Purchase Order and Product Catalog.
Storages are represented by curved shapes. Agents can read from storages, write to
storages or modify the content of storages. The directions of the arrows indicate which
operation is performed by an agent. In the diagram, the Supplier Manager can modify the
content of the Product Catalog, whereas the Order Processor can only read the content
of the Product Catalog.
Agents communicate via channels. The direction of information flow is either indicated
by arrows (not shown in the picture), by a request-response-symbol (e.g. between
Supplier Manager and Supplier) or omitted (e.g. between Order Processor and Supplier
Manager).

Dynamic Structure Diagram

Dynamic structures are derived from Petri nets.

"They are used to express system behavior over time, depicting the actions
performed by the agents. So they clarify how a system is working and how
communication takes place between different agents."

Value Range Structure Diagram

Value range structure diagrams (also known as FMC Entity Relationship Diagrams) can
be compared with the Entity-relationship model.

"[They] are used to depict value range structures or topics as mathematical


structures. Value range structures describe observable values at locations within
the system whereas topic diagrams allow a much wider usage in order to cover all
correlations between interesting points."

IDEF

IDEF Methods: Part of the Systems Engineer’s Toolbox


IDEF (Integration DEFinition) is a family of modeling languages in the field of systems
and software engineering. They cover a wide range of uses, from functional modeling to
data, simulation, object-oriented analysis/design and knowledge acquisition. These
"definition languages" were developed under funding from the U.S. Air Force and,
although still most commonly used by the Air Force and other military and Department of
Defense (DoD) agencies, are in the public domain.

The most-widely recognized and used of the IDEF family are IDEF0, a functional
modeling language building on SADT, and IDEF1X, which addresses information
models and database design issues.

History
IDEF originally stood for ICAM Definition. The IDEF methods were initiated in the 1970s
at the US Air Force Materials Laboratory, Wright-Patterson Air Force Base in Ohio, by
Dennis E. Wisnosky, Dan L. Shunk and others, and finished being developed in the 1980s.
IDEF was a product of the Integrated Computer-Aided Manufacturing (ICAM) initiative
of the United States Air Force. "IDEF" initially stood for "ICAM DEFinition" language;
the IEEE standards recast IDEF as "Integration DEFinition."

The specific projects that produced IDEF were ICAM project priorities 111 and 112
(later renumbered 1102). The subsequent Integrated Information Support System (IISS)
project priorities 6201, 6202, and 6203 were an effort to create an information processing
environment that could be run in heterogeneous physical computing environments.
Further development of IDEF occurred under those projects as a result of experience
gained applying the new modeling techniques. The intent of the IISS efforts was to create
'generic subsystems' which could be used by a large number of collaborating enterprises,
such as U.S. Defense contractors and the armed forces of friendly nations.

Functional modeling

Example of an IDEF0 diagram: A function model of the process of "Maintain Reparable
Spares".
The IDEF0 Functional Modeling method is designed to model the decisions, actions, and
activities of an organization or system. It was derived from the established graphic
modeling language Structured Analysis and Design Technique (SADT) developed by
Douglas T. Ross and SofTech, Inc.. In its original form, IDEF0 includes both a definition
of a graphical modeling language (syntax and semantics) and a description of a
comprehensive methodology for developing models. The US Air Force commissioned
the SADT developers to develop a function model method for analyzing and
communicating the functional perspective of a system. IDEF0 should assist in organizing
system analysis and promote effective communication between the analyst and the
customer through simplified graphical devices.

Information modeling

At the time of the ICAM 1102 effort there were numerous, mostly incompatible, data
model methods for storing computer data — Sequential (VSAM), Hierarchical (IMS),
Network (Cincom's TOTAL and CODASYL, and Cullinet's IDMS). The relational data
model was just emerging as a promising way of thinking about structuring data for easy,
efficient, and accurate access. Relational Database Management Systems had not yet
emerged as a general standard for data management.

The ICAM program office deemed it valuable to create a "neutral" way of describing the
data content of large-scale systems. The emerging academic literature suggested that
methods were needed to process data independently of the way it was physically stored.
Thus the IDEF1 language was created to allow a neutral description of data structures,
that could be equally applied regardless of the storage method or file access method.

IDEF1 was developed under ICAM program priority 1102 by Dr. Robert R. Brown of the
Hughes Aircraft Company, under contract to SofTech, Inc. Dr. Brown had previously
been responsible for the development of IMS while working at Rockwell International
(Rockwell chose not to pursue IMS as a marketable product; International Business
Machines (IBM), which had served as a support contractor during development,
subsequently took over the product and was successful in further developing it for
market.) Dr. Brown credits his Hughes colleague Mr. Timothy Ramey as the inventor of
IDEF1 as a viable formalism for modeling information structures. The two Hughes
researchers built on ideas from and interactions with many luminaries in the field at the
time. In particular, IDEF1 draws on the following techniques:

 the Evolving Natural Language Information Model (ENALIM) technique of Dr.
G. M. Nijssen (Control Data Corporation), a technique now more widely
known as NIAM or the Object-Role Model (ORM);
 the network data structures technique, popularly called the CODASYL approach,
of Dr. Charles Bachman (Honeywell Information Systems);
 the hierarchical data management technique, implemented in IBM's IMS data
management system, developed by Dr. R. R. Brown (Rockwell International);
 the relational approach to data of Dr. E. F. Codd (IBM);
 The Entity-Relationship Approach (E-R) of Dr. Peter Chen (UCLA).
The effort to develop IDEF1 resulted in both a new method for information modeling and
an example of its use in the form of a "reference information model of manufacturing."
This latter artifact was developed by D. S. Coleman of the D. Appleton & Company
(DACOM) acting as a sub-contractor to Hughes and under the direction of Mr. Ramey.
Personnel at DACOM became quite expert at IDEF1 modeling and subsequently
produced a training course and accompanying materials for the IDEF1 modeling
technique.

Experience with IDEF1 revealed that the translation of information requirements into
database designs was more difficult than had originally been anticipated. The most
beneficial value of the IDEF1 information modeling technique was its ability to represent
data independent of how those data were to be stored and used. It provided data modelers
and data analysts with a way to represent data requirements during the requirements-
gathering process. This allowed designers to face the decision of which DBMS to use
under various circumstances after the nature of the data requirements was understood.
The result was reduction of the "misfit" of data requirements to the capabilities, and
limitations, of the DBMS. The translation from IDEF1 models to database designs proved
to be difficult, however.

IDEF1X

Example of an IDEF1X Diagram.

To satisfy the data modeling enhancement requirements that were identified in the IISS-
6202 project, a sub-contractor, DACOM, obtained a license to the Logical Database
Design Technique (LDDT) and its supporting software (ADAM). LDDT had been
developed in 1982 by Robert G. Brown of The Database Design Group entirely outside
the IDEF program and with no knowledge of IDEF1. LDDT combined elements of the
relational data model, the E-R model, and generalization in a way specifically intended to
support data modeling and the transformation of the data models into database designs.
The graphic syntax of LDDT differed from that of IDEF1 and, more importantly, LDDT
contained interrelated modeling concepts not present in IDEF1. Mary E. Loomis wrote a
concise summary of the syntax and semantics of a substantial subset of LDDT, using
terminology compatible with IDEF1 wherever possible. DACOM labeled the result
IDEF1X and supplied it to the ICAM program.

Because the IDEF program was funded by the government, the techniques are in the
public domain. In addition to the ADAM software, sold by DACOM under the name
Leverage, a number of CASE tools, such as ERwin, use IDEF1X as their representation
technique for data modeling.

The IISS projects actually produced working prototypes of an information processing
environment that would run in heterogeneous computing environments. Current
advancements in such techniques as Java and JDBC are now achieving the goals of
ubiquity and versatility across computing environments which were first demonstrated by
IISS.

IDEF2 and IDEF3

Example of an Enhanced Transition Schematic, modelled with IDEF3.

The third IDEF (IDEF2) was originally intended as a user interface modeling method.
However, since the Integrated Computer-Aided Manufacturing (ICAM) Program needed
a simulation modeling tool, the resulting IDEF2 was a method for representing the time
varying behavior of resources in a manufacturing system, providing a framework for
specification of math model based simulations. It was the intent of the methodology
program within ICAM to rectify this situation, but limited funding did not allow this
to happen. As a result, the lack of a method which would support the structuring of
descriptions of the user view of a system has been a major shortcoming of the IDEF
system. The basic problem from a methodology point of view is the need to distinguish
between a description of what a system (existing or proposed) is supposed to do and a
representative simulation model that will predict what a system will do. The latter was
the focus of IDEF2, the former is the focus of IDEF3.

IDEF4
Example of IDEF4: A behavior diagram for methods implementing Louder.

The development of IDEF4 came from the recognition that the modularity,
maintainability and code reusability that results from the object oriented programming
paradigm can be realized in traditional data processing applications. The proven ability of
the object oriented programming paradigm to support data level integration in large
complex distributed systems is also a major factor in the widespread interest in this
technology from the traditional data processing community.

IDEF4 was developed as a design tool for software designers who use object-oriented
languages such as the Common LISP Object System, Flavors, C++, SmallTalk, Objective
C and others. Since effective usage of the object-oriented paradigm requires a different
thought process than used with conventional procedural or database languages, standard
methodologies such as structure charts, data flow diagrams, and traditional data design
models (hierarchical, relational, and network) are not sufficient. IDEF4 seeks to provide
the necessary facilities to support the object-oriented design decision making process.

IDEF5

Example of an IDEF5 Composition Schematic for a Ballpoint Pen.

IDEF5 or Integrated Definition for Ontology Description Capture Method is a software
engineering method to develop and maintain usable, accurate domain ontologies. In the
field of computer science, ontologies are used to capture the concepts and objects in a
specific domain, along with associated relationships and meanings. In addition, ontology
capture helps coordinate projects by standardizing terminology and creates opportunities
for information reuse. The IDEF5 Ontology Capture Method has been developed to
reliably construct ontologies in a way that closely reflects human understanding of the
specific domain.

In the IDEF5 method, an ontology is constructed by capturing the content of certain
assertions about real-world objects, their properties, and their interrelationships and
representing that content in an intuitive and natural form. The IDEF5 method has three
main components: a graphical language to support conceptual ontology analysis, a
structured text language for detailed ontology characterization, and a systematic
procedure that provides guidelines for effective ontology capture.

IDEF6

IDEF6 model of IDEF4 Design Activities

IDEF6 or Integrated Definition for Design Rationale Capture is a method to facilitate the
acquisition, representation, and manipulation of the design rationale used in the
development of enterprise systems. Rationale is the reason, justification, underlying
motivation, or excuse that moved the designer to select a particular strategy or design
feature. More simply, rationale is interpreted as the answer to the question, “Why is this
design being done in this manner?” Most design methods focus on the what the design is
(i.e., on the final product, rather than why the design is the way it is).

IDEF6 will be a method that possesses the conceptual resources and linguistic
capabilities needed (i) to represent the nature and structure of the information that
constitutes design rationale within a given system, and (ii) to associate that rationale with
design specifications, models, and documentation for the system. The scope of IDEF6
applicability covers all phases of the information system development process, from
initial conceptualization through both preliminary and detailed design activities. To the
extent that detailed design decisions for software systems are relegated to the coding
phase, the IDEF6 technique should be usable during the software construction process as
well.

IDEF8
IDEF8 or Integrated Definition for Human-System Interaction Design is a method for
producing high-quality designs of the interactions that occur between users and the
systems they operate. Systems are characterized as a collection of objects which perform
functions to accomplish a particular goal. The system with which the user interacts can be
any system, not necessarily a computer program. Human-system interactions are
designed at three levels of specification within the IDEF8 method. The first level defines
the philosophy of system operation and produces a set of models and textual descriptions
of overall system processes. The second level of design specifies role-centered scenarios
of system use. The third level of IDEF8 design is for human-system design detailing. At
this level of design, IDEF8 provides a library of metaphors to help users and designers
specify the desired behavior in terms of other objects whose behavior is more familiar.
Metaphors provide a model of abstract concepts in terms of familiar, concrete objects and
experiences.

IDEF9

Typical business systems.

IDEF9 or Integrated Definition for Business Constraint Discovery is designed to assist in
the discovery and analysis of constraints in a business system. A primary motivation
driving the development of IDEF9 was an acknowledgment that the collection of
constraints that forge an enterprise system is generally poorly defined. The knowledge of
what constraints exist and how those constraints interact is incomplete, disjoint,
distributed, and often completely unknown. This situation is not necessarily alarming.
Just as living organisms do not need to be aware of the genetic or autonomous constraints
that govern certain behaviors, organizations can (and most do) perform well without
explicit knowledge of the glue that structures the system. However, if the desire exists to
modify the business in a predictable manner, the knowledge of these constraints is as
critical as knowledge of genetics is to the genetic engineer.

IDEF14
IDEF14 or Integrated Definition for Network Design Method is a method that targets the
modeling and design of computer and communication networks. It can be used to model
existing ("as is") computer networks or envisioned ("to be") computer networks. It helps
the network designer to work with potential ("what if") network designs and to document
design rationale. The fundamental goals of the IDEF14 method research project developed
from a perceived need for good network designs that can be implemented quickly and
accurately.

IDEF Methods
Eventually, the IDEF methods have been defined up to IDEF14:

 IDEF0 : Function Modeling
 IDEF1 : Information Modeling
 IDEF1X : Data Modeling
 IDEF2 : Simulation Model Design
 IDEF3 : Process Description Capture
 IDEF4 : Object-Oriented Design
 IDEF5 : Ontology Description Capture
 IDEF6 : Design Rationale Capture
 IDEF7 : Information System Auditing
 IDEF8 : User Interface Modeling
 IDEF9 : Business Constraint Discovery
 IDEF10 : Implementation Architecture Modeling
 IDEF11 : Information Artifact Modeling
 IDEF12 : Organization Modeling
 IDEF13 : Three Schema Mapping Design
 IDEF14 : Network Design

In 1995 only IDEF0, IDEF1X, IDEF2, IDEF3 and IDEF4 had been developed in full.
Some of the other IDEF concepts had only a preliminary design. Some of the last efforts
were new IDEF developments in 1995 toward establishing reliable methods for business
constraint discovery (IDEF9), design rationale capture (IDEF6), human-system interaction
design (IDEF8), and network design (IDEF14).

As of 2009, the methods IDEF7, IDEF10, IDEF11, IDEF12 and IDEF13 have not been
developed any further than their initial definition.

Jackson Structured Programming


Jackson Structured Programming or JSP is a method for structured programming
based on correspondences between data stream structure and program structure. JSP
structures programs and data in terms of sequences, iterations and selections, and as a
consequence it is applied when designing a program's detailed control structure, below
the level where object-oriented methods become important.
Introduction
JSP was originally developed in the 1970s by Michael A. Jackson and documented in his
1975 book Principles of Program Design. Jackson's aim was to make COBOL batch file
processing programs easier to modify and maintain, but the method can be used to design
programs for any programming language that has structured control constructs, such as C,
Java and Perl. Despite its age, JSP is still in use and is supported by diagramming tools
such as Microsoft's Visio and CASE tools such as Jackson Workbench.

Jackson Structured Programming was seen by many as related to Warnier Structured
Programming, but the latter method focused almost exclusively on the structure of the
output stream. JSP and Warnier's method both structure programs and data using only
sequences, iterations and selections, so they essentially create programs that are parsers
for regular expressions which simultaneously match the program's input and output data
streams.

Because JSP focuses on the existing input and output data streams, designing a program
using JSP is claimed to be more straightforward than with other structured programming
methods, avoiding the leaps of intuition needed to successfully program using methods
such as top-down decomposition.

Another consequence of JSP's focus on data streams is that it creates program designs
with a very different structure to the kind created by the stepwise refinement methods of
Wirth and Dijkstra. One typical feature of the structure of JSP programs is that they have
several input operations distributed throughout the code in contrast to programs designed
using stepwise refinement, which tend to have only one input operation. Jackson
illustrates this difference in Chapter 3 of Principles of Program Design. He presents two
versions of a program, one designed using JSP, the other using 'traditional' methods.

Structural equivalent
The JSP version of the program is structurally equivalent to

String line;

line = in.readLine();
while (line != null) {
    int count = 0;
    String firstLineOfGroup = line;

    while (line != null && line.equals(firstLineOfGroup)) {
        count++;
        line = in.readLine();
    }
    System.out.println(firstLineOfGroup + " " + count);
}
and the traditional version of the program is equivalent to

String line;

int count = 0;
String firstLineOfGroup = null;
while ((line = in.readLine()) != null) {
    if (firstLineOfGroup == null
            || !line.equals(firstLineOfGroup)) {
        if (firstLineOfGroup != null) {
            System.out.println(firstLineOfGroup + " " + count);
        }
        count = 0;
        firstLineOfGroup = line;
    }
    count++;
}
if (firstLineOfGroup != null) {
    System.out.println(firstLineOfGroup + " " + count);
}

Jackson criticises the traditional version, claiming that it hides the relationships which
exist between the input lines, compromising the program's understandability and
maintainability by, for example, forcing the use of a special case for the first line and
another special case for the final output operation.

The method
JSP uses semi-formal steps to capture the existing structure of a program's inputs and
outputs in the structure of the program itself.

The intent is to create programs which are easy to modify over their lifetime. Jackson's
major insight was that requirement changes are usually minor tweaks to the existing
structures. For a program constructed using JSP, the inputs, the outputs, and the internal
structures of the program all match, so small changes to the inputs and outputs should
translate into small changes to the program.

JSP structures programs in terms of four component types:

 fundamental operations
 sequences
 iterations
 selections

The method begins by describing a program's inputs in terms of the four fundamental
component types. It then goes on to describe the program's outputs in the same way. Each
input and output is modelled as a separate Data Structure Diagram (DSD). To make JSP
work for compute-intensive applications, such as digital signal processing (DSP) it is also
necessary to draw algorithm structure diagrams, which focus on internal data structures
rather than input and output ones.

The input and output structures are then unified or merged into a final program structure,
known as a Program Structure Diagram (PSD). This step may involve the addition of a
small amount of high level control structure to marry up the inputs and outputs. Some
programs process all the input before doing any output, whilst others read in one record,
write one record and iterate. Such approaches have to be captured in the PSD.

The PSD, which is language neutral, is then implemented in a programming language.


JSP is geared towards programming at the level of control structures, so the implemented
designs use just primitive operations, sequences, iterations and selections. JSP is not used
to structure programs at the level of classes and objects, although it can helpfully
structure control flow within a class's methods.

JSP uses a diagramming notation to describe the structure of inputs, outputs and
programs, with diagram elements for each of the fundamental component types.

A simple operation is drawn as a box.

An operation

A sequence of operations is represented by boxes connected with lines. In the example
below, operation A consists of the sequence of operations B, C and D.

A sequence

An iteration is again represented with joined boxes. In addition the iterated operation has
a star in the top right corner of its box. In the example below, operation A consists of an
iteration of zero or more invocations of operation B.

An iteration

Selection is similar to a sequence, but with a circle drawn in the top right hand corner of
each optional operation. In the example, operation A consists of one and only one of
operations B, C or D.

A selection
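
Because the four JSP component types correspond directly to structured control
constructs, a diagram can be transcribed into code almost mechanically. The following
fragment is an illustrative sketch, not taken from the text above; the operation names and
predicates are hypothetical, and it simply shows how a sequence, an iteration and a
selection might appear in Java.

// A minimal, hypothetical sketch of how JSP components map onto Java
// control structures; the operations and predicates are illustrative only.
import java.util.Iterator;

public class JspComponents {

    // Sequence: operation A consists of B, then C, then D.
    void sequence() {
        operationB();
        operationC();
        operationD();
    }

    // Iteration: operation A consists of zero or more invocations of B,
    // guarded by the condition added when the design is fleshed out.
    void iteration(Iterator<String> input) {
        while (input.hasNext()) {
            process(input.next());
        }
    }

    // Selection: operation A consists of exactly one of B, C or D.
    void selection(int kind) {
        if (kind == 1) {
            operationB();
        } else if (kind == 2) {
            operationC();
        } else {
            operationD();
        }
    }

    // Stub operations standing in for the primitive operations of a real design.
    void operationB() { }
    void operationC() { }
    void operationD() { }
    void process(String item) { }
}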

A worked example
As an example, here is how a programmer would design and code a run length encoder
using JSP.

A run length encoder is a program which takes as its input a stream of bytes. It outputs a
stream of pairs consisting of a byte along with a count of the byte's consecutive
occurrences. Run length encoders are often used for crudely compressing bitmaps.

With JSP, the first step is to describe the structure of a program's inputs. A run length
encoder has only one input, a stream of bytes which can be viewed as zero or more runs.
Each run consists of one or more bytes of the same value. This is represented by the
following JSP diagram.

The run length encoder input

The second step is to describe the structure of the output. The run length encoder output
can be described as zero or more pairs, each pair consisting of a byte and its count. In this
example, the count will also be a byte.

The run length encoder output

The next step is to describe the correspondences between the operations in the input and
output structures.

The correspondences between the run length encoders inputs and its outputs

It is at this stage that the astute programmer may encounter a structure clash, in which
there is no obvious correspondence between the input and output structures. If a structure
clash is found, it is usually resolved by splitting the program into two parts, using an
intermediate data structure to provide a common structural framework with which the
two program parts can communicate. The two program parts are often implemented as
processes or coroutines.
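
As a rough, hypothetical sketch of this idea (not an example from Jackson's book, and
assuming Java 16 or later for the record syntax), the fragment below splits a program into
a part structured around its input and a part structured around its output, bridged by an
intermediate record queue; in a classical JSP setting the same decomposition would be
expressed with coroutines or program inversion.

import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Hypothetical illustration: two program parts bridged by an intermediate
// data structure that both parts agree on.
public class StructureClashExample {

    record Item(String key, int value) { }

    public static void main(String[] args) {
        Queue<Item> intermediate = new ArrayDeque<>();

        // Part 1 follows the structure of the *input* stream.
        for (String line : List.of("a=1", "b=2", "c=3")) {
            String[] fields = line.split("=");
            intermediate.add(new Item(fields[0], Integer.parseInt(fields[1])));
        }

        // Part 2 follows the structure of the *output* stream.
        while (!intermediate.isEmpty()) {
            Item item = intermediate.remove();
            System.out.println(item.key() + " -> " + item.value());
        }
    }
}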

In this example, there is no structure clash, so the two structures can be merged to give
the final program structure.

The run length encoder program structure

At this stage the program can be fleshed out by hanging various primitive operations off
the elements of the structure. Primitives which suggest themselves are

1. read a byte
2. remember byte
3. set counter to zero
4. increment counter
5. output remembered byte
6. output counter

The iterations also have to be fleshed out. They need conditions added. Suitable
conditions would be

1. while there are more bytes
2. while there are more bytes and this byte is the same as the run's first byte and the
   count will still fit in a byte

If we put all this together, we can convert the diagram and the primitive operations into
C, maintaining a one-to-one correspondence between the code and the operations and
structure of the program design diagram.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int c;

    c = getchar();
    while (c != EOF) {
        int count = 1;
        int first_byte = c;

        c = getchar();
        while (c != EOF && c == first_byte && count < 255) {
            count++;
            c = getchar();
        }
        putchar(first_byte);
        putchar(count);
    }
    return EXIT_SUCCESS;
}

Criticism
The method will work only when the translation from input to output is equivalent to a
context-free grammar.

LePUS3
LePUS3 is an object-oriented, visual Design Description Language, namely a software
modelling language and a formal specification language that is suitable primarily for
modelling large object-oriented (Java, C++, C#) programs and design motifs such as
design patterns. It is defined as an axiomatized subset of first-order predicate logic.

LePUS is an abbreviation for Language for Pattern Uniform Specification.

Purpose
LePUS3 is tailored for the following purposes:

 Scalability: To model industrial-scale programs using small charts with only a few
symbols
 Automated verifiability: To allow programmers to continuously keep the design in
synch with the implementation
 Program visualization: To allow tools to reverse-engineer legible charts from
plain source code modelling their design
 Pattern implementation: To allow tools to determine automatically whether your
program implements a design pattern
 Design abstraction: To specify unimplemented programs without committing
prematurely to implementation minutiae
 Genericity: To model a design pattern not as a specific implementation but as a
design motif
 Rigour: To allow software designers to be sure exactly what design charts mean
and reason rigorously about them

Context
LePUS3 belongs to the following families of languages:

 Object oriented software modelling languages (e.g., UML): LePUS3 is a visual
notation that is used to represent the building blocks in the design of programs in
object-oriented programming languages.
 Formal specification languages: Like other logic visual languages, LePUS3
charts articulate sentences in mathematical logic. LePUS3 is axiomatized in and
defined as a recursive (Turing-decidable) subset of first-order predicate calculus.
Its semantics are defined using finite structures (mathematical logic).
 Architecture Description Languages: LePUS3 is a non-functional specification
language used to represent design decisions about programs in class-based object-
oriented programming languages (such as Java and C++).
 Tool supported specification languages: Verification of LePUS3 charts (checking
their consistency with a Java 1.4 program) can be established (‘verified’) by a
click of a button, as demonstrated by the Two-Tier Programming Toolkit.
 Program Visualization Notations are notations which offer a graphical
representation of the program, normally generated by reverse-engineering the
source code of the program.

Vocabulary
LePUS3 was designed for parsimony and economy of expression: its vocabulary
consists of only 15 visual tokens.

Tool support
Version 0.5.1 of the Two-Tier Programming Toolkit can be used to create LePUS3
specifications (charts), automatically verifying their consistency with Java 1.4 programs,
and for reverse-engineering these charts from Java source code.

Design Patterns
LePUS3 was specifically designed to model, among others, the 'Gang of Four' design
patterns, including Abstract Factory, Factory Method, Adapter, Decorator, Composite,
Proxy, Iterator, State, Strategy, Template Method, and Visitor. (See "The 'Gang of Four'
Companion") The abbreviation LePUS for "Language for Pattern Uniform Specification"
became because the precursor of this language was primarily concerned with design
patterns.

Examples
LePUS3 is particularly suitable for modelling large programs, design patterns, and
object-oriented application frameworks. It is unsuitable for modelling non object-oriented
programs, architectural styles, and undecidable and semi-decidable properties.

(Example figures: the Closable inheritance hierarchy of package java.io in LePUS3, the
Factory Method pattern in LePUS3, and the Enterprise JavaBeans in LePUS3.)

Unified Modeling Language


A collage of UML diagrams.

Unified Modeling Language (UML) is a standardized general-purpose modeling
language in the field of software engineering.

UML includes a set of graphical notation techniques to create abstract models of specific
systems.

Overview
The Unified Modeling Language (UML) is an open method used to specify, visualize,
modify, construct and document the artifacts of an object-oriented, software-intensive
system under development. UML offers a standard way to write a system's blueprints,
including conceptual components such as:

 actors,
 business processes and
 system components and activities

as well as concrete things such as:

 programming language statements,
 database schemas, and
 reusable software components.

UML combines best practices from data modeling concepts such as entity relationship
diagrams, business modeling (work flow), object modeling and component modeling. It
can be used with all processes, throughout the software development life cycle, and
across different implementation technologies. UML has succeeded the concepts of the
Booch method, the Object-modeling technique (OMT) and Object-oriented software
engineering (OOSE) by fusing them into a single, common and widely usable modeling
language. UML aims to be a standard modeling language which can model concurrent
and distributed systems. UML is not an industry standard, but is taking shape under the
auspices of the Object Management Group (OMG). The OMG initially called for
information on object-oriented methodologies that might lead to a rigorous software
modeling language; many industry leaders responded in earnest to help create the
standard.

UML models may be automatically transformed to other representations (e.g. Java) by
means of QVT-like transformation languages, supported by the OMG. UML is
extensible, offering the following mechanisms for customization: profiles and stereotypes.
The semantics of extension by profiles have been improved with the UML 2.0 major
revision.

History

History of object-oriented methods and notation.

Before UML 1.x

After Rational Software Corporation hired James Rumbaugh from General Electric in
1994, the company became the source for the two most popular object-oriented modeling
approaches of the day: Rumbaugh's OMT, which was better for object-oriented analysis
(OOA), and Grady Booch's Booch method, which was better for object-oriented design
(OOD). Together Rumbaugh and Booch attempted to reconcile their two approaches and
started work on a Unified Method.

They were soon assisted in their efforts by Ivar Jacobson, the creator of the object-
oriented software engineering (OOSE) method. Jacobson joined Rational in 1995, after
his company, Objectory AB, was acquired by Rational. The three methodologists were
collectively referred to as the Three Amigos, since they were well known to argue
frequently with each other regarding methodological practices.

In 1996 Rational concluded that the abundance of modeling languages was slowing the
adoption of object technology, so it repositioned the work on a unified method and tasked
the Three Amigos with the development of a non-proprietary Unified Modeling
Language. Representatives of competing object technology companies were consulted
during OOPSLA '96; they chose boxes for representing classes over Grady Booch's
Booch method's notation that used cloud symbols.

Under the technical leadership of the Three Amigos, an international consortium called
the UML Partners was organized in 1996 to complete the Unified Modeling Language
(UML) specification, and propose it as a response to the OMG RFP. The UML Partners'
UML 1.0 specification draft was proposed to the OMG in January 1997. During the same
month the UML Partners formed a Semantics Task Force, chaired by Cris Kobryn and
administered by Ed Eykholt, to finalize the semantics of the specification and integrate it
with other standardization efforts. The result of this work, UML 1.1, was submitted to the
OMG in August 1997 and adopted by the OMG in November 1997.

UML 1.x

As a modeling notation, the influence of the OMT notation dominates (e. g., using
rectangles for classes and objects). Though the Booch "cloud" notation was dropped, the
Booch capability to specify lower-level design detail was embraced. The use case
notation from Objectory and the component notation from Booch were integrated with
the rest of the notation, but the semantic integration was relatively weak in UML 1.1, and
was not really fixed until the UML 2.0 major revision.

Concepts from many other OO methods were also loosely integrated with UML with the
intent that UML would support all OO methods. For example CRC Cards (circa 1989
from Kent Beck and Ward Cunningham), and OORam were retained. Many others also
contributed, with their approaches flavoring the many models of the day, including: Tony
Wasserman and Peter Pircher with the "Object-Oriented Structured Design (OOSD)"
notation (not a method), Ray Buhr's "Systems Design with Ada", Archie Bowen's use
case and timing analysis, Paul Ward's data analysis and David Harel's "Statecharts"; as
the group tried to ensure broad coverage in the real-time systems domain. As a result,
UML is useful in a variety of engineering problems, from single process, single user
applications to concurrent, distributed systems, making UML rich but also large.

The Unified Modeling Language is an international standard:

ISO/IEC 19501:2005 Information technology — Open Distributed Processing —
Unified Modeling Language (UML) Version 1.4.2

Development toward UML 2.0


UML has matured significantly since UML 1.1. Several minor revisions (UML 1.3, 1.4,
and 1.5) fixed shortcomings and bugs with the first version of UML, followed by the
UML 2.0 major revision that was adopted by the OMG in 2005.

There are four parts to the UML 2.x specification:

1. the Superstructure that defines the notation and semantics for diagrams and their
model elements;
2. the Infrastructure that defines the core metamodel on which the Superstructure is
based;
3. the Object Constraint Language (OCL) for defining rules for model elements;
4. and the UML Diagram Interchange that defines how UML 2 diagram layouts are
exchanged.

The current versions of these standards follow: UML Superstructure version 2.2, UML
Infrastructure version 2.2, OCL version 2.0, and UML Diagram Interchange version 1.0.

Although many UML tools support some of the new features of UML 2.x, the OMG
provides no test suite to objectively test compliance with its specifications.

Unified Modeling Language topics


Software Development Methods

UML is not a development method by itself; however, it was designed to be compatible
with the leading object-oriented software development methods of its time (for example
OMT, Booch method, Objectory). Since UML has evolved, some of these methods have
been recast to take advantage of the new notations (for example OMT), and new methods
have been created based on UML. The best known is IBM Rational Unified Process
(RUP). There are many other UML-based methods like Abstraction Method, Dynamic
Systems Development Method, and others, designed to provide more specific solutions,
or achieve different objectives.

Modeling

It is very important to distinguish between the UML model and the set of diagrams of a
system. A diagram is a partial graphical representation of a system's model. The model
also contains a "semantic backplane" — documentation such as written use cases that
drive the model elements and diagrams.

UML diagrams represent two different views of a system model:

 Static (or structural) view: Emphasizes the static structure of the system using
objects, attributes, operations and relationships. The structural view includes class
diagrams and composite structure diagrams.
 Dynamic (or behavioral) view: Emphasizes the dynamic behavior of the system
by showing collaborations among objects and changes to the internal states of
objects. This view includes sequence diagrams, activity diagrams and state
machine diagrams.

UML models can be exchanged among UML tools by using the XMI interchange format.

Diagrams overview

UML 2.0 has 13 types of diagrams divided into three categories. Six diagram types
represent the structure of the application; seven represent general types of behavior,
including four that represent different aspects of interactions.

UML does not restrict UML element types to a certain diagram type. In general, every
UML element may appear on almost all types of diagrams. This flexibility has been
partially restricted in UML 2.0.

In keeping with the tradition of engineering drawings, a comment or note explaining
usage, constraint, or intent is allowed in a UML diagram.

Structure diagrams

Structure diagrams emphasize what things must be in the system being modeled:

 Class diagram: describes the structure of a system by showing the system's
classes, their attributes, and the relationships among the classes.
 Component diagram: depicts how a software system is split up into components
and shows the dependencies among these components.
 Composite structure diagram: describes the internal structure of a class and the
collaborations that this structure makes possible.
 Deployment diagram: serves to model the hardware used in system
implementations, and the execution environments and artifacts deployed on the
hardware.
 Object diagram: shows a complete or partial view of the structure of a modeled
system at a specific time.
 Package diagram: depicts how a system is split up into logical groupings by
showing the dependencies among these groupings.


Since structure diagrams represent the structure of a system, they are used extensively in
documenting the architecture of software systems.
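
To make the correspondence with source code concrete, the hypothetical Java fragment
below shows the kind of structure a class diagram summarises: two classes, their
attributes, and a one-to-many association between them. The class names are illustrative
only and are not taken from any standard example.

import java.util.ArrayList;
import java.util.List;

// Hypothetical classes whose structure a class diagram would depict:
// attributes appear in the class compartments, and the Customer-to-Order
// association appears as a line between the two class boxes with a 0..*
// multiplicity on the Order end.
class Order {
    private final String id;
    private final double amount;

    Order(String id, double amount) {
        this.id = id;
        this.amount = amount;
    }
}

class Customer {
    private final String name;
    private final List<Order> orders = new ArrayList<>(); // the association

    Customer(String name) {
        this.name = name;
    }

    void addOrder(Order order) {
        orders.add(order);
    }
}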

Behavior diagrams

Behavior diagrams emphasize what must happen in the system being modeled:

 Activity diagram: represents the business and operational step-by-step workflows
of components in a system. An activity diagram shows the overall flow of control.
 State machine diagram: standardized notation to describe many systems, from
computer programs to business processes.
 Use case diagram: shows the functionality provided by a system in terms of
actors, their goals represented as use cases, and any dependencies among those
use cases.


Since behaviour diagrams illustrate the behaviour of a system, they are used extensively
to describe the functionality of software systems.

Interaction diagrams

Interaction diagrams, a subset of behavior diagrams, emphasize the flow of control and
data among the things in the system being modeled:

 Communication diagram: shows the interactions between objects or parts in terms
of sequenced messages. They represent a combination of information taken from
Class, Sequence, and Use Case Diagrams describing both the static structure and
dynamic behavior of a system.
 Interaction overview diagram: a type of activity diagram in which the nodes
represent interaction diagrams.
 Sequence diagram: shows how objects communicate with each other in terms of a
sequence of messages. Also indicates the lifespans of objects relative to those
messages.
 Timing diagram: a specific type of interaction diagram where the focus is on
timing constraints.

The Protocol State Machine is a sub-variant of the State Machine. It may be used to
model network communication protocols.

Meta modeling

The Object Management Group (OMG) has developed a metamodeling architecture to
define the Unified Modeling Language (UML), called the Meta-Object Facility (MOF).
The Meta-Object Facility is a standard for model-driven engineering, designed as a four-
layered architecture. It provides a meta-meta model at the top layer, called the
M3 layer. This M3-model is the language used by Meta-Object Facility to build
metamodels, called M2-models. The most prominent example of a Layer 2 Meta-Object
Facility model is the UML metamodel, the model that describes the UML itself. These
M2-models describe elements of the M1-layer, and thus M1-models. These would be, for
example, models written in UML. The last layer is the M0-layer or data layer. It is used
to describe real-world objects.

Beyond the M3-model, the Meta-Object Facility describes the means to create and
manipulate models and metamodels by defining CORBA interfaces that describe those
operations. Because of the similarities between the Meta-Object Facility M3-model and
UML structure models, Meta-Object Facility metamodels are usually modeled as UML
class diagrams. A supporting standard of Meta-Object Facility is XMI, which defines an
XML-based exchange format for models on the M3-, M2-, or M1-Layer.

Criticisms
Although UML is a widely recognized and used modeling standard, it is frequently
criticized for the following deficiencies:

Language bloat
Bertrand Meyer, in a satirical essay framed as a student's request for a grade
change, apparently criticized UML as of 1997 for being unnecessarily large; a
disclaimer was added later pointing out that his company nevertheless supports
UML. Ivar Jacobson, a co-architect of UML, said that objections to UML 2.0's
size were valid enough to consider the application of intelligent agents to the
problem. It contains many diagrams and constructs that are redundant or
infrequently used.
Problems in learning and adopting
The problems cited above can make learning and adopting UML problematic,
especially when required of engineers lacking the prerequisite skills. In practice,
people often draw diagrams with the symbols provided by their CASE tool, but
without the meanings those symbols are intended to provide.
Cumulative Impedance/Impedance Mismatching
As with any notational system, UML is able to represent some systems more
concisely or efficiently than others. Thus a developer gravitates toward solutions
that reside at the intersection of the capabilities of UML and the implementation
language. This problem is particularly pronounced if the implementation language
does not adhere to orthodox object-oriented doctrine, as the intersection set
between UML and implementation language may be that much smaller.
Dysfunctional interchange format
While the XMI (XML Metadata Interchange) standard is designed to facilitate the
interchange of UML models, it has been largely ineffective in the practical
interchange of UML 2.x models. This interoperability ineffectiveness is
attributable to two reasons. Firstly, XMI 2.x is large and complex in its own right,
since it purports to address a technical problem more ambitious than exchanging
UML 2.x models. In particular, it attempts to provide a mechanism for facilitating
the exchange of any arbitrary modeling language defined by the OMG's Meta-
Object Facility (MOF). Secondly, the UML 2.x Diagram Interchange
specification lacks sufficient detail to facilitate reliable interchange of UML 2.x
notations between modeling tools. Since UML is a visual modeling language, this
shortcoming is substantial for modelers who don't want to redraw their diagrams.

Modeling experts have written sharp criticisms of UML, including Bertrand Meyer's
"UML: The Positive Spin", and Brian Henderson-Sellers in "Uses and Abuses of the
Stereotype Mechanism in UML 1.x and 2.0".
Chapter-3
Software development & Software
Testing

Software development process

Activities and steps

Requirements · Specification
Architecture · Design
Implementation · Testing
Deployment · Maintenance

Models

Agile · Cleanroom · DSDM
Iterative · RAD · RUP · Spiral
Waterfall · XP · Scrum · Lean
V-Model · FDD

Supporting disciplines

Configuration management
Documentation
Quality assurance (SQA)
Project management
User experience design

Tools
Compiler · Debugger · Profiler
GUI designer
Integrated development environment

Software development is the set of activities that results in software products. Software
development may include research, new development, modification, reuse, re-
engineering, maintenance, or any other activities that result in software products.
The first phase of the software development process, in particular, may involve many
departments, including marketing, engineering, research and development, and general
management.

The term software development may also refer to computer programming, the process of
writing and maintaining the source code.

Overview
There are several different approaches to software development, much like the various
views of political parties toward governing a country. Some take a more structured,
engineering-based approach to developing business solutions, whereas others may take a
more incremental approach, where software evolves as it is developed piece-by-piece.
Most methodologies share some combination of the following stages of software
development:

 Market research
 Gathering requirements for the proposed business solution
 Analyzing the problem
 Devising a plan or design for the software-based solution
 Implementation (coding) of the software
 Testing the software
 Deployment
 Maintenance and bug fixing

These stages are often referred to collectively as the software development lifecycle, or
SDLC. Different approaches to software development may carry out these stages in
different orders, or devote more or less time to different stages. The level of detail of the
documentation produced at each stage of software development may also vary. These
stages may also be carried out in turn (a “waterfall” based approach), or they may be
repeated over various cycles or iterations (a more "extreme" approach). The more
extreme approach usually involves less time spent on planning and documentation, and
more time spent on coding and development of automated tests. More “extreme”
approaches also promote continuous testing throughout the development lifecycle, as
well as having a working (or bug-free) product at all times. More structured or
“waterfall” based approaches attempt to assess the majority of risks and develop a
detailed plan for the software before implementation (coding) begins, and avoid
significant design changes and re-coding in later stages of the software development
lifecycle.

There are significant advantages and disadvantages to the various methodologies, and the
best approach to solving a problem using software will often depend on the type of
problem. If the problem is well understood and a solution can be effectively planned out
ahead of time, the more "waterfall" based approach may work the best. If, on the other
hand, the problem is unique (at least to the development team) and the structure of the
software solution cannot be easily envisioned, then a more "extreme" incremental
approach may work best. A software development process is a structure imposed on the
development of a software product. Synonyms include software life cycle and software
process. There are several models for such processes, each describing approaches to a
variety of tasks or activities that take place during the process.

Software development topics


Marketing

The sources of ideas for software products are legion. These ideas can come from market
research including the demographics of potential new customers, existing customers,
sales prospects who rejected the product, other internal software development staff, or a
creative third party. Ideas for software products are usually first evaluated by marketing
personnel for economic feasibility, for fit with existing distribution channels, for possible
effects on existing product lines, for required features, and for fit with the company's
marketing objectives. In a marketing evaluation phase, the cost and time assumptions are
evaluated, and a decision is reached early in the first phase as to whether, based on
the more detailed information generated by the marketing and development staff, the
project should be pursued further.

In the book "Great Software Debates", Alan M. Davis states in the chapter
"Requirements", subchapter "The Missing Piece of Software Development":

“Students of engineering learn engineering and are rarely exposed to finance or
marketing. Students of marketing learn marketing and are rarely exposed to
finance or engineering. Most of us become specialists in just one area. To
complicate matters, few of us meet interdisciplinary people in the workforce, so
there are few roles to mimic. Yet, software product planning is critical to the
development success and absolutely requires knowledge of multiple disciplines. ”
Because software development may involve compromising or going beyond what is
required by the client, a software development project may stray into less technical
concerns such as human resources, risk management, intellectual property, budgeting,
crisis management, etc. These processes may also cause the role of business development
to overlap with software development.

Software development methodology

A software development methodology is a framework that is used to structure, plan, and
control the process of developing information systems. A wide variety of such
frameworks have evolved over the years, each with its own recognized strengths and
weaknesses. One system development methodology is not necessarily suitable for use by
all projects. Each of the available methodologies is best suited to specific kinds of
projects, based on various technical, organizational, project and team considerations.

Recent trends in the sector


Given the rapid growth of this sector, several companies have started to use offshore
development in China, India and other countries with a lower cost per developer model.
Several new Web 2.0 platforms and sites are now developed offshore while management
is located in Western countries. The advantages mostly revolve around better cost-control
over the process, which means that there is lower cash-outflow (often the biggest struggle
for startups). Furthermore, the time difference when working with India and China for the
Western world allows work to be done round the clock adding a competitive advantage.
Notable firms that are involved in development include Tata Consultancy Services,
Infosys, Wipro, and Satyam.

Software testing
Software Testing is an empirical investigation conducted to provide stakeholders with
information about the quality of the product or service under test, with respect to the
context in which it is intended to operate. Software Testing also provides an objective,
independent view of the software to allow the business to appreciate and understand the
risks at implementation of the software. Test techniques include, but are not limited to,
the process of executing a program or application with the intent of finding software
bugs. Software Testing can also be stated as the process of validating and verifying that a
software program/application/product (1) meets the business and technical requirements
that guided its design and development; (2) works as expected; and (3) can be
implemented with the same characteristics.

Software Testing, depending on the testing method employed, can be implemented at any
time in the development process; however, most of the test effort occurs after the
requirements have been defined and the coding process has been completed.

Overview
Testing can never completely identify all the defects within software. Instead, it furnishes
a criticism or comparison that compares the state and behavior of the product against
oracles—principles or mechanisms by which someone might recognize a problem. These
oracles may include (but are not limited to) specifications, contracts, comparable
products, past versions of the same product, inferences about intended or expected
purpose, user or customer expectations, relevant standards, applicable laws, or other
criteria.

Every software product has a target audience. For example, the audience for video game
software is completely different from banking software. Therefore, when an organization
develops or otherwise invests in a software product, it can assess whether the software
product will be acceptable to its end users, its target audience, its purchasers, and other
stakeholders. Software testing is the process of attempting to make this assessment.

A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy
$59.5 billion annually. More than a third of this cost could be avoided if better software
testing was performed.

History
The separation of debugging from testing was initially introduced by Glenford J. Myers
in 1979. Although his attention was on breakage testing ("a successful test is one that
finds a bug"), it illustrated the desire of the software engineering community to separate
fundamental development activities, such as debugging, from that of verification. Dave
Gelperin and William C. Hetzel classified in 1988 the phases and goals in software
testing in the following stages:

 Until 1956 - Debugging oriented
 1957–1978 - Demonstration oriented
 1979–1982 - Destruction oriented
 1983–1987 - Evaluation oriented
 1988–2000 - Prevention oriented

Software testing topics


Scope

A primary purpose for testing is to detect software failures so that defects may be
uncovered and corrected. This is a non-trivial pursuit. Testing cannot establish that a
product functions properly under all conditions but can only establish that it does not
function properly under specific conditions. The scope of software testing often includes
examination of code as well as execution of that code in various environments and
conditions as well as examining the aspects of code: does it do what it is supposed to do
and do what it needs to do. In the current culture of software development, a testing
organization may be separate from the development team. There are various roles for
testing team members. Information derived from software testing may be used to correct
the process by which software is developed.

Defects and failures

Not all software defects are caused by coding errors. One common source of expensive
defects is caused by requirement gaps, e.g., unrecognized requirements, that result in
errors of omission by the program designer. A common source of requirements gaps is
non-functional requirements such as testability, scalability, maintainability, usability,
performance, and security.

Software faults occur through the following processes. A programmer makes an error
(mistake), which results in a defect (fault, bug) in the software source code. If this defect
is executed, in certain situations the system will produce wrong results, causing a failure.
Not all defects will necessarily result in failures. For example, defects in dead code will
never result in failures. A defect can turn into a failure when the environment is changed.
Examples of these changes in environment include the software being run on a new
hardware platform, alterations in source data or interacting with different software. A
single defect may result in a wide range of failure symptoms.
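
The hypothetical fragment below illustrates this error-defect-failure chain: the
programmer's mistake leaves a defect in the code, but an observable failure only occurs
for a particular input. The class and values are illustrative, not taken from the text.

import java.util.List;

// Hypothetical illustration of the error -> defect -> failure chain.
public class AverageCalculator {

    // The programmer's mistake (error): forgetting to handle an empty list.
    // The resulting flaw in the code (defect) lies dormant until the method
    // is executed with an empty list, when it returns NaN instead of a
    // sensible value -- the observable failure.
    static double average(List<Integer> values) {
        double sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum / values.size();
    }

    public static void main(String[] args) {
        System.out.println(average(List.of(2, 4)));  // 3.0 -- no failure
        System.out.println(average(List.of()));      // NaN -- the defect surfaces
    }
}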

Compatibility

A frequent cause of software failure is a lack of compatibility with another application, a
new operating system, or, increasingly, a new web browser version. In the case of a lack
of backward compatibility, this can occur because the programmers have only considered
coding and testing their software against "the latest version of" this-or-that operating
system. The unintended consequence is that their latest work might not be fully
compatible with earlier combinations of software and hardware, or with another
important operating system. In any case, these differences, whatever they might be, may
result in unintended software failures, as witnessed by a significant population of
computer users.

This could be considered a "prevention oriented strategy" that fits well with the latest
testing phase suggested by Dave Gelperin and William C. Hetzel, as cited above.

Input combinations and preconditions

A very fundamental problem with software testing is that testing under all combinations
of inputs and preconditions (initial state) is not feasible, even with a simple product. This
means that the number of defects in a software product can be very large and defects that
occur infrequently are difficult to find in testing. More significantly, non-functional
dimensions of quality (how it is supposed to be versus what it is supposed to do)—
usability, scalability, performance, compatibility, reliability—can be highly subjective;
something that constitutes sufficient value to one person may be intolerable to another.

Static vs. dynamic testing


There are many approaches to software testing. Reviews, walkthroughs, or inspections
are considered as static testing, whereas actually executing programmed code with a
given set of test cases is referred to as dynamic testing. Static testing can be (and
unfortunately in practice often is) omitted. Dynamic testing takes place when the program
itself is used for the first time (which is generally considered the beginning of the testing
stage). Dynamic testing may begin before the program is 100% complete in order to test
particular sections of code (modules or discrete functions). Typical techniques for this are
either using stubs/drivers or execution from a debugger environment. For example,
Spreadsheet programs are, by their very nature, tested to a large extent interactively ("on
the fly"), with results displayed immediately after each calculation or text manipulation.

Software verification and validation

Software testing is used in association with verification and validation:

 Verification: Have we built the software right? (i.e., does it match the
specification).
 Validation: Have we built the right software? (i.e., is this what the customer
wants).

The terms verification and validation are commonly used interchangeably in the industry;
it is also common to see these two terms incorrectly defined. According to the IEEE
Standard Glossary of Software Engineering Terminology:

Verification is the process of evaluating a system or component to determine
whether the products of a given development phase satisfy the conditions imposed
at the start of that phase.
Validation is the process of evaluating a system or component during or at the end
of the development process to determine whether it satisfies specified
requirements.

The software testing team

Software testing can be done by software testers. Until the 1980s the term "software
tester" was used generally, but later it was also seen as a separate profession. Regarding
the periods and the different goals in software testing, different roles have been
established: manager, test lead, test designer, tester, automation developer, and test
administrator.

Software Quality Assurance (SQA)

Though controversial, software testing may be viewed as an important part of the
software quality assurance (SQA) process. In SQA, software process specialists and
auditors take a broader view on software and its development. They examine and change
the software engineering process itself to reduce the amount of faults that end up in the
delivered software: the so-called defect rate.
What constitutes an "acceptable defect rate" depends on the nature of the software. For
example, an arcade video game designed to simulate flying an airplane would
presumably have a much higher tolerance for defects than mission critical software such
as that used to control the functions of an airliner that really is flying!

Although there are close links with SQA, testing departments often exist independently,
and there may be no SQA function in some companies.

Software Testing is a task intended to detect defects in software by contrasting a
computer program's expected results with its actual results for a given set of inputs. By
contrast, QA (Quality Assurance) is the implementation of policies and procedures
intended to prevent defects from occurring in the first place.

Testing methods
Approach of boxes

Software testing methods are traditionally divided into black box testing and white box
testing. These two approaches are used to describe the point of view that a test engineer
takes when designing test cases.

Black box testing

Black box testing treats the software as a "black box"—without any knowledge of
internal implementation. Black box testing methods include: equivalence partitioning,
boundary value analysis, all-pairs testing, fuzz testing, model-based testing, traceability
matrix, exploratory testing and specification-based testing.

Specification-based testing: Specification-based testing aims to test the
functionality of software according to the applicable requirements. Thus, the
tester inputs data into, and only sees the output from, the test object. This level of
testing usually requires thorough test cases to be provided to the tester, who then
can simply verify that for a given input, the output value (or behavior), either "is"
or "is not" the same as the expected value specified in the test case.
Specification-based testing is necessary, but it is insufficient to guard against
certain risks.
Advantages and disadvantages: The black box tester has no "bonds" with the
code, and a tester's perception is very simple: a code must have bugs. Using the
principle, "Ask and you shall receive," black box testers find bugs where
programmers do not. But, on the other hand, black box testing has been said to be
"like a walk in a dark labyrinth without a flashlight," because the tester doesn't
know how the software being tested was actually constructed. As a result, there
are situations when (1) a tester writes many test cases to check something that
could have been tested by only one test case, and/or (2) some parts of the back-
end are not tested at all.
Therefore, black box testing has the advantage of "an unaffiliated opinion," on the one
hand, and the disadvantage of "blind exploring," on the other.
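
As a small, hedged sketch of specification-based (black box) testing, the test below is
written purely in terms of inputs and expected outputs; the method under test and the
expected values are hypothetical, and a black-box tester would never look at the method
body (it is shown here only so the example compiles). Note that the inputs include a
boundary value at exactly 50.00.

import java.util.Map;

// Hypothetical black-box (specification-based) test: only the inputs and the
// outputs required by the specification are known to the tester.
public class BlackBoxShippingTest {

    // The unit under test, shown only so the example compiles; a black-box
    // tester would not inspect this body.
    static double shippingCost(double orderTotal) {
        return orderTotal >= 50.0 ? 0.0 : 4.99;
    }

    public static void main(String[] args) {
        // Expected outputs taken from the (hypothetical) specification.
        Map<Double, Double> expected = Map.of(
                10.0, 4.99,
                49.99, 4.99,
                50.0, 0.0,
                120.0, 0.0);

        expected.forEach((input, want) -> {
            double got = shippingCost(input);
            System.out.printf("input=%.2f expected=%.2f actual=%.2f %s%n",
                    input, want, got,
                    Math.abs(got - want) < 1e-9 ? "PASS" : "FAIL");
        });
    }
}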

White box testing

White box testing is when the tester has access to the internal data structures and
algorithms including the code that implement these.

Types of white box testing


The following types of white box testing exist:

 API testing (application programming interface) - Testing of the
application using Public and Private APIs
 Code coverage - creating tests to satisfy some criteria of code coverage
(e.g., the test designer can create tests to cause all statements in the
program to be executed at least once)
 Fault injection methods
 Mutation testing methods
 Static testing - White box testing includes all static testing

Code completeness evaluation


White box testing methods can also be used to evaluate the completeness of a test
suite that was created with black box testing methods. This allows the software
team to examine parts of a system that are rarely tested and ensures that the most
important function points have been tested.
Two common forms of code coverage are:

 Function coverage, which reports on functions executed
 Statement coverage, which reports on the number of lines executed to
complete the test

They both return a code coverage metric, measured as a percentage.
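
For instance, under the assumptions below (a hypothetical three-statement method and a
single test input), the two metrics are simply the proportion of functions and statements
that the test actually executes.

// Hypothetical illustration of the two coverage metrics for a tiny unit.
public class CoverageExample {

    // Three statements of interest: S1, S2 and S3.
    static int clamp(int x) {
        int result = x;          // S1
        if (x < 0) {
            result = 0;          // S2
        }
        return result;           // S3
    }

    public static void main(String[] args) {
        clamp(7);   // executes S1 and S3, but not S2

        // Function coverage: clamp() has been executed, so it is covered.
        // Statement coverage: 2 of 3 statements executed -> about 67%.
        System.out.println("statement coverage = " + (2.0 / 3.0) * 100 + "%");
        // A second test, clamp(-1), would also execute S2 and raise
        // statement coverage to 100%.
    }
}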

Grey Box Testing

Grey box testing involves having access to internal data structures and algorithms for
purposes of designing the test cases, but testing at the user, or black-box level.
Manipulating input data and formatting output do not qualify as grey box, because the
input and output are clearly outside of the "black-box" that we are calling the system
under test. This distinction is particularly important when conducting integration testing
between two modules of code written by two different developers, where only the
interfaces are exposed for test. However, modifying a data repository does qualify as grey
box, as the user would not normally be able to change the data outside of the system
under test. Grey box testing may also include reverse engineering to determine, for
instance, boundary values or error messages.
Integration Testing

Integration testing

Integration testing is any type of software testing that seeks to uncover conflicts between
individual software modules. Such integration defects can arise when new modules are
developed in separate branches and then integrated into the main project.

Regression Testing

Regression testing

Regression testing is any type of software testing that seeks to uncover software
regressions. Such regressions occur whenever software functionality that was previously
working correctly stops working as intended. Typically, regressions occur as an
unintended consequence of program changes, when the newly developed part of the
software conflicts with previously existing code. Common methods of regression testing
include re-running previously run tests and checking whether previously fixed faults have
re-emerged. The depth of testing depends on the phase in the release process and the risk
of the added features. It can range from complete, for changes added late in the release
or deemed to be risky, to very shallow, consisting of positive tests on each feature, if the
changes are early in the release or deemed to be of low risk.

Acceptance testing

Acceptance testing

Acceptance testing can mean one of two things:

1. A smoke test is used as an acceptance test prior to introducing a new build to the
main testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often in their lab environment on
their own HW, is known as user acceptance testing (UAT).

Non Functional Software Testing

Special methods exist to test non-functional aspects of software.

 Performance testing checks to see if the software can handle large quantities of
data or users. This is generally referred to as software scalability. This activity of
Non Functional Software Testing is often referred to as Endurance Testing.
 Stability testing checks to see if the software can continuously function well in or
above an acceptable period. This activity of Non Functional Software Testing is
oftentimes referred to as load (or endurance) testing.
 Usability testing is needed to check if the user interface is easy to use and
understand.
 Security testing is essential for software that processes confidential data to
prevent system intrusion by hackers.
 Internationalization and localization is needed to test these aspects of software, for
which a pseudolocalization method can be used.

In contrast to functional testing, which establishes the correct operation of the software
(correct in that it matches the expected behavior defined in the design requirements), non-
functional testing verifies that the software functions properly even when it receives
invalid or unexpected inputs. Software fault injection, in the form of fuzzing, is an
example of non-functional testing. Non-functional testing, especially for software, is
designed to establish whether the device under test can tolerate invalid or unexpected
inputs, thereby establishing the robustness of input validation routines as well as error-
handling routines. Various commercial non-functional testing tools are linked from the
Software fault injection page; there are also numerous open-source and free software
tools available that perform non-functional testing.
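
As a very small, hedged sketch of fault injection by fuzzing (the parsing routine below is
hypothetical), the loop feeds randomly generated strings into the routine and only checks
that it fails gracefully, with a documented exception, rather than failing in an
uncontrolled way.

import java.util.Arrays;
import java.util.Random;

// Hypothetical fuzzing sketch: random inputs probe the robustness of input
// validation and error handling, not functional correctness.
public class MiniFuzzer {

    // Stand-in for the routine whose robustness is being tested.
    static int parsePositiveInt(String s) {
        int value = Integer.parseInt(s.trim());
        if (value < 0) {
            throw new IllegalArgumentException("negative: " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        Random random = new Random(42);
        for (int i = 0; i < 1000; i++) {
            byte[] bytes = new byte[random.nextInt(8)];
            random.nextBytes(bytes);
            String input = new String(bytes);
            try {
                parsePositiveInt(input);
            } catch (IllegalArgumentException expected) {
                // Documented, graceful failure (includes NumberFormatException).
            } catch (RuntimeException unexpected) {
                System.out.println("robustness defect for input bytes: "
                        + Arrays.toString(bytes));
            }
        }
        System.out.println("fuzzing run finished");
    }
}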

Destructive testing

Destructive testing attempts to cause the software or a sub-system to fail, in order to test
its robustness.

Testing process
A common practice of software testing is performed by an independent group of testers
after the functionality is developed before it is shipped to the customer. This practice
often results in the testing phase being used as project buffer to compensate for project
delays, thereby compromising the time devoted to testing. Another practice is to start
software testing at the same moment the project starts and it is a continuous process until
the project finishes.

In counterpoint, some emerging software disciplines such as extreme programming and
the agile software development movement adhere to a "test-driven software
development" model. In this process, unit tests are written first, by the software engineers
(often with pair programming in the extreme programming methodology). Of course
these tests fail initially, as they are expected to. Then as code is written it passes
incrementally larger portions of the test suites. The test suites are continuously updated as
new failure conditions and corner cases are discovered, and they are integrated with any
regression tests that are developed. Unit tests are maintained along with the rest of the
software source code and generally integrated into the build process (with inherently
interactive tests being relegated to a partially manual build acceptance process).
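
A minimal sketch of this test-first flow, assuming a JUnit 5 style framework on the
classpath (the class and method names are hypothetical): the tests are written before the
production code exists and fail until PriceCalculator is filled in.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical test-first sketch, assuming a JUnit 5 style framework.
// The tests are written before PriceCalculator exists (or while it still
// returns a placeholder value), so they fail initially, as expected.
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountAtOrAboveOneHundred() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.discountedPrice(100.0), 0.001);
    }

    @Test
    void leavesSmallOrdersUnchanged() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(40.0, calculator.discountedPrice(40.0), 0.001);
    }
}

// Minimal production code written afterwards to make the tests pass.
class PriceCalculator {
    double discountedPrice(double amount) {
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}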

Testing can be done on the following levels:


 Unit testing tests the minimal software component, or module. Each unit (basic
component) of the software is tested to verify that the detailed design for the unit
has been correctly implemented. In an object-oriented environment, this is usually
at the class level, and the minimal unit tests include the constructors and
destructors.
 Integration testing exposes defects in the interfaces and interaction between
integrated components (modules). Progressively larger groups of tested software
components corresponding to elements of the architectural design are integrated
and tested until the software works as a system.
 System testing tests a completely integrated system to verify that it meets its
requirements.
 System integration testing verifies that a system is integrated to any external or
third party systems defined in the system requirements.

Before shipping the final version of software, alpha and beta testing are often done
additionally:

 Alpha testing is simulated or actual operational testing by potential
users/customers or an independent test team at the developers' site. Alpha testing
is often employed for off-the-shelf software as a form of internal acceptance
testing, before the software goes to beta testing.
 Beta testing comes after alpha testing. Versions of the software, known as beta
versions, are released to a limited audience outside of the programming team. The
software is released to groups of people so that further testing can ensure the
product has few faults or bugs. Sometimes, beta versions are made available to
the open public to increase the feedback field to a maximal number of future
users.

Finally, acceptance testing can be conducted by the end-user, customer, or client to
validate whether or not to accept the product. Acceptance testing may be performed as
part of the hand-off process between any two phases of development.

Regression testing

Regression testing

After modifying software, either for a change in functionality or to fix defects, a
regression test re-runs previously passing tests on the modified software to ensure that the
modifications have not unintentionally caused a regression of previous functionality.
Regression testing can be performed at any or all of the above test levels. These
regression tests are often automated.

More specific forms of regression testing are known as sanity testing (which quickly
checks for bizarre behavior) and smoke testing (which tests for basic functionality).
Benchmarks may be employed during regression testing to ensure that the performance
of the newly modified software will be at least as acceptable as the earlier version or, in
the case of code optimization, that some real improvement has been achieved.

Finding faults
Finding faults early

It is commonly believed that the earlier a defect is found the cheaper it is to fix it. The
following table shows the cost of fixing the defect depending on the stage it was found.
For example, if a problem in the requirements is found only post-release, then it would
cost 10–100 times more to fix than if it had already been found by the requirements
review.

                                         Time Detected
Time Introduced     Requirements   Architecture   Construction   System Test   Post-Release
Requirements             1×             3×            5–10×          10×         10–100×
Architecture             -              1×             10×           15×         25–100×
Construction             -              -               1×           10×         10–25×

Testing Tools

Test automation

Program testing and fault detection can be aided significantly by testing tools and
debuggers. Testing/debug tools include features such as:

 Program monitors, permitting full or partial monitoring of program code


including:
o Instruction Set Simulator, permitting complete instruction level
monitoring and trace facilities
o Program animation, permitting step-by-step execution and conditional
breakpoint at source level or in machine code
o Code coverage reports
 Formatted dump or Symbolic debugging, tools allowing inspection of program
variables on error or at chosen points
 Automated functional GUI testing tools are used to repeat system-level tests
through the GUI
 Benchmarks, allowing run-time performance comparisons to be made
 Performance analysis (or profiling tools) that can help to highlight hot spots and
resource usage

Some of these features may be incorporated into an Integrated Development Environment
(IDE).

Measuring software testing

Usually, quality is constrained to such topics as correctness, completeness, security, but
can also include more technical requirements as described under the ISO standard ISO
9126, such as capability, reliability, efficiency, portability, maintainability, compatibility,
and usability.

There are a number of common software measures, often called "metrics", which are used
to measure the state of the software or the adequacy of the testing.

Testing artifacts

Software testing process can produce several artifacts.

Test plan
A test specification is called a test plan. The developers are well aware of what test
plans will be executed, and this information is made available to management and
the developers. The idea is to make them more cautious when developing their code
or making additional changes. Some companies have a higher-level document
called a test strategy.
Traceability matrix
A traceability matrix is a table that correlates requirements or design documents
to test documents. It is used to change tests when the source documents are
changed, or to verify that the test results are correct.
Test case
A test case normally consists of a unique identifier, requirement references from a
design specification, preconditions, events, a series of steps (also known as
actions) to follow, input, output, expected result, and actual result. Clinically
defined, a test case is an input and an expected result. This can be as pragmatic as
'for condition x your derived result is y', whereas other test cases describe the
input scenario and the expected results in more detail. A test case can
occasionally be a series of steps (but often the steps are contained in a separate test
procedure that can be exercised against multiple test cases, as a matter of
economy) with one expected result or expected outcome. The optional fields
are a test case ID, test step or order of execution number, related requirement(s),
depth, test category, author, and check boxes for whether the test is automatable
and has been automated. Larger test cases may also contain prerequisite states or
steps, and descriptions. A test case should also contain a place for the actual
result. These steps can be stored in a word processor document, spreadsheet,
database, or other common repository. In a database system, you may also be able
to see past test results, who generated the results, and what system configuration
was used to generate those results. These past results would usually be stored in a
separate table. (A minimal sketch of how such artifacts might be represented
appears after this list.)
Test script
The test script is the combination of a test case, test procedure, and test data.
Initially the term was derived from the product of work created by automated
regression test tools. Today, test scripts can be manual, automated, or a
combination of both.
Test suite
The most common term for a collection of test cases is a test suite. The test suite
often also contains more detailed instructions or goals for each collection of test
cases. It definitely contains a section where the tester identifies the system
configuration used during testing. A group of test cases may also contain
prerequisite states or steps, and descriptions of the following tests.
Test data
In most cases, multiple sets of values or data are used to test the same
functionality of a particular feature. All the test values and changeable
environmental components are collected in separate files and stored as test data. It
is also useful to provide this data to the client along with the product or project.
Test harness
The software, tools, samples of data input and output, and configurations are all
referred to collectively as a test harness.
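
As a minimal sketch of how a few of these artifacts might be represented in practice, the
Python fragment below models a test case, a test suite that records the system
configuration, separate test data, and a simple traceability matrix; every field name and
value here is an illustrative assumption rather than a prescribed format.

# Illustrative, simplified representations of common testing artifacts.
test_data = {"valid_login": {"user": "alice", "password": "secret"}}  # assumed sample data

test_case = {
    "id": "TC-001",                       # unique identifier
    "requirement": "REQ-LOGIN-01",        # reference back to the specification
    "preconditions": "User account exists and is active",
    "steps": ["Open login page", "Enter credentials", "Press Login"],
    "input": test_data["valid_login"],
    "expected_result": "User is taken to the dashboard",
    "actual_result": None,                # filled in when the test is executed
}

test_suite = {
    "name": "Login regression suite",
    "system_configuration": "Build 1.4.2 on Ubuntu 22.04, Chrome 120",  # assumed
    "test_cases": [test_case],
}

# A traceability matrix can be as simple as a mapping from requirements to test cases.
traceability_matrix = {"REQ-LOGIN-01": ["TC-001"]}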

A sample testing cycle

Although variations exist between organizations, there is a typical cycle for testing:

 Requirements analysis: Testing should begin in the requirements phase of the
software development life cycle. During the design phase, testers work with
developers in determining what aspects of a design are testable and with what
parameters those tests work.
 Test planning: Test strategy, test plan, testbed creation. Since many activities
will be carried out during testing, a plan is needed.
 Test development: Test procedures, test scenarios, test cases, test datasets, test
scripts to use in testing software.
 Test execution: Testers execute the software based on the plans and tests and
report any errors found to the development team.
 Test reporting: Once testing is completed, testers generate metrics and make
final reports on their test effort and whether or not the software tested is ready for
release.
 Test result analysis: or defect analysis, is done by the development team, usually
along with the client, in order to decide which defects should be treated, fixed,
rejected (i.e. the software is found to be working properly), or deferred to be dealt
with later.
 Defect Retesting: Once a defect has been dealt with by the development team, it
is retested by the testing team.
 Regression testing: It is common to have a small test program built of a subset of
tests, for each integration of new, modified, or fixed software, in order to ensure
that the latest delivery has not ruined anything, and that the software product as a
whole is still working correctly.
 Test Closure: Once the test meets the exit criteria, the activities such as capturing
the key outputs, lessons learned, results, logs, documents related to the project are
archived and used as a reference for future projects.

Certifications
Several certification programs exist to support the professional aspirations of software
testers and quality assurance specialists. No certification currently offered actually
requires the applicant to demonstrate the ability to test software. No certification is based
on a widely accepted body of knowledge. This has led some to declare that the testing
field is not ready for certification. Certification itself cannot measure an individual's
productivity, skill, or practical knowledge, and cannot guarantee their competence or
professionalism as a tester.

Software testing certification types

 Exam-based: Formalized exams, which need to be passed; can also be
learned by self-study [e.g., for ISTQB or QAI]
 Education-based: Instructor-led sessions, where each course has to be
passed [e.g., International Institute for Software Testing (IIST)].

Testing certifications

 CATe offered by the International Institute for Software Testing


 Certified Software Tester (CSTE) offered by the Quality Assurance
Institute (QAI)
 Certified Software Test Professional (CSTP) offered by the International
Institute for Software Testing
 CSTP (TM) (Australian Version) offered by K. J. Ross & Associates
 ISEB offered by the Information Systems Examinations Board
 ISTQB Certified Tester, Foundation Level (CTFL) offered by the
International Software Testing Qualification Board
 ISTQB Certified Tester, Advanced Level (CTAL) offered by the
International Software Testing Qualification Board
 TMPF TMap Next Foundation offered by the Examination Institute for
Information Science

Quality assurance certifications

 CSQE offered by the American Society for Quality (ASQ)


 CSQA offered by the Quality Assurance Institute (QAI)
 CQIA offered by the American Society for Quality (ASQ)
 CMSQ offered by the Quality Assurance Institute (QAI)

Controversy
Some of the major software testing controversies include:

What constitutes responsible software testing?
Members of the "context-driven" school of testing believe that there are no "best
practices" of testing, but rather that testing is a set of skills that allow the tester to
select or invent testing practices to suit each unique situation.
Agile vs. traditional
Should testers learn to work under conditions of uncertainty and constant change
or should they aim at process "maturity"? The agile testing movement has
grown in popularity since 2006, mainly in commercial circles, whereas
government and military software providers have been slower to embrace this
methodology and mostly still hold to CMMI.
Exploratory test vs. scripted
Should tests be designed at the same time as they are executed or should they be
designed beforehand?
Manual testing vs. automated
Some writers believe that test automation is so expensive relative to its value that
it should be used sparingly. Others, such as advocates of agile development,
recommend automating 100% of all tests. In particular, test-driven
development holds that developers should write unit tests of the xUnit type
before coding the functionality. The tests can then be considered as a way to
capture and implement the requirements.
Software design vs. software implementation
Should testing be carried out only at the end or throughout the whole process?
Who watches the watchmen?
The idea is that any form of observation is also an interaction—the act of testing
can also affect that which is being tested.

Chapter-4
Software development process, Computer-aided
Software engineering Software quality

Software development process


A software development process is a structure imposed on the development of a
software product. Synonyms include software life cycle and software process. There are
several models for such processes, each describing approaches to a variety of tasks or
activities that take place during the process.

Overview
A large and growing number of software development organizations implement process
methodologies. Many of them are in the defense industry, which in the U.S. requires a
rating based on 'process models' to obtain contracts.

The international standard for describing the method of selecting, implementing and
monitoring the life cycle for software is ISO 12207.

A decades-long goal has been to find repeatable, predictable processes that improve
productivity and quality. Some try to systematize or formalize the seemingly unruly task
of writing software. Others apply project management techniques to writing software.
Without project management, software projects can easily be delivered late or over
budget. With large numbers of software projects not meeting their expectations in terms
of functionality, cost, or delivery schedule, effective project management appears to be
lacking.

Organizations may create a Software Engineering Process Group (SEPG), which is the
focal point for process improvement. Composed of line practitioners who have varied
skills, the group is at the center of the collaborative effort of everyone in the organization
who is involved with software engineering process improvement.

Software development activities

The activities of the software development process represented in the waterfall model.
There are several other models to represent this process.
Planning

The important task in creating a software product is extracting the requirements, or
requirements analysis. Customers typically have an abstract idea of what they want as an
end result, but not what the software should do. Incomplete, ambiguous, or even
contradictory requirements are recognized by skilled and experienced software engineers
at this point. Frequently demonstrating live code may help reduce the risk that the
requirements are incorrect.

Once the general requirements are gleaned from the client, an analysis of the scope of the
development should be determined and clearly stated. This is often called a scope
document. Certain functionality may be out of scope of the project as a function of cost
or as a result of unclear requirements at the start of development. If the development is
done externally, this document can be considered a legal document so that if there are
ever disputes, any ambiguity of what was promised to the client can be clarified.

Design

Domain Analysis is often the first step in attempting to design a new piece of software,
whether it be an addition to existing software, a new application, a new subsystem, or a
whole new system. Assuming that the developers (including the analysts) are not
sufficiently knowledgeable in the subject area of the new software, the first task is to
investigate the so-called "domain" of the software. The more knowledgeable they are
about the domain already, the less work required. Another objective of this work is to
make the analysts, who will later try to elicit and gather the requirements from the area
experts, speak with them in the domain's own terminology, facilitating a better
understanding of what is being said by these experts. If the analyst does not use the
proper terminology it is likely that they will not be taken seriously, thus this phase is an
important prelude to extracting and gathering the requirements. If an analyst has not done
the appropriate work, confusion may ensue: "I know you believe you understood what you
think I said, but I am not sure you realize what you heard is not what I meant."

Specification

Specification is the task of precisely describing the software to be written, possibly in a
rigorous way. In practice, most successful specifications are written to understand and
fine-tune applications that were already well-developed, although safety-critical software
systems are often carefully specified prior to application development. Specifications are
most important for external interfaces that must remain stable. A good way to determine
whether the specifications are sufficiently precise is to have a third party review the
documents making sure that the requirements and Use Cases are logically sound.

Architecture

The architecture of a software system, or software architecture, refers to an abstract
representation of that system. Architecture is concerned with making sure the software
system will meet the requirements of the product, as well as ensuring that future
requirements can be addressed. The architecture step also addresses interfaces between
the software system and other software products, as well as the underlying hardware or
the host operating system.

Implementation, testing and documenting

Implementation is the part of the process where software engineers actually program the
code for the project.

Software testing is an integral and important part of the software development process.
This part of the process ensures that bugs are recognized as early as possible.

Documenting the internal design of software for the purpose of future maintenance and
enhancement is done throughout development. This may also include the authoring of an
API, be it external or internal.

Deployment and maintenance

Deployment starts after the code is appropriately tested, approved for release, and sold
or otherwise distributed into a production environment.

Software Training and Support is important because a large percentage of software
projects fail because the developers do not realize that it does not matter how much time
and planning a development team puts into creating software if nobody in the organization
ends up using it. People are often resistant to change and avoid venturing into an
unfamiliar area, so as a part of the deployment phase it is very important to have training
classes for new clients of the software.

Maintenance and enhancing software to cope with newly discovered problems or new
requirements can take far more time than the initial development of the software. It may
be necessary to add code that does not fit the original design to correct an unforeseen
problem or it may be that a customer is requesting more functionality and code can be
added to accommodate their requests. It is during this phase that customer calls come in
and you see whether your testing was extensive enough to uncover the problems before
customers do. If the labor cost of the maintenance phase exceeds 25% of the prior-phases'
labor cost, then it is likely that the overall quality, of at least one prior phase, is poor. In
that case, management should consider the option of rebuilding the system (or portions)
before maintenance cost is out of control.

Bug Tracking System tools are often deployed at this stage of the process to allow
development teams to interface with customer/field teams testing the software to identify
any real or perceived issues. These software tools, both open source and commercially
licensed, provide a customizable process to acquire, review, acknowledge, and respond to
reported issues.
Models
Iterative processes

Iterative development prescribes the construction of initially small but ever larger
portions of a software project to help all those involved to uncover important issues early
before problems or faulty assumptions can lead to disaster. Iterative processes are
preferred by commercial developers because they allow the potential of reaching the
design goals of a customer who does not know how to define what they want.

Agile software development

Agile software development processes are built on the foundation of iterative
development. To that foundation they add a lighter, more people-centric viewpoint than
traditional approaches. Agile processes use feedback, rather than planning, as their
primary control mechanism. The feedback is driven by regular tests and releases of the
evolving software.

Surveys have shown the potential for significant efficiency gains over the waterfall
method. For example, a survey published in August 2006 by VersionOne and the Agile
Alliance, based on polling more than 700 companies, claims a number of benefits for an
Agile approach. The survey was repeated in August 2007 with about 1,700 respondents.

XP: Extreme Programming

Extreme Programming (XP) is the best-known iterative process. In XP, the phases are
carried out in extremely small (or "continuous") steps compared to the older, "batch"
processes. The (intentionally incomplete) first pass through the steps might take a day or
a week, rather than the months or years of each complete step in the Waterfall model.
First, one writes automated tests, to provide concrete goals for development. Next is
coding (by a pair of programmers), which is complete when all the tests pass, and the
programmers can't think of any more tests that are needed. Design and architecture
emerge out of refactoring, and come after coding. Design is done by the same people who
do the coding. (Only the last feature - merging design and code - is common to all the
other agile processes.) The incomplete but functional system is deployed or demonstrated
for (some subset of) the users (at least one of which is on the development team). At this
point, the practitioners start again on writing tests for the next most important part of the
system.

Waterfall processes

The waterfall model shows a process, where developers are to follow these steps in order:

1. Requirements specification (AKA Verification or Analysis)


2. Design
3. Construction (AKA implementation or coding)
4. Integration
5. Testing and debugging (AKA validation)
6. Installation (AKA deployment)
7. Maintenance

After each step is finished, the process proceeds to the next step, just as builders don't
revise the foundation of a house after the framing has been erected.

There is a misconception that the process has no provision for correcting errors in early
steps (for example, in the requirements). In fact this is where the domain of requirements
management comes in, which includes change control. The counter-argument, made by
critics of the process, is the significantly increased cost of correcting problems through
the introduction of iterations. This is also the factor that extends delivery time and makes
this process increasingly unpopular even in high-risk projects.

This approach is used in high risk projects, particularly large defense contracts. The
problems in waterfall do not arise from "immature engineering practices, particularly in
requirements analysis and requirements management." Studies of the failure rate of the
DOD-STD-2167 specification, which enforced waterfall, have shown that the more
closely a project follows its process, specifically in up-front requirements gathering, the
more likely the project is to release features that are not used in their current form.

Often the supposed stages are part of a review between customer and supplier; the supplier
can, in fact, develop at risk and evolve the design, but must sell off the design at a key
milestone called the Critical Design Review (CDR). This shifts engineering burdens from
engineers to customers, who may have other skills.

Other models

Capability Maturity Model Integration


The Capability Maturity Model Integration (CMMI) is one of the leading models
and is based on best practice. Independent assessments grade organizations on how
well they follow their defined processes, not on the quality of those processes or
the software produced. CMMI has replaced CMM.
ISO 9000
ISO 9000 describes standards for formally organizing processes with
documentation.
ISO 15504
ISO 15504, also known as Software Process Improvement Capability
Determination (SPICE), is a "framework for the assessment of software
processes". This standard is aimed at setting out a clear model for process
comparison. SPICE is used much like CMMI. It models processes to manage,
control, guide and monitor software development. This model is then used to
measure what a development organization or project team actually does during
software development. This information is analyzed to identify weaknesses and
drive improvement. It also identifies strengths that can be continued or integrated
into common practice for that organization or team.
Six sigma
Six Sigma is a methodology to manage process variations that uses data and
statistical analysis to measure and improve a company's operational performance.
It works by identifying and eliminating defects in manufacturing and service-
related processes. The maximum permissible defect rate is 3.4 per one million
opportunities. However, Six Sigma is manufacturing-oriented and needs further
research on its relevance to software development.
Test Driven Development
Test Driven Development (TDD) is a useful output of the Agile camp but some
suggest that it raises a conundrum. TDD requires that a unit test be written for a
class before the class is written. It might be thought, then, that the class firstly has
to be "discovered" and secondly defined in sufficient detail to allow the write-test-
once-and-code-until-class-passes model that TDD actually uses. This would actually
run counter to Agile approaches, particularly (so-called) Agile Modeling,
where developers are still encouraged to code early, with light design. However,
to get the claimed benefits of TDD a full design down to class and responsibilities
(captured using, for example, Design By Contract) is not necessary. This would
count towards iterative development, with a design locked down, but not iterative
design - as heavy refactoring and re-engineering might negate the usefulness of
TDD.
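
As a minimal sketch of the write-test-first cycle described above, assuming a made-up
ShoppingCart class, the Python fragment below shows an xUnit-style unit test written
before the class it exercises.

import unittest


class ShoppingCartTest(unittest.TestCase):
    """Written first: this test defines the behaviour the class must satisfy."""

    def test_total_reflects_added_items(self):
        cart = ShoppingCart()
        cart.add("book", 12.50)
        cart.add("pen", 1.50)
        self.assertEqual(cart.total(), 14.00)


# Written second, and only until the test above passes.
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


if __name__ == "__main__":
    unittest.main()

In a strict red-green-refactor cycle the test is run and fails before the class exists at all;
the design of the class then emerges from making the test pass.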

Formal methods
Formal methods are mathematical approaches to solving software (and hardware)
problems at the requirements, specification and design levels. Examples of formal
methods include the B-Method, Petri nets, Automated theorem proving, RAISE and
VDM. Various formal specification notations are available, such as the Z notation. More
generally, automata theory can be used to build up and validate application behavior by
designing a system of finite state machines.

Finite state machine (FSM) based methodologies allow executable software specification
and by-passing of conventional coding (see virtual finite state machine or event driven
finite state machine).
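
As a hedged illustration of specifying behaviour with a finite state machine, the Python
sketch below encodes a small order-handling machine as a transition table and rejects
event sequences the specification does not allow; the states and events are assumptions
chosen only for this example.

# Transition table: (current_state, event) -> next_state.
TRANSITIONS = {
    ("new", "submit"): "submitted",
    ("submitted", "approve"): "approved",
    ("submitted", "reject"): "rejected",
    ("approved", "ship"): "shipped",
}


def run(events, state="new"):
    """Execute the machine; an unexpected event means the behaviour is invalid."""
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event '{event}' not allowed in state '{state}'")
        state = TRANSITIONS[key]
    return state


print(run(["submit", "approve", "ship"]))   # shipped
# run(["submit", "ship"]) would raise ValueError: the specification forbids it.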

Formal methods are most likely to be applied in avionics software, particularly where the
software is safety critical. Software safety assurance standards, such as DO178B demand
formal methods at the highest level of categorization (Level A).

Formalization of software development is creeping in elsewhere, with the application of
the Object Constraint Language (and specializations such as the Java Modeling Language)
and especially with Model-driven architecture, which allows execution of designs, if not
specifications.
Another emerging trend in software development is to write a specification in some form
of logic (usually a variation of FOL), and then to directly execute the logic as though it
were a program. The OWL language, based on Description Logic, is an example. There is
also work on mapping some version of English (or another natural language)
automatically to and from logic, and executing the logic directly. Examples are Attempto
Controlled English, and Internet Business Logic, which does not seek to control the
vocabulary or syntax. A feature of systems that support bidirectional English-logic
mapping and direct execution of the logic is that they can be made to explain their results,
in English, at the business or scientific level.

The Government Accountability Office, in a 2003 report on one of the Federal Aviation
Administration’s air traffic control modernization programs, recommends following the
agency’s guidance for managing major acquisition systems by

 establishing, maintaining, and controlling an accurate, valid, and current


performance measurement baseline, which would include negotiating all
authorized, unpriced work within 3 months;
 conducting an integrated baseline review of any major contract modifications
within 6 months; and
 preparing a rigorous life-cycle cost estimate, including a risk assessment, in
accordance with the Acquisition System Toolset’s guidance and identifying the
level of uncertainty inherent in the estimate.

Computer-aided software engineering


Computer-Aided Software Engineering (CASE), in the field of software engineering, is
the scientific application of a set of tools and methods to software, which is meant to
result in high-quality, defect-free, and maintainable software products. It also refers to
methods for the development of information systems together with automated tools that
can be used in the software development process.

Overview
The term "Computer-aided software engineering" (CASE) can refer to the software used
for the automated development of systems software, i.e., computer code. The CASE
functions include analysis, design, and programming. CASE tools automate methods for
designing, documenting, and producing structured computer code in the desired
programming language.

Two key ideas of Computer-aided Software System Engineering (CASE) are:

 the harboring of computer assistance in software development and/or software
maintenance processes, and
 an engineering approach to software development and/or maintenance.

Some typical CASE tools are:

 Configuration management tools


 Data modeling tools
 Model transformation tools
 Program transformation tools
 Refactoring tools
 Source code generation tools, and
 Unified Modeling Language

Many CASE tools not only output code but also generate other output typical of various
systems analysis and design methodologies such as

 data flow diagram


 entity relationship diagram
 logical schema
 Program specification
 SSADM.
 User documentation

History of CASE
The term CASE was originally coined by software company, Nastec Corporation of
Southfield, Michigan in 1982 with their original integrated graphics and text editor
GraphiText, which also was the first microcomputer-based system to use hyperlinks to
cross-reference text strings in documents — an early forerunner of today's web page link.
GraphiText's successor product, DesignAid was the first microprocessor-based tool to
logically and semantically evaluate software and system design diagrams and build a data
dictionary.

Under the direction of Albert F. Case, Jr. vice president for product management and
consulting, and Vaughn Frick, director of product management, the DesignAid product
suite was expanded to support analysis of a wide range of structured analysis and design
methodologies, notably Ed Yourdon and Tom DeMarco, Chris Gane and Trish Sarson,
Ward-Mellor (real-time) SA/SD, and Warnier-Orr (data driven).

The next entrant into the market was Excelerator from Index Technology in Cambridge,
Mass. While DesignAid ran on Convergent Technologies and later Burroughs Ngen
networked microcomputers, Index launched Excelerator on the IBM PC/AT platform.
While, at the time of launch, and for several years, the IBM platform did not support
networking or a centralized database as did the Convergent Technologies or Burroughs
machines, the allure of IBM was strong, and Excelerator came to prominence. Hot on the
heels of Excelerator were a rash of offerings from companies such as Knowledgeware
(James Martin, Fran Tarkenton and Don Addington), Texas Instrument's IEF and
Accenture's FOUNDATION toolset (METHOD/1, DESIGN/1, INSTALL/1, FCP).

CASE tools were at their peak in the early 1990s. At the time IBM had proposed
AD/Cycle which was an alliance of software vendors centered around IBM's Software
repository using IBM DB2 in mainframe and OS/2:

The application development tools can be from several sources: from IBM, from
vendors, and from the customers themselves. IBM has entered into relationships
with Bachman Information Systems, Index Technology Corporation, and
Knowledgeware, Inc. wherein selected products from these vendors will be
marketed through an IBM complementary marketing program to provide
offerings that will help to achieve complete life-cycle coverage.

With the decline of the mainframe, AD/Cycle and the Big CASE tools died off, opening
the market for the mainstream CASE tools of today. Interestingly, nearly all of the
leaders of the CASE market of the early 1990s ended up being purchased by Computer
Associates, including IEW, IEF, ADW, Cayenne, and Learmonth & Burchett
Management Systems (LBMS).

CASE Topics
Alfonso Fuggetta classified CASE into 3 categories:

1. Tools support only specific tasks in the software process.


2. Workbenches support only one or a few activities.
3. Environments support (a large part of) the software process.

Workbenches and environments are generally built as collections of tools. Tools can
therefore be either stand alone products or components of workbenches and
environments.

CASE tools

CASE tools are a class of software that automates many of the activities involved in
various life cycle phases. For example, when establishing the functional requirements of
a proposed application, prototyping tools can be used to develop graphic models of
application screens to assist end users to visualize how an application will look after
development. Subsequently, system designers can use automated design tools to
transform the prototyped functional requirements into detailed design documents.
Programmers can then use automated code generators to convert the design documents
into code. Automated tools can be used collectively, as mentioned, or individually. For
example, prototyping tools could be used to define application requirements that get
passed to design technicians who convert the requirements into detailed designs in a
traditional manner using flowcharts and narrative documents, without the assistance of
automated design software.

Existing CASE environments can be classified along four different dimensions:

1. Life-Cycle Support
2. Integration Dimension
3. Construction Dimension
4. Knowledge Based CASE dimension

Let us take the meaning of these dimensions along with their examples one by one:

Life-Cycle Based CASE Tools

This dimension classifies CASE Tools on the basis of the activities they support in the
information systems life cycle. They can be classified as Upper or Lower CASE tools.

 Upper CASE Tools: support strategic planning and construction of conceptual-
level products and ignore the design aspect. They support traditional diagrammatic
languages such as ER diagrams, data flow diagrams, structure charts, etc.
 Lower CASE Tools: concentrate on the back end activities of the software life
cycle and hence support activities like physical design, debugging, construction,
testing, integration of software components, maintenance, reengineering and
reverse engineering activities.

Integration Dimension

Three main CASE integration dimensions have been proposed:

1. CASE Framework
2. ICASE Tools
3. Integrated Project Support Environment(IPSE)

CASE Workbenches

Workbenches integrate several CASE tools into one application to support specific
software-process activities. Hence they achieve:

 a homogeneous and consistent interface (presentation integration).


 easy invocation of tools and tool chains (control integration).
 access to a common data set managed in a centralized way (data integration).

CASE workbenches can be further classified into following 8 classes:

1. Business planning and modeling


2. Analysis and design
3. User-interface development
4. Programming
5. Verification and validation
6. Maintenance and reverse engineering
7. Configuration management
8. Project management

CASE Environments

An environment is a collection of CASE tools and workbenches that supports the
software process. CASE environments are classified based on the focus/basis of
integration:

1. Toolkits
2. Language-centered
3. Integrated
4. Fourth generation
5. Process-centered

Toolkits

Toolkits are loosely integrated collections of products easily extended by aggregating
different tools and workbenches. Typically, the support provided by a toolkit is limited to
programming, configuration management and project management. The toolkit itself is an
environment extended from a basic set of operating system tools, for example, the
Unix Programmer's Work Bench and the VMS VAX Set. In addition, toolkits' loose
integration requires users to activate tools by explicit invocation or simple control
mechanisms. The resulting files are unstructured and could be in different formats,
therefore accessing a file from different tools may require explicit file format
conversion. However, since the only constraint for adding a new component is the
format of the files, toolkits can be easily and incrementally extended.

Language-centered

The environment itself is written in the programming language for which it was
developed, thus enabling users to reuse, customize and extend the environment. Integration
of code in different languages is a major issue for language-centered environments. Lack
of process and data integration is also a problem. The strengths of these environments
include a good level of presentation and control integration. Interlisp, Smalltalk, Rational,
and KEE are examples of language-centered environments.

Integrated

These environments achieve presentation integration by providing uniform, consistent,
and coherent tool and workbench interfaces. Data integration is achieved through the
repository concept: they have a specialized database managing all information produced
and accessed in the environment. Examples of integrated environments are IBM AD/Cycle
and DEC Cohesion.

Fourth generation

Fourth generation environments were the first integrated environments. They are sets of
tools and workbenches supporting the development of a specific class of program:
electronic data processing and business-oriented applications. In general, they include
programming tools, simple configuration management tools, document handling facilities
and, sometimes, a code generator to produce code in lower level languages. Informix
4GL and Focus fall into this category.

Process-centered

Environments in this category focus on process integration with other integration
dimensions as starting points. A process-centered environment operates by interpreting a
process model created by specialized tools. They usually consist of tools handling two
functions:

 Process-model execution, and


 Process-model production

Examples are East, Enterprise II, Process Wise, Process Weaver, and Arcadia.

Applications
All aspects of the software development life cycle can be supported by software tools,
and so the use of tools from across the spectrum can, arguably, be described as CASE;
from project management software through tools for business and functional analysis,
system design, code storage, compilers, translation tools, test software, and so on.

However, it is the tools that are concerned with analysis and design, and with using
design information to create parts (or all) of the software product, that are most
frequently thought of as CASE tools. CASE applied, for instance, to a database software
product, might normally involve:

 Modelling business / real world processes and data flow


 Development of data models in the form of entity-relationship diagrams
 Development of process and function descriptions
 Production of database creation SQL and stored procedures

Risks and associated controls


Common CASE risks and associated controls include:
 Inadequate Standardization: Linking CASE tools from different vendors (design
tool from Company X, programming tool from Company Y) may be difficult if
the products do not use standardized code structures and data classifications. File
formats can be converted, but usually not economically. Controls include using
tools from the same vendor, or using tools based on standard protocols and
insisting on demonstrated compatibility. Additionally, if organizations obtain
tools for only a portion of the development process, they should consider
acquiring them from a vendor that has a full line of products to ensure future
compatibility if they add more tools.

 Unrealistic Expectations: Organizations often implement CASE technologies to
reduce development costs. Implementing CASE strategies usually involves high
start-up costs. Generally, management must be willing to accept a long-term
payback period. Controls include requiring senior managers to define their
purpose and strategies for implementing CASE technologies.

 Quick Implementation: Implementing CASE technologies can involve a
significant change from traditional development environments. Typically,
organizations should not use CASE tools the first time on critical projects or
projects with short deadlines because of the lengthy training process.
Additionally, organizations should consider using the tools on smaller, less
complex projects and gradually implementing the tools to allow more training
time.

 Weak Repository Controls: Failure to adequately control access to CASE
repositories may result in security breaches or damage to the work documents,
system designs, or code modules stored in the repository. Controls include
protecting the repositories with appropriate access, version, and backup controls.

Software quality
In the context of software engineering, software quality measures how well software is
designed (quality of design), and how well the software conforms to that design (quality
of conformance), although there are several different definitions.

Whereas quality of conformance is concerned with implementation (see Software Quality
Assurance), quality of design measures how valid the design and requirements are in
creating a worthwhile product.

Definition
One of the challenges of Software Quality is that "everyone feels they understand it".

A definition in Steve McConnell's Code Complete divides software into two pieces:
internal and external quality characteristics. External quality characteristics are those
parts of a product that face its users, whereas internal quality characteristics are those that
do not.

Another definition by Dr. Tom DeMarco says "a product's quality is a function of how
much it changes the world for the better." This can be interpreted as meaning that user
satisfaction is more important than anything in determining software quality.

Another definition, coined by Gerald Weinberg in Quality Software Management:
Systems Thinking, is "Quality is value to some person." This definition stresses that
quality is inherently subjective - different people will experience the quality of the same
software very differently. One strength of this definition is the questions it invites
software teams to consider, such as "Who are the people we want to value our software?"
and "What will be valuable to them?"

History
Software product quality

 Product quality
o conformance to requirements or program specification; related to
Reliability
 Scalability
 Correctness
 Completeness
 Absence of bugs
 Fault-tolerance
o Extensibility
o Maintainability
 Documentation

Source code quality

A computer has no concept of "well-written" source code. However, from a human point
of view source code can be written in a way that has an effect on the effort needed to
comprehend its behavior. Many source code programming style guides, which often
stress readability and usually language-specific conventions, are aimed at reducing the
cost of source code maintenance. Some of the issues that affect code quality, illustrated in
the short sketch after the lists below, include:

 Readability
 Ease of maintenance, testing, debugging, fixing, modification and portability
 Low complexity
 Low resource consumption: memory, CPU
 Number of compilation or lint warnings
 Robust input validation and error handling, established by software fault injection

Methods to improve the quality:


 Refactoring
 Code Inspection or software review
 Documenting code
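
A minimal sketch, built around a hypothetical parse_age helper, of how several of the
attributes listed above (readability, documentation, robust input validation and error
handling) can show up in ordinary code:

def parse_age(raw_value):
    """Convert user-supplied text to an age in years.

    Raises ValueError with a descriptive message for invalid input,
    rather than silently returning a wrong value.
    """
    try:
        age = int(str(raw_value).strip())
    except (TypeError, ValueError):
        raise ValueError(f"age must be a whole number, got {raw_value!r}")
    if not 0 <= age <= 150:
        raise ValueError(f"age must be between 0 and 150, got {age}")
    return age

A code inspection or a lint tool would typically flag the absence of such checks and
documentation.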

Software reliability
Software reliability is an important facet of software quality. It is defined as "the
probability of failure-free operation of a computer program in a specified environment for
a specified time".

One of reliability's distinguishing characteristics is that it is objective, measurable, and
can be estimated, whereas much of software quality is a matter of subjective criteria. This
distinction is especially important in the discipline of Software Quality Assurance. These
measured criteria are typically called software metrics.

History

With software embedded into many devices today, software failure has caused more than
inconvenience. Software errors have even caused human fatalities. The causes have
ranged from poorly designed user interfaces to direct programming errors. An example of
a programming error that led to multiple deaths is discussed in Dr. Leveson's paper.
This has resulted in requirements for the development of some types of software. In the
United States, both the Food and Drug Administration (FDA) and Federal Aviation
Administration (FAA) have requirements for software development.

The goal of reliability

The need for a means to objectively determine software quality comes from the desire to
apply the techniques of contemporary engineering fields to the development of software.
That desire is a result of the common observation, by both lay-persons and specialists,
that computer software does not work the way it ought to. In other words, software is
seen to exhibit undesirable behaviour, up to and including outright failure, with
consequences for the data which is processed, the machinery on which the software runs,
and by extension the people and materials which those machines might negatively affect.
The more critical the application of the software to economic and production processes,
or to life-sustaining systems, the more important is the need to assess the software's
reliability.

Regardless of the criticality of any single software application, it is also more and more
frequently observed that software has penetrated deeply into almost every aspect of modern
life through the technology we use. It is only expected that this infiltration will continue,
along with an accompanying dependency on the software by the systems which maintain
our society. As software becomes more and more crucial to the operation of the systems
on which we depend, the argument goes, it only follows that the software should offer a
concomitant level of dependability. In other words, the software should behave in the
way it is intended, or even better, in the way it should.
The challenge of reliability

The circular logic of the preceding sentence is not accidental—it is meant to illustrate a
fundamental problem in the issue of measuring software reliability, which is the difficulty
of determining, in advance, exactly how the software is intended to operate. The problem
seems to stem from a common conceptual error in the consideration of software, which is
that software in some sense takes on a role which would otherwise be filled by a human
being. This is a problem on two levels. Firstly, most modern software performs work
which a human could never perform, especially at the high level of reliability that is often
expected from software in comparison to humans. Secondly, software is fundamentally
incapable of most of the mental capabilities of humans which separate them from mere
mechanisms: qualities such as adaptability, general-purpose knowledge, a sense of
conceptual and functional context, and common sense.

Nevertheless, most software programs could safely be considered to have a particular,
even singular, purpose. If the possibility can be allowed that said purpose can be well or
even completely defined, it should present a means for at least considering objectively
whether the software is, in fact, reliable, by comparing the expected outcome to the actual
outcome of running the software in a given environment, with given data. Unfortunately,
it is still not known whether it is possible to exhaustively determine either the expected
outcome or the actual outcome of the entire set of possible environment and input data to
a given program, without which it is probably impossible to determine the program's
reliability with any certainty.

However, various attempts are in the works to rein in the vastness of the space
of software's environmental and input variables, both for actual programs and theoretical
descriptions of programs. Such attempts to improve software reliability can be applied at
different stages of a program's development, in the case of real software. These stages
principally include: requirements, design, programming, testing, and runtime evaluation.
The study of theoretical software reliability is predominantly concerned with the concept
of correctness, a mathematical field of computer science which is an outgrowth of
language and automata theory.

Reliability in program development

Requirements

A program cannot be expected to work as desired if the developers of the program do not,
in fact, know the program's desired behaviour in advance, or if they cannot at least
determine its desired behaviour in parallel with development, in sufficient detail. What
level of detail is considered sufficient is hotly debated. The idea of perfect detail is
attractive, but may be impractical, if not actually impossible, in practice. This is because
the desired behaviour tends to change as the possible range of the behaviour is
determined through actual attempts, or more accurately, failed attempts, to achieve it.
Whether a program's desired behaviour can be successfully specified in advance is a
moot point if the behaviour cannot be specified at all, and this is the focus of attempts to
formalize the process of creating requirements for new software projects. In step with the
formalization effort is an attempt to help inform non-specialists, particularly non-
programmers, who commission software projects without sufficient knowledge of what
computer software is in fact capable of. Communicating this knowledge is made more
difficult by the fact that, as hinted above, even programmers cannot always know in
advance what is actually possible for software before trying it.

Design

While requirements are meant to specify what a program should do, design is meant, at
least at a high level, to specify how the program should do it. The usefulness of design is
also questioned by some, but those who look to formalize the process of ensuring
reliability often offer good software design processes as the most significant means to
accomplish it. Software design usually involves the use of more abstract and general
means of specifying the parts of the software and what they do. As such, it can be seen as
a way to break a large program down into many smaller programs, such that those
smaller pieces together do the work of the whole program.

The purposes of high-level design are as follows. It separates what are considered to be
problems of architecture, or overall program concept and structure, from problems of
actual coding, which solve problems of actual data processing. It applies additional
constraints to the development process by narrowing the scope of the smaller software
components, and thereby—it is hoped—removing variables which could increase the
likelihood of programming errors. It provides a program template, including the
specification of interfaces, which can be shared by different teams of developers working
on disparate parts, such that they can know in advance how each of their contributions
will interface with those of the other teams. Finally, and perhaps most controversially, it
specifies the program independently of the implementation language or languages,
thereby removing language-specific biases and limitations which would otherwise creep
into the design, perhaps unwittingly on the part of programmer-designers.

Programming

The history of computer programming language development can often be best
understood in the light of attempts to master the complexity of computer programs, which
otherwise becomes more difficult to understand in proportion (perhaps exponentially) to
the size of the programs. (Another way of looking at the evolution of programming
languages is simply as a way of getting the computer to do more and more of the work,
but this may be a different way of saying the same thing.) Lack of understanding of a
program's overall structure and functionality is a sure way to fail to detect errors in the
program, and thus the use of better languages should, conversely, reduce the number of
errors by enabling a better understanding.
Improvements in languages tend to provide incrementally what software design has
attempted to do in one fell swoop: consider the software at ever greater levels of
abstraction. Such inventions as statement, sub-routine, file, class, template, library,
component and more have allowed the arrangement of a program's parts to be specified
using abstractions such as layers, hierarchies and modules, which provide structure at
different granularities, so that from any point of view the program's code can be imagined
to be orderly and comprehensible.

In addition, improvements in languages have enabled more exact control over the shape
and use of data elements, culminating in the abstract data type. These data types can be
specified to a very fine degree, including how and when they are accessed, and even the
state of the data before and after it is accessed.
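
A minimal sketch of an abstract data type, assuming a made-up bounded stack: the internal
list can only be reached through operations that check the state of the data before it is
accessed.

class BoundedStack:
    """Abstract data type: the internal list is only reachable through push/pop/peek."""

    def __init__(self, capacity):
        self._capacity = capacity
        self._items = []          # hidden state; callers never index it directly

    def push(self, value):
        if len(self._items) >= self._capacity:
            raise OverflowError("stack is full")    # state checked before access
        self._items.append(value)

    def pop(self):
        if not self._items:
            raise IndexError("stack is empty")      # state checked before access
        return self._items.pop()

    def peek(self):
        return self._items[-1] if self._items else None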

Software Build and Deployment

Many programming languages such as C and Java require the program "source code" to
be translated into a form that can be executed by a computer. This translation is done by
a program called a compiler. Additional operations may be involved to associate, bind,
link or package files together in order to create a usable runtime configuration of the
software application. The totality of the compiling and assembly process is generically
called "building" the software.
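
As a hedged sketch of a build step, the Python fragment below drives a C compiler and
stops as soon as the build fails, reflecting the point that an incorrect build should never
reach testing; it assumes only that gcc is on the path and that a main.c source file exists.

import subprocess
import sys

# Compile the (assumed) source file; a non-zero return code means the build failed.
result = subprocess.run(
    ["gcc", "-Wall", "-o", "app", "main.c"],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    print("Build failed; do not deploy or test this version:")
    print(result.stderr)
    sys.exit(1)

print("Build succeeded; the 'app' binary can now be deployed to the runtime area.")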

The software build is critical to software quality because if any of the generated files are
incorrect the software build is likely to fail. And, if the incorrect version of a program is
inadvertently used, then testing can lead to false results.

Software builds are typically done in a work area unrelated to the runtime area, such as the
application server. For this reason, a deployment step is needed to physically transfer the
software build products to the runtime area. The deployment procedure may also involve
technical parameters, which, if set incorrectly, can also prevent software testing from
beginning. For example, a Java application server may have options for parent-first or
parent-last class loading. Using the incorrect parameter can cause the application to fail to
execute on the application server.

The technical activities supporting software quality including build, deployment, change
control and reporting are collectively known as Software configuration management. A
number of software tools have arisen to help meet the challenges of configuration
management including file control tools and build control tools.

Testing
Software Testing

Software testing, when done correctly, can increase overall software quality of
conformance by testing that the product conforms to its requirements. Testing includes,
but is not limited to:
1. Unit Testing
2. Functional Testing
3. Regression Testing
4. Performance Testing
5. Failover Testing
6. Usability Testing

A number of agile methodologies use testing early in the development cycle to ensure
quality in their products. For example, the test-driven development practice, where tests
are written before the code they will test, is used in Extreme Programming to ensure
quality.

Runtime

Runtime reliability determinations are similar to tests, but go beyond simple confirmation
of behaviour to the evaluation of qualities such as performance and interoperability with
other code or particular hardware configurations.

Software Quality Factors


A software quality factor is a non-functional requirement for a software program which is
not called up by the customer's contract, but nevertheless is a desirable requirement
which enhances the quality of the software program. Note that none of these factors are
binary; that is, they are not “either you have it or you don’t” traits. Rather, they are
characteristics that one seeks to maximize in one’s software to optimize its quality. So
rather than asking whether a software product “has” factor x, ask instead the degree to
which it does (or does not).

Some software quality factors are listed here:

Understandability–clarity of purpose. This goes further than just a statement of purpose;
all of the design and user documentation must be clearly written so that it is easily
understandable. This is obviously subjective in that the user context must be taken into
account: for instance, if the software product is to be used by software engineers it is not
required to be understandable to the layman.

Completeness–presence of all constituent parts, with each part fully developed. This
means that if the code calls a subroutine from an external library, the software package
must provide reference to that library and all required parameters must be passed. All
required input data must also be available.

Conciseness–minimization of excessive or redundant information or processing. This is
important where memory capacity is limited, and it is generally considered good practice
to keep lines of code to a minimum. It can be improved by replacing repeated
functionality by one subroutine or function which achieves that functionality. It also
applies to documents.
Portability–ability to be run well and easily on multiple computer configurations.
Portability can mean both between different hardware—such as running on a PC as well
as a smartphone—and between different operating systems—such as running on both
Mac OS X and GNU/Linux.

Consistency–uniformity in notation, symbology, appearance, and terminology within
itself.

Maintainability–propensity to facilitate updates to satisfy new requirements. Thus the
software product that is maintainable should be well-documented, should not be complex,
and should have spare capacity for memory, storage and processor utilization and other
resources.

Testability–disposition to support acceptance criteria and evaluation of performance.
Such a characteristic must be built in during the design phase if the product is to be easily
testable; a complex design leads to poor testability.

Usability–convenience and practicality of use. This is affected by such things as the
human-computer interface. The component of the software that has most impact on this is
the user interface (UI), which for best usability is usually graphical (i.e. a GUI).

Reliability–ability to be expected to perform its intended functions satisfactorily. This
implies a time factor in that a reliable product is expected to perform correctly over a
period of time. It also encompasses environmental considerations in that the product is
required to perform correctly in whatever conditions it finds itself (sometimes termed
robustness).

Structuredness–organisation of constituent parts in a definite pattern. A software product
written in a block-structured language such as Pascal will satisfy this characteristic.

Efficiency–fulfillment of purpose without waste of resources, such as memory, space and
processor utilization, network bandwidth, time, etc.

Security–ability to protect data against unauthorized access and to withstand malicious or
inadvertent interference with its operations. Besides the presence of appropriate security
mechanisms such as authentication, access control and encryption, security also implies
resilience in the face of malicious, intelligent and adaptive attackers.

Measurement of software quality factors

There are varied perspectives within the field on measurement. There are a great many
measures that are valued by some professionals in some contexts and decried as harmful
by others. Some believe that quantitative measures of software quality are essential.
Others believe that contexts where quantitative measures are useful are quite rare, and so
prefer qualitative measures. Several leaders in the field of software testing, including
Cem Kaner and Douglass Hoffman, have written about the difficulty of measuring well
what we truly want to measure.

One example of a popular metric is the number of faults encountered in the software.
Software that contains few faults is considered by some to have higher quality than
software that contains many faults. Questions that can help determine the usefulness of
this metric in a particular context include:

1. What constitutes “many faults?” Does this differ depending upon the purpose of
the software (e.g., blogging software vs. navigational software)? Does this take
into account the size and complexity of the software?
2. Does this account for the importance of the bugs (and their importance to the
stakeholders whom those bugs affect)? Does one try to weight this metric by the
severity of the fault, or by the number of users it affects? If so, how? And if
not, how does one know that 100 faults discovered is better than 1000?
3. If the count of faults being discovered is shrinking, how do I know what that
means? For example, does that mean that the product is now higher quality than it
was before? Or that this is a smaller/less ambitious change than before? Or that
fewer tester-hours have gone into the project than before? Or that this project was
tested by less skilled testers than before? Or that the team has discovered that
fewer faults reported is in their interest?

This last question points to an especially difficult one to manage. All software quality
metrics are in some sense measures of human behavior, since humans create
software. If a team discovers that they will benefit from a drop in the number of
reported bugs, there is a strong tendency for the team to start reporting fewer defects.
That may mean that email begins to circumvent the bug tracking system, or that four or
five bugs get lumped into one bug report, or that testers learn not to report minor
annoyances. The difficulty is measuring what we mean to measure, without creating
incentives for software programmers and testers to consciously or unconsciously “game”
the measurements.
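
Returning to questions 1 and 2 above, one hedged sketch of how a raw fault count might be normalised by program size and weighted by severity is shown below; the weights and the per-KLOC normalisation are assumptions chosen for illustration, not values prescribed by any standard:

    SEVERITY_WEIGHTS = {"critical": 10, "major": 3, "minor": 1}

    def weighted_fault_score(fault_severities):
        """Sum of severity weights for a list such as ['major', 'minor']."""
        return sum(SEVERITY_WEIGHTS.get(s, 1) for s in fault_severities)

    def defect_density(fault_count, lines_of_code):
        """Faults per thousand lines of code (KLOC)."""
        return fault_count / (lines_of_code / 1000)

    print(weighted_fault_score(["critical", "minor", "minor"]))  # 12
    print(defect_density(48, 120_000))                           # 0.4 faults per KLOC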

Software quality factors cannot be measured directly because of their vague definitions.
It is necessary to find measurements, or metrics, which can be used to quantify them as non-
functional requirements. For example, reliability is a software quality factor, but cannot
be evaluated in its own right. However, there are related attributes to reliability, which
can indeed be measured. Some such attributes are mean time to failure, rate of failure
occurrence, and availability of the system. Similarly, an attribute of portability is the
number of target-dependent statements in a program.
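
For instance, availability can be estimated from measurable attributes; the sketch below uses one common formulation, mean time to failure divided by mean time to failure plus mean time to repair, with figures invented for illustration:

    def availability(mttf_hours, mttr_hours):
        """Steady-state availability in one common formulation:
        time spent working divided by total time (working plus repair)."""
        return mttf_hours / (mttf_hours + mttr_hours)

    # A system that runs 990 hours between failures and needs 10 hours to
    # repair is available roughly 99% of the time.
    print(round(availability(990.0, 10.0), 3))  # 0.99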

A scheme that could be used for evaluating software quality factors is given below. For
every characteristic, there are a set of questions which are relevant to that characteristic.
Some type of scoring formula could be developed based on the answers to these
questions, from which a measurement of the characteristic can be obtained.
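
A minimal sketch of such a scoring formula, assuming each question is answered on a 0-to-1 scale and given a relative weight (both assumptions made purely for this example), might look like this:

    def factor_score(answers, weights):
        """answers: question id -> score in [0, 1]
        weights: question id -> relative importance
        Returns a weighted average in [0, 1] for one quality factor."""
        total = sum(weights.values())
        return sum(answers[q] * w for q, w in weights.items()) / total

    understandability = factor_score(
        answers={"descriptive_names": 0.8, "adequate_comments": 0.6,
                 "deviations_commented": 0.4},
        weights={"descriptive_names": 2, "adequate_comments": 2,
                 "deviations_commented": 1},
    )
    print(round(understandability, 2))  # 0.64
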
Understandability

Are variable names descriptive of the physical or functional property represented? Do
uniquely recognisable functions contain adequate comments so that their purpose is
clear? Are deviations from forward logical flow adequately commented? Are all elements
of an array functionally related?

Completeness

Are all necessary components available? Does any process fail for lack of resources or
programming? Are all potential pathways through the code accounted for, including
proper error handling?

Conciseness

Is all code reachable? Is any code redundant? How many statements within loops could
be placed outside the loop, thus reducing computation time? Are branch decisions too
complex?

Portability

Does the program depend upon system or library routines unique to a particular
installation? Have machine-dependent statements been flagged and commented? Has
dependency on internal bit representation of alphanumeric or special characters been
avoided? How much effort would be required to transfer the program from one
hardware/software system or environment to another?
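
One hedged sketch of flagging and isolating machine-dependent statements, with a directory layout assumed only for illustration:

    import os
    import sys

    def config_path(app_name):
        """Confine platform-dependent choices to one clearly flagged routine."""
        if sys.platform.startswith("win"):
            base = os.environ.get("APPDATA", os.path.expanduser("~"))
        else:
            base = os.path.expanduser("~/.config")
        # os.path.join avoids hard-coding a path separator.
        return os.path.join(base, app_name, "settings.ini")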

Consistency

Is one variable name used to represent different logical or physical entities in the
program? Does the program contain only one representation for any given physical or
mathematical constant? Are functionally similar arithmetic expressions similarly
constructed? Is a consistent scheme used for indentation, nomenclature, the color palette,
fonts and other visual elements?

Maintainability

Has some memory capacity been reserved for future expansion? Is the design cohesive—
i.e., does each module have distinct, recognisable functionality? Does the software allow
for a change in data structures (object-oriented designs are more likely to allow for this)?
If the code is procedure-based (rather than object-oriented), is a change likely to require
restructuring the main program, or just a module?
Testability

Are complex structures employed in the code? Does the detailed design contain clear
pseudo-code? Is the pseudo-code at a higher level of abstraction than the code? If tasking
is used in concurrent designs, are schemes available for providing adequate test cases?
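
As a small, hedged illustration of designing for testability, a routine with no side effects lets acceptance criteria translate directly into test cases (the routine and tests are invented for this example):

    import unittest

    def mean(values):
        """A small routine with no side effects is straightforward to test."""
        if not values:
            raise ValueError("mean() of an empty sequence")
        return sum(values) / len(values)

    class MeanTests(unittest.TestCase):
        def test_typical_input(self):
            self.assertEqual(mean([2, 4, 6]), 4)

        def test_empty_input_is_rejected(self):
            with self.assertRaises(ValueError):
                mean([])

    if __name__ == "__main__":
        unittest.main()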

Usability

Is a GUI used? Is there adequate on-line help? Is a user manual provided? Are
meaningful error messages provided?

Reliability

Are loop indexes range-tested? Is input data checked for range errors? Is divide-by-zero
avoided? Is exception handling provided?
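
A minimal sketch combining these checks, with an invented routine and figures:

    def average_speed(distance_km, time_hours):
        """Range-test the inputs and avoid divide-by-zero rather than
        letting the error surface somewhere far from its cause."""
        if distance_km < 0:
            raise ValueError("distance cannot be negative")
        if time_hours <= 0:
            raise ValueError("time must be positive")
        return distance_km / time_hours

    try:
        print(average_speed(150.0, 0.0))
    except ValueError as err:
        # Exception handling keeps the caller in control of the failure.
        print("rejected input:", err)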

Structuredness

Is a block-structured programming language used? Are modules limited in size? Have the
rules for transfer of control between modules been established and followed?

Efficiency

Have functions been optimized for speed? Have repeatedly used blocks of code been
formed into subroutines? Has the program been checked for memory leaks or overflow
errors?
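
One hedged sketch of forming repeatedly used work into a reusable, cached routine; the names and rates are invented for the example:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def shipping_rate(region):
        """Stand-in for a costly lookup; the cache means repeated calls for
        the same region are not recomputed inside the loop below."""
        return 4.5 if region == "domestic" else 12.0

    def order_totals(orders):
        return [price + shipping_rate(region) for price, region in orders]

    print(order_totals([(20.0, "domestic"), (35.0, "export"), (9.0, "domestic")]))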

Security

Does the software protect itself and its data against unauthorized access and use? Does it
allow its operator to enforce security policies? Are security mechanisms appropriate,
adequate and correctly implemented? Can the software withstand attacks that can be
anticipated in its intended environment? Is the software free of errors that would make it
possible to circumvent its security mechanisms? Does the architecture limit the potential
impact of yet unknown errors?

Software maintenance
Software maintenance in software engineering is the modification of a software product
after delivery to correct faults, to improve performance or other attributes, or to adapt the
product to a modified environment (ISO/IEC 14764).
Overview
This international standard describes the six software maintenance processes as:

1. The implementation process contains software preparation and transition
activities, such as the conception and creation of the maintenance plan, the
preparation for handling problems identified during development, and the follow-
up on product configuration management.
2. The problem and modification analysis process, which is executed once the
application has become the responsibility of the maintenance group. The
maintenance programmer must analyze each request, confirm it (by reproducing
the situation) and check its validity, investigate it and propose a solution,
document the request and the solution proposal, and, finally, obtain all the
required authorizations to apply the modifications.
3. The process of implementing the modification itself.
4. The process of accepting the modification, by checking it with the individual
who submitted the request in order to make sure the modification provided a
solution.
5. The migration process (platform migration, for example) is exceptional, and is not
part of daily maintenance tasks. If the software must be ported to another platform
without any change in functionality, this process will be used and a maintenance
project team is likely to be assigned to this task.
6. Finally, the last maintenance process, also an event which does not occur on a
daily basis, is the retirement of a piece of software.

There are a number of processes, activities and practices that are unique to maintainers,
for example:

 Transition: a controlled and coordinated sequence of activities during which a
system is transferred progressively from the developer to the maintainer;
 Service Level Agreements (SLAs) and specialized (domain-specific) maintenance
contracts negotiated by maintainers;
 Modification Request and Problem Report Help Desk: a problem-handling
process used by maintainers to prioritize, document, and route the requests they
receive;
 Modification Request acceptance/rejection: work on modification requests over a
certain size, effort, or complexity may be rejected by maintainers and rerouted to
a developer.

A common perception of maintenance is that it is merely fixing bugs. However, studies
and surveys over the years have indicated that the majority, over 80%, of the maintenance
effort is used for non-corrective actions (Pigosky 1997). This perception is perpetuated
by users submitting problem reports that in reality are functionality enhancements to the
system.
Software maintenance and evolution of systems was first addressed by Meir M. Lehman
in 1969. Over a period of twenty years, his research led to the formulation of eight Laws
of Evolution (Lehman 1997). Key findings of his research include that maintenance is
really evolutionary development and that maintenance decisions are aided by
understanding what happens to systems (and software) over time. Lehman demonstrated
that systems continue to evolve over time. As they evolve, they grow more complex
unless some action such as code refactoring is taken to reduce the complexity.

The key software maintenance issues are both managerial and technical. Key
management issues are: alignment with customer priorities, staffing, deciding which
organization does maintenance, and estimating costs. Key technical issues are: limited
understanding, impact analysis, testing, and maintainability measurement.

Categories of maintenance in ISO/IEC 14764


E.B. Swanson initially identified three categories of maintenance: corrective, adaptive,
and perfective. These have since been updated and ISO/IEC 14764 presents:

 Corrective maintenance: Reactive modification of a software product performed
after delivery to correct discovered problems.
 Adaptive maintenance: Modification of a software product performed after
delivery to keep a software product usable in a changed or changing environment.
 Perfective maintenance: Modification of a software product after delivery to
improve performance or maintainability.
 Preventive maintenance: Modification of a software product after delivery to
detect and correct latent faults in the software product before they become
effective faults.

Software configuration management


In software engineering, software configuration management (SCM) is the task of
tracking and controlling changes in the software. Configuration management practices
include revision control and the establishment of baselines.

SCM concerns itself with answering the question "Somebody did something, how can
one reproduce it?" Often the problem involves not reproducing "it" identically, but with
controlled, incremental changes. Answering the question thus becomes a matter of
comparing different results and of analysing their differences. Traditional configuration
management typically focused on controlled creation of relatively simple products. Now,
implementers of SCM face the challenge of dealing with relatively minor increments
under their own control, in the context of the complex system being developed.

Terminology
The history and terminology of SCM (which often varies) has given rise to controversy.
Roger Pressman, in his book Software Engineering: A Practitioner's Approach, states
that SCM is a "set of activities designed to control change by identifying the work
products that are likely to change, establishing relationships among them, defining
mechanisms for managing different versions of these work products, controlling the
changes imposed, and auditing and reporting on the changes made."

Source configuration management is a related practice often used to indicate that a
variety of artifacts may be managed and versioned, including software code, documents,
design models, and even the directory structure itself.

Atria (later Rational Software, now a part of IBM) used "SCM" to mean "software
configuration management". Gartner uses the term software change and configuration
management.

Purposes
The goals of SCM are generally:

 Configuration identification - Identifying configurations, configuration items and
baselines.
 Configuration control - Implementing a controlled change process. This is usually
achieved by setting up a change control board whose primary function is to
approve or reject all change requests that are sent against any baseline.
 Configuration status accounting - Recording and reporting all the necessary
information on the status of the development process.
 Configuration auditing - Ensuring that configurations contain all their intended
parts and are sound with respect to their specifying documents, including
requirements, architectural specifications and user manuals.
 Build management - Managing the process and tools used for builds.
 Process management - Ensuring adherence to the organization's development
process.
 Environment management - Managing the software and hardware that host the
system.
 Teamwork - Facilitating team interactions related to the process.
 Defect tracking - Making sure every defect has traceability back to the source.
