Abstract
We accept that testing the software is an integral part of building a system.
However, if the software is based on inaccurate requirements, then despite
well-written code, the software will be unsatisfactory. The newspapers are full of stories
about catastrophic software failures. What the stories don't say is that most of the
defects can be traced back to wrong, missing, vague or incomplete requirements. We
have learnt the lesson of testing software. Now we have to learn to implement a
system of testing the requirements before building a software solution.
Requirements seem to be ephemeral. They flit in and out of projects; they are
capricious, intractable, unpredictable and sometimes invisible. When gathering
requirements we are searching for all of the criteria for a system's success. We throw
out a net and try to capture all these criteria. Using Blitzing, Rapid Application
Development (RAD), Joint Application Development (JAD), Quality Function
Deployment (QFD), interviewing, apprenticing, data analysis and many other
techniques [6], we try to snare all of the requirements in our net.
"The idea is for each requirement to have a quality measure that makes it
possible to divide all solutions to the requirement into two classes: those for which we
agree that they fit the requirement and those for which we agree that they do not fit
the requirement."
Quantifiable Requirements
Consider a requirement that says "The system must respond quickly to
customer enquiries". First we need to find a property of this requirement that provides
us with a scale for measurement within the context. Let's say we agree to
measure the response time in minutes. To find the quality measure we ask: "under
what circumstances would the system fail to meet this requirement?" The
stakeholders review the context of the system and decide that they would consider it
a failure if a customer has to wait longer than three minutes for a response to his
enquiry. Thus "three minutes" becomes the quality measure for this requirement.
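As a minimal sketch (in Python, with illustrative names not taken from the paper), a quantifiable quality measure like this can be written down as an executable check:

```python
# Sketch: the agreed quality measure as an executable check.
# The three-minute limit comes from the stakeholders; the function and
# variable names are illustrative assumptions.

RESPONSE_LIMIT_MINUTES = 3.0  # agreed quality measure

def fits_requirement(measured_response_minutes: float) -> bool:
    """A solution fits only if the customer never waits longer
    than the agreed three minutes for a response."""
    return measured_response_minutes <= RESPONSE_LIMIT_MINUTES

print(fits_requirement(2.5))  # True: a 2.5-minute response fits
print(fits_requirement(3.2))  # False: a 3.2-minute response does not
```

Any candidate solution can then be measured and passed through the same check, which is exactly the two-class division described above: fits, or does not fit.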
Any solution to the requirement is tested against the quality measure. If the
solution makes a customer wait for longer than three minutes then it does not fit the
requirement. So far so good: we have defined a quantifiable quality measure. But
specifying the quality measure is not always so straightforward. What about
requirements that do not have an obvious scale?
Non-quantifiable Requirements
Suppose a requirement is "The automated interfaces of the system must be
easy to learn". There is no obvious measurement scale for "easy to learn". However if
we investigate the meaning of the requirement within the particular context, we can
set communicable limits for measuring the requirement.
Again we can make use of the question: "What is considered a failure to meet
this requirement?" Perhaps the stakeholders agree that there will often be novice
users, and the stakeholders want novices to be productive within half an hour. We can
define the quality measure to say "a novice user must be able to learn to successfully
complete a customer order transaction within 30 minutes of first using the system".
This becomes a quality measure provided a group of experts within this context is
able to test whether the solution does or does not meet the requirement.
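A sketch of how that measure might drive a test, assuming hypothetical trial data gathered by observing novices (the names and numbers below are illustrative, not from the paper):

```python
# Sketch: "easy to learn" made testable. Each novice user attempts a
# customer order transaction on first use of the system; the 30-minute
# limit is the stakeholders' agreed quality measure.

LEARNING_LIMIT_MINUTES = 30

def novice_fits(minutes_to_first_success: float) -> bool:
    """The novice completed the transaction within the agreed limit."""
    return minutes_to_first_success <= LEARNING_LIMIT_MINUTES

# Hypothetical trial data: minutes each novice needed on first use.
trial_minutes = [22, 28, 35, 19]
failures = [t for t in trial_minutes if not novice_fits(t)]
print(f"{len(failures)} of {len(trial_minutes)} novices exceeded the limit")
```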
Requirements Test 1
Does each requirement have a quality measure that can be used to test
whether any solution meets the requirement?
Keeping Track
Figure 1 is an example of how you can keep track of your knowledge about each
requirement.
Figure 1: This requirements micro spec makes your requirements knowledge visible. It
must be recorded so that it is easy for several people to compare and discuss
individual requirements and to look for duplicates and contradictions.
Requirements Test 2
I point you in the direction of abstract data modeling principles [7] which
provide many guidelines for naming subject matter and for defining the meaning of
that subject matter. As a result of doing the necessary analysis, the term "viewer"
could be defined as follows:
Viewer
A person who lives in the area which receives transmission of television programmes
from our channel.
Viewer Name
Viewer Address
Viewer Age Range
Viewer Sex
Viewer Salary Range
Viewer Occupation Type
Viewer Socio-Economic Ranking
Defining the allowable values for each of the attributes provides data
that can be used to test the implementation.
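For instance, the allowable values could be recorded in a form that tests can run against directly. The value sets below are illustrative assumptions; the real ones come from the analysis of the context:

```python
# Sketch: allowable values for Viewer attributes, expressed so that they
# can drive implementation tests. All value sets here are hypothetical.
from enum import Enum

class ViewerAgeRange(Enum):
    UNDER_18 = "under 18"
    FROM_18_TO_35 = "18-35"
    FROM_36_TO_60 = "36-60"
    OVER_60 = "over 60"

ALLOWABLE_VALUES = {
    "viewer_sex": {"female", "male"},
    "viewer_age_range": {r.value for r in ViewerAgeRange},
}

def is_allowable(attribute: str, value: str) -> bool:
    """Test a value stored by the implementation against the defined set."""
    return value in ALLOWABLE_VALUES.get(attribute, set())

print(is_allowable("viewer_age_range", "18-35"))    # True
print(is_allowable("viewer_age_range", "teenager")) # False
```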
Defining the meaning of "viewer" has addressed one part of the coherency
problem. We also have to be sure that every use of the term "viewer" is consistent
with the meaning that has been defined.
Requirements Test 3
Completeness
We want to be sure that the requirements specification contains all the
requirements that are known about. While we know that there will be evolutionary
changes and additions, we would like to restrict those changes to new requirements,
and not have to play "catch-up" with requirements that we should have known about
in the first place. Thus we want to avoid omitting requirements just because we did
not think of asking the right questions. If we have set a context [10, 11] for our
project, then we can test whether the context is accurate. We can also test whether
we have considered all the likely requirements within that context.
The context defines the problem that we are trying to solve. The context
includes all the requirements that we must eventually meet: it contains anything that
we have to build, or anything we have to change. Naturally if our software is going to
change the way people do their jobs, then those jobs must be within the context of
study. The most common defect is to limit the context to the part of the system that
will be eventually automated [3]. The result of this restricted view is that nobody
correctly understands the organization’s culture and way of working. Consequently
there is a misfit between the eventual computer system, the rest of the business
system, and the people it is intended to help.
Requirements Test 4
Of course this is easy to say, but we still have to be able to test whether or not
the context is large enough to include the complete business system, not just the
software. ("Business" in this sense means not just a commercial business,
but whatever activity - scientific, engineering, artistic - the organization is doing.) We
do this test by observing the questions asked by the systems analysts: Are they
considering the parts of the system that will be external to the software? Are
questions being asked that relate to people or systems that are shown as being
outside the context? Are any of the interfaces around the boundary of the context
being changed?
Another test for completeness is to question whether we have captured all the
requirements that are currently known. The obstacle is that our source of
requirements is people. And every person views the world differently according to his
own job and his own idea of what is important, or what is wrong with the current
system. It helps to consider the types of requirements that we are searching for:
• Conscious Requirements
• Unconscious Requirements
• Undreamed-of Requirements
Requirements Test 5
Relevance
When we cast out the requirements gathering net and encourage people to tell
us all their requirements, we take a risk. Along with all the requirements that are
relevant to our context we are likely to pick up impostors. These irrelevant
requirements are often the result of a stakeholder not understanding the goals of the
project. In this case people, especially if they have had bad experiences with another
system, are prone to include requirements "just in case we need it". Another reason
for irrelevancy is personal bias. If a stakeholder is particularly interested or affected
by a subject then he might think of it as a requirement even if it is irrelevant to this
system.
Requirements Test 6
To test for relevance, check the requirement against the stated goals for the
system. Does this requirement contribute to those goals? If we exclude this
requirement then will it prevent us from meeting those goals? Is the requirement
concerned with subject matter that is within the context of our study? Are there any
other requirements that are dependent on this requirement? Some irrelevant
requirements are not really requirements; instead, they are solutions.
Requirement or Solution?
When one of your stakeholders tells you he wants a graphical user interface and a
mouse, he is presenting you with a solution, not a requirement. He has seen other
systems with graphical user interfaces, and he wants what he considers to be the most
up-to-date solution. Or perhaps he thinks that designing the system is part of his role.
Or maybe he has a real requirement that he has mentally solved by use of a graphical
interface. When solutions are mistaken for requirements then the real requirement is
often missed. Also the eventual solution is not as good as it could be because the
designer is not free to consider all possible ways of meeting the requirements.
Requirements Test 7
It is not always easy to tell the difference between a requirement and a solution.
Sometimes there is a piece of technology within the context and the stakeholders
have stated that the new system must use this technology. Things like: "the new
system must be written in COBOL because that is the only language our programmers
know", "the new system must use the existing warehouse layout because we don't
want to make structural changes" are really requirements because they are genuine
constraints that exist within the context of the problem.
Stakeholder Value
There are two factors that affect the value that stakeholders place on a
requirement: the grumpiness caused by bad performance, and the happiness
caused by good performance. Failure to provide a perfect solution to some
requirements will produce mild annoyance. Failure to meet other requirements will
cause the whole system to be a failure. If we understand the value that the
stakeholders put on each requirement, we can use that information to determine
design priorities.
Requirements Test 8
Pardee [9] suggests that we use scales from 1 to 5 to specify the reward for
good performance and the penalty for bad performance. If a requirement is absolutely
vital to the success of the system then it has a penalty of 5 and a reward of 5. A
requirement that would be nice to have but is not really vital might have a penalty of
1 and a reward of 3. The overall value or importance that the stakeholders place on a
requirement is the sum of penalty and reward: in the first case a value of 10, in
the second a value of 4.
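The arithmetic is simple enough to sketch directly (the function name is illustrative; the 1-to-5 scales and the sum are as described above):

```python
# Sketch: Pardee-style penalty/reward scoring on scales of 1 to 5.
# Overall stakeholder value = penalty + reward.

def stakeholder_value(penalty: int, reward: int) -> int:
    """Sum the penalty for bad performance and the reward for good
    performance, each on a scale of 1 to 5."""
    if not (1 <= penalty <= 5 and 1 <= reward <= 5):
        raise ValueError("penalty and reward must each be between 1 and 5")
    return penalty + reward

print(stakeholder_value(5, 5))  # vital requirement: 10
print(stakeholder_value(1, 3))  # nice to have: 4
```

Ranking requirements by this value gives the design team a defensible ordering of priorities.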
Traceability
We want to be able to prove that the system that we build meets each one of
the specified requirements. We need to identify each requirement so that we can
trace its progress through detailed analysis, design and eventual implementation.
Each stage of system development shapes, repartitions and organizes the
requirements to bring them closer to the form of the new system. To insure against
loss or corruption, we need to be able to map the original requirements to the solution
for testing purposes.
Requirements Test 9
Figure 2: The event/use case provides a natural grouping for keeping track of the
relationships between requirements.
Requirements Test 10
Is each requirement tagged to all parts of the system where it is used? For
any change to requirements, can you identify all parts of the system where
this change has an effect?
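A minimal sketch of such tagging, assuming each requirement has an identifier and the parts of the system are named (all identifiers below are invented for illustration):

```python
# Sketch: tagging requirements to the parts of the system where they are
# used, so that the impact of a change can be traced in both directions.
# Requirement ids and part names are hypothetical.

trace = {
    "R101": {"order_entry_screen", "order_validation"},
    "R102": {"order_validation", "dispatch_report"},
}

def impacted_parts(requirement_id: str) -> set:
    """All parts of the system a change to this requirement touches."""
    return trace.get(requirement_id, set())

def requirements_for(part: str) -> set:
    """Reverse lookup: which requirements this part helps satisfy."""
    return {rid for rid, parts in trace.items() if part in parts}

print(sorted(impacted_parts("R102")))               # parts touched by R102
print(sorted(requirements_for("order_validation"))) # requirements on this part
```

The reverse lookup is what makes the mapping useful for testing: given any part of the solution, it names the requirements against which that part must be verified.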
Conclusions
The requirements specification must contain all the requirements that are to be
solved by our system. The specification should objectively specify everything our
system must do and the conditions under which it must perform. Management of the
number and complexity of the requirements is one part of the task.
Testing starts at the beginning of the project, not at the end of the coding. We
apply tests to assure the quality of the requirements. Then the later stages of the
project can concentrate on testing for good design and good code. The advantages of
this approach are that we minimize expensive rework by minimizing requirements-
related defects that could have been discovered, or prevented, early in the project's
life.
References
1. Christopher Alexander. Notes On The Synthesis Of Form. Harvard Press.
Massachusetts, 1964.
2. Donald Gause and Gerald Weinberg. Exploring Requirements. Dorset House. New
York, 1989.
6. Neil Maiden and Gordon Rugg. Acre: selecting methods for requirements
acquisition. Software Engineering Journal, May 1996.
7. Sally Shlaer and Stephen Mellor. Object-Oriented Systems Analysis: Modeling the
World in Data. Prentice Hall, New Jersey, 1988.
8. Steve McMenamin and John Palmer. Essential Systems Analysis. Yourdon Press. New
York, 1984.
9. William J. Pardee. How To Satisfy & Delight Your Customer. Dorset House. New York,
1996.
10. James Robertson. On Setting the Context. The Atlantic Systems Guild, 1996.
11. James and Suzanne Robertson. Complete Systems Analysis: the Workbook, the
Textbook, the Answers. Dorset House. New York, 1994.
12. James and Suzanne Robertson. Requirements Template. The Atlantic Systems
Guild. London, 1996.