
Chapter 15

 Review Techniques
Slide Set to accompany
Software Engineering: A Practitioner’s Approach, 7/e
by Roger S. Pressman

Slides copyright © 1996, 2001, 2005, 2009 by Roger S. Pressman

For non-profit educational use only


May be reproduced ONLY for student use at the university level when used in conjunction
with Software Engineering: A Practitioner's Approach, 7/e. Any other reproduction or use is
prohibited without the express written permission of the author.

All copyright information MUST appear if these slides are posted on a website for student
use.

1
Chapter Learning Objectives
 Introduction
 Cost Impact of Software Defects
 Defect Amplification and Removal
 Review Metrics and their Use
 Review Process and Review Types

SS 2011 – SE-II – Software Quality Assurance 2


Introduction

3
Reviews

... there is no particular reason why your friend and colleague
cannot also be your sternest critic.
Jerry Weinberg

Technical reviews are one of many actions that are required as
part of good software engineering practice.

SS 2011 – SE-II – Software Quality Assurance 4


What Are Reviews?
 a meeting conducted by technical people for
technical people
 a technical assessment of a work product
created during the software engineering
process
 a software quality assurance mechanism
 a training ground

SS 2011 – SE-II – Software Quality Assurance 5


What Reviews Are Not

 A project summary or progress assessment


 A meeting intended solely to impart information
 A mechanism for political or personal reprisal!

SS 2011 – SE-II – Software Quality Assurance 6


Cost Impact of Software Defects

7
What Do We Look For?
 Errors and defects
 Error—a quality problem found before the software is released to
end users
 Defect—a quality problem found only after the software has been
released to end-users
 We make this distinction because errors and defects have very
different economic, business, psychological, and human impact
 However, the temporal distinction made between errors and
defects in this book is not mainstream thinking

SS 2011 – SE-II – Software Quality Assurance 8


Cost Impact of Defects
 Early discovery of errors so that they do not
propagate to the next step in the software
process
 Between 50 and 65 percent of all errors are
introduced during design activities
 Review techniques have been up to 75 percent
effective in uncovering design flaws

The primary objective of a formal technical review (FTR) is
to find errors before they are passed on to another SE
activity or released to the end user.

SS 2011 – SE-II – Software Quality Assurance 9


Defect Amplification And Removal

10
Defect Amplification
 A defect amplification model [IBM81] can be used to illustrate
the generation and detection of errors during the design and
code generation actions of a software process.

[Figure: defect amplification model. A development step inherits errors from the
previous step; some pass through unchanged, others are amplified (1:x), and new
errors are generated within the step itself. Error detection with a given percent
efficiency removes a portion of these errors before the rest pass to the next step.]

SS 2011 – SE-II – Software Quality Assurance 11
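To make the model concrete, the following is a minimal Python sketch of a single development step under this model; the function name, parameters, and the values in the example call are illustrative assumptions, not figures from [IBM81] or the book.

```python
# Minimal sketch of one step of the defect amplification model [IBM81].
# The structure follows the figure above; all values in the example call
# are assumed for illustration and are NOT the book's numbers.

def amplification_step(errors_in, pct_passed_through, amplification_ratio,
                       newly_generated, detection_efficiency):
    """Return the number of errors passed on to the next development step."""
    passed_through = errors_in * pct_passed_through
    # Errors that are not simply passed through are amplified 1:x.
    amplified = errors_in * (1 - pct_passed_through) * amplification_ratio
    total_before_detection = passed_through + amplified + newly_generated
    # Detection (e.g., a review) removes a fraction of the errors.
    return total_before_detection * (1 - detection_efficiency)


if __name__ == "__main__":
    # Hypothetical step: 10 inherited errors, half pass through, the rest
    # are amplified 1:2, 25 new errors are introduced, reviews catch 50%.
    print(amplification_step(10, 0.5, 2, 25, 0.5))   # -> 20.0
```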


Defect Amplification
 a software process that does NOT include reviews
 yields 94 errors at the beginning of testing and
 releases 12 latent defects to the field
 a software process that does include reviews
 yields 24 errors at the beginning of testing and
 releases 3 latent defects to the field
 A cost analysis indicates that the process with NO
reviews costs approximately 3 times more than the process
with reviews, taking the cost of correcting the latent
defects into account

SS 2011 – SE-II – Software Quality Assurance 12


Defect Amplification - no Reviews

SS 2011 – SE-II – Software Quality Assurance 13


Defect Amplification - with Reviews

SS 2011 – SE-II – Software Quality Assurance 14


Home Exercise
 Use both of the previous examples and show, after
calculation, the difference in the total cost of
development and maintenance for these examples,
using the values given in book section 15.2
(Pressman, 7th edition). A helper sketch follows this slide.

SS 2011 – SE-II – Software Quality Assurance 15
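The helper sketched below may be useful for organizing the arithmetic; it is only a scaffold, and the relative cost units (left at 0.0 here) must be taken from section 15.2 rather than from this sketch.

```python
# Helper for the exercise: total cost = sum over stages of
# (errors found at that stage) * (relative cost of fixing an error there).
# The cost units below are placeholders; take the real values from section 15.2.

def total_cost(errors_found_per_stage, cost_unit_per_stage):
    return sum(errors_found_per_stage[stage] * cost_unit_per_stage[stage]
               for stage in errors_found_per_stage)


# Counts from the slides; cost units intentionally left at 0.0 (fill in from the book).
errors_no_reviews = {"testing": 94, "field": 12}
errors_with_reviews = {"testing": 24, "field": 3}
cost_units = {"testing": 0.0, "field": 0.0}

print(total_cost(errors_no_reviews, cost_units))
print(total_cost(errors_with_reviews, cost_units))
```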


Review Metrics and Their Use

16
Review Metrics
 A metric is a unit of measurement for quantifying the quality
of a project, process, product, etc.
 Metrics help in understanding or measuring the
effectiveness of software engineering actions

 The total review effort and the total number of errors
discovered are defined as:
• Ereview = Ep + Ea + Er
• Errtot = Errminor + Errmajor
 Defect density represents the errors found per unit of work
product reviewed.
• Defect density = Errtot / WPS
 where …
SS 2011 – SE-II – Software Quality Assurance 17
Metrics
 Preparation effort, Ep—the effort (in person-hours) required to
review a work product prior to the actual review meeting
 Assessment effort, Ea—the effort (in person-hours) that is
expended during the actual review
 Rework effort, Er—the effort (in person-hours) that is
dedicated to the correction of those errors uncovered during
the review
 Work product size, WPS—a measure of the size of the work
product that has been reviewed (e.g., the number of UML
models, or the number of document pages, or the number of
lines of code)
 Minor errors found, Errminor—the number of errors found that
can be categorized as minor (requiring less than some pre-
specified effort to correct)
 Major errors found, Errmajor— the number of errors found that
can be categorized as major (requiring more than some pre-
specified effort to correct)
SS 2011 – SE-II – Software Quality Assurance 18
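As a minimal sketch of how these metrics are computed (the variable names mirror the definitions above; the sample values in the call are invented for illustration):

```python
# Minimal sketch of the review metrics defined above.

def review_metrics(e_prep, e_assess, e_rework, err_minor, err_major, wps):
    e_review = e_prep + e_assess + e_rework      # Ereview = Ep + Ea + Er
    err_tot = err_minor + err_major              # Errtot = Errminor + Errmajor
    defect_density = err_tot / wps               # errors per unit of work product
    return e_review, err_tot, defect_density


# Example: 4 pages of a document reviewed, effort in person-hours.
print(review_metrics(e_prep=2, e_assess=3, e_rework=4,
                     err_minor=5, err_major=1, wps=4))   # -> (9, 6, 1.5)
```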
An Example—I
 If past history indicates that
 the average defect density for a requirements model
is 0.6 errors per page, and a new requirements model
is 32 pages long,
 a rough estimate suggests that your software team
will find about 19 or 20 errors during the review of the
document.
 If you find only 6 errors, either you've done an
extremely good job in developing the requirements model
or your review approach was not thorough enough.

SS 2011 – SE-II – Software Quality Assurance 19
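The estimate is simply the historical defect density multiplied by the size of the new work product; for example:

```python
# The rough estimate above: historical defect density times document size.
avg_defect_density = 0.6    # errors per page, from past requirements reviews
pages = 32                  # size of the new requirements model
print(avg_defect_density * pages)   # -> 19.2, i.e. roughly 19 or 20 errors
```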


An Example—II
 The effort required to correct a minor model error (immediately after the
review) was found to be 4 person-hours.
 The effort required to correct a major requirements error was found to be 18
person-hours.
 Examining the review data collected, you find that minor errors occur
about 6 times more frequently than major errors. Therefore, you can
estimate that the average effort to find and correct a requirements error
during review is about 6 person-hours.
 Requirements related errors uncovered during testing require an
average of 45 person-hours to find and correct. Using the averages
noted, we get:
 Effort saved per error = Etesting – Ereviews
 45 – 6 = 39 person-hours/error
 Since 20 errors were found during the review of the requirements model,
a saving of about 780 person-hours of testing effort would be achieved.
And that’s just for requirements-related errors.

SS 2011 – SE-II – Software Quality Assurance 20
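A worked sketch of this calculation, using the slide's numbers; the 6 person-hour figure is the frequency-weighted average of the minor and major correction efforts:

```python
# Worked version of the calculation above, using the slide's numbers.
minor_fix_hours = 4        # person-hours to correct a minor error found in review
major_fix_hours = 18       # person-hours to correct a major error found in review
minor_per_major = 6        # minor errors occur ~6x more often than major errors

# Weighted average effort to find and correct an error during review:
avg_review_hours = (minor_per_major * minor_fix_hours + major_fix_hours) \
                   / (minor_per_major + 1)               # (6*4 + 18) / 7 = 6.0

testing_hours_per_error = 45
saved_per_error = testing_hours_per_error - avg_review_hours   # 45 - 6 = 39
print(saved_per_error * 20)    # 20 errors found in review -> ~780 person-hours saved
```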


Overall
 Effort expended with and without reviews

SS 2011 – SE-II – Software Quality Assurance 21


Review Types

22
Reference Model
Each of the reference model
characteristics helps to define
the level of review formality.

Technical reviews should be applied with a level of formality that is
appropriate for the product to be built, the project time line, and the
people who are doing the work.
SS 2011 – SE-II – Software Quality Assurance 23
Informal Reviews
 Informal reviews include:
 a simple desk check of a software engineering work product
with a colleague
 a casual meeting (involving more than 2 people) for the
purpose of reviewing a work product, or
 the review-oriented aspects of pair programming
 pair programming encourages continuous review as a
work product (design or code) is created.
The benefit is immediate discovery of errors and better work
product quality as a consequence.
 The effectiveness of informal reviews is considerably lower than
that of more formal approaches.

SS 2011 – SE-II – Software Quality Assurance 24


Formal Technical Reviews
 The objectives of an FTR are:
 to uncover errors in function, logic, or implementation for
any representation of the software
 to verify that the software under review meets its
requirements
 to ensure that the software has been represented according to
predefined standards
 to achieve software that is developed in a uniform manner
 to make projects more manageable
 The FTR serves as a training ground
 The FTR is actually a class of reviews that includes
walkthroughs and inspections.

SS 2011 – SE-II – Software Quality Assurance 25


Software Inspections
“Software Inspections are a disciplined engineering
practice for detecting and correcting defects in
software artifacts, and preventing their leakage into
field operations.”

Don O'Neill, Don O'Neill Consulting for SEI

SS 2011 – SE-II – Software Quality Assurance 26


Software Walkthroughs
A form of software peer review "in which a designer or
programmer leads members of the development team
and other interested parties through a software
product, and the participants ask questions and make
comments about possible errors, violation of
development standards, and other problems"

(IEEE Std. 1028-1997, IEEE Standard for Software
Reviews, clause 3.8)
SS 2011 – SE-II – Software Quality Assurance 27


What’s the Difference?
 An inspection is a more formal process than a
walkthrough and is used to collect metrics or statistics
about the software process

 A walkthrough is a more informal version of an
inspection

SS 2011 – SE-II – Software Quality Assurance 28


Review Process
(walkthrough guidelines)

29
The Inspection Process

SS 2011 – SE-II – Software Quality Assurance


The Review Meeting
 Regardless of the FTR format, every review
meeting should abide by the following constraints:
 Between three and five people (typically) should
be involved in the review.
 Advance preparation should occur but should
require no more than two hours of work for each
person.
 The duration of the review meeting should be less
than two hours.
 Focus is on a work product (e.g., a portion of a
requirements model, a detailed component design,
source code for a component)
SS 2011 – SE-II – Software Quality Assurance 31
The Players
[Figure: the players in a review meeting include the review leader, recorder,
producer, reviewer(s), standards bearer (SQA), maintenance oracle, and user
representative.]

SS 2011 – SE-II – Software Quality Assurance 32


The Players
 Producer—the individual who has developed the work
product
 informs the project leader that the work product is complete
and that a review is required
 Review leader—evaluates the product for readiness,
generates copies of product materials, distributes them
to two or three reviewers for advance preparation, and
establishes an agenda for the review meeting.
 Reviewer(s)—expected to spend between one and two
hours reviewing the product, making notes, and
otherwise becoming familiar with the work.
 Recorder—reviewer who records (in writing) all important
issues raised during the review.

SS 2011 – SE-II – Software Quality Assurance 33


Review Guidelines
 Review the product, not the producer.
 Set an agenda and maintain it.
 Limit debate and rebuttal.
 Point out problem areas clearly, but don't attempt to solve
every problem noted.
 Take written notes.
 Limit the number of participants and insist upon advance
preparation.
 Develop a checklist for each product that is likely to be
reviewed.
 Allocate resources and schedule time for FTRs.
 Conduct meaningful training for all reviewers.
 Review your early reviews.
SS 2011 – SE-II – Software Quality Assurance 34
Sample-Driven Reviews (SDRs)
 SDRs attempt to quantify those work products that are
primary targets for full FTRs.
To accomplish this …
 Inspect a fraction ai of each software work product, i.
Record the number of faults, fi found within ai.
 Develop a gross estimate of the number of faults within
work product i by multiplying fi by 1/ai.
 Sort the work products in descending order according to
the gross estimate of the number of faults in each.
 Focus available review resources on those work products
that have the highest estimated number of faults (a minimal
sketch of this prioritization follows the slide).

35
SS 2011 – SE-II – Software Quality Assurance
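A minimal Python sketch of this prioritization (the work product names and sampled fractions below are invented for illustration):

```python
# Minimal sketch of SDR prioritization.

def prioritize_for_full_ftr(samples):
    """samples: list of (work_product, sampled_fraction_ai, faults_found_fi).
    Returns (work_product, estimated_total_faults) sorted in descending order."""
    estimates = [(name, faults / fraction) for name, fraction, faults in samples]
    return sorted(estimates, key=lambda item: item[1], reverse=True)


# Invented sample data: 25% of each work product was inspected.
samples = [("requirements model", 0.25, 3),
           ("architecture design", 0.25, 1),
           ("component design", 0.25, 5)]
for name, estimate in prioritize_for_full_ftr(samples):
    print(f"{name}: ~{estimate:.0f} estimated faults")
```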
Review Options Matrix
                                 IPR*    WT     IN     RRR
trained leader                   no      yes    yes    yes
agenda established               maybe   yes    yes    yes
reviewers prepare in advance     maybe   yes    yes    yes
producer presents product        maybe   yes    no     no
“reader” presents product        no      no     yes    no
recorder takes notes             maybe   yes    yes    yes
checklists used to find errors   no      no     yes    no
errors categorized as found      no      no     yes    no
issues list created              no      yes    yes    yes
team must sign off on result     no      yes    yes    maybe

* IPR—informal peer review   WT—walkthrough   IN—inspection   RRR—round robin review
SS 2011 – SE-II – Software Quality Assurance 36
Metrics Derived from Reviews
inspection time per page of documentation
inspection time per KLOC or FP
inspection effort per KLOC or FP
errors uncovered per reviewer hour
errors uncovered per preparation hour
errors uncovered per SE task (e.g., design)
number of minor errors (e.g., typos)
number of major errors
(e.g., nonconformance to req.)
number of errors found during preparation

SS 2011 – SE-II – Software Quality Assurance 37


Homework
 Discuss why pair programming can be
characterized as a continuous desk check.
 Is pair programming more cost-effective than one
person doing the same job alone?
 Is pair programming a waste of expensive
resources due to its inherent redundancy?
 Search Review Checklists (p. 425)

SS 2011 – SE-II – Software Quality Assurance 38


Assignment 1
 Prepare a 500-word note on RRR.
 Take some requirements and at least one
diagram from each model type in the RA model
and perform a TR on it against a self-
created checklist.
 Individual work
 Deadline for CR: 28.04.2011, 11.59 pm
 All assignments will be submitted by the CR in a
single zip file.

SS 2011 – SE-II – Software Quality Assurance 39
