Function Points (FP) - measures the functionality of the software as an indication of its complexity. By analogy, a house can be measured in square meters (LOC) or by how many bedrooms or bathrooms it has (FP).
Converting to LOC = adjusted function points x LOC per adjusted function point
= AFP x LOC/AFP
* refer to separate table/file
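As a sketch of the conversion, assuming illustrative LOC-per-AFP factors (the real values come from the separate table/file the note refers to):

```python
# Sketch: converting adjusted function points (AFP) to an LOC estimate.
# The LOC-per-AFP factors below are illustrative assumptions, not the
# values from the course's separate table/file.
LOC_PER_AFP = {
    "C": 128,
    "Java": 53,
    "SQL": 12,
}

def estimate_loc(afp, language):
    """LOC = AFP x (LOC per AFP) for the chosen language."""
    return afp * LOC_PER_AFP[language]

print(estimate_loc(100, "Java"))  # 100 AFP in Java -> 5300 LOC
```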
5. Refinement - process of elaboration where the designer provides successively more detail for each design component
6. Control Hierarchy - or program structure
7. Data Structure - logical representation of the relationships among data elements
8. Software Procedure - precise specification of processing
9. Refinement - elaboration of detail for all abstractions
10. Refactoring - reorganization technique that simplifies the design
Functional Independence:
1. Cohesion - one and only one function; a single component forming a meaningful unit; a characteristic of an individual module
2. Coupling - degree of connectivity or relationship to other modules of the system
Refinement - process of elaboration producing correct programs and simplifying existing programs in a way that enables formal verification.
Refactoring - a change that doesn't alter the behavior of the code or design yet improves its internal structure. The existing design is examined for redundancy, unused design elements, inefficient algorithms, poor or inappropriate data structures, or design failures that can yield a better design.
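A minimal Python illustration (hypothetical code, not from the notes) of a refactoring that removes redundancy without altering behavior:

```python
# Before: the discount logic is duplicated across redundant branches.
def total_before(amount, is_member):
    if is_member:
        return round(amount - amount * 0.10, 2)
    else:
        return round(amount - amount * 0.0, 2)

# After: same observable behavior, redundancy removed, rate made explicit.
def total_after(amount, is_member):
    rate = 0.10 if is_member else 0.0
    return round(amount * (1 - rate), 2)

# The refactoring preserves behavior for the inputs we check:
for amount in (0, 19.99, 100):
    for member in (True, False):
        assert total_before(amount, member) == total_after(amount, member)
```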
Product Metrics, Process and Project Metrics
Measure - quantitative indication of the extent, amount, dimension, capacity or size of some attribute of a product or process
Metric - a quantitative measure of the degree to which a system, component or process possesses a given attribute
Indicator - a metric or combination of metrics that provides insight into a software process, project or the product itself
Measurement Process:
Formulation - derivation of software measures and metrics appropriate for the representation of the software being considered.
Collection - mechanism used to accumulate the data required to derive the formulated metrics.
Analysis - computation of metrics and application of mathematical tools.
Interpretation - evaluation of metrics to gain insight into the quality of the representation.
Feedback - recommendations derived from interpretation.
5. Completeness - indirect implication about the degree to which the abstraction or design component can be reused.
6. Cohesion - achieves a single, well-defined purpose
7. Volatility - likelihood that a change will occur
Characteristics by Berard:
1. Localization - way in which information is concentrated in a program
2. Encapsulation - packaging of data and processing
3. Information Hiding - operational details are hidden behind a secure interface
4. Abstraction - focus on essential details
Class-Oriented Metrics by Chidamber & Kemerer:
1. Weighted methods per class
2. Depth of the inheritance tree
3. Number of children
4. Coupling between object classes
5. Lack of cohesion in methods
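A rough Python sketch of three of the Chidamber & Kemerer metrics on a hypothetical toy hierarchy (WMC here weights every method as 1, a common simplification):

```python
import inspect

class Shape:                   # root of a toy class hierarchy
    def area(self): pass

class Polygon(Shape):
    def perimeter(self): pass

class Square(Polygon):
    def area(self): pass       # overridden operation
    def diagonal(self): pass   # added operation

def wmc(cls):
    # Weighted Methods per Class, each method weighted 1:
    # count the methods defined directly in the class body.
    return sum(1 for v in vars(cls).values() if inspect.isfunction(v))

def dit(cls):
    # Depth of the Inheritance Tree: ancestors, excluding `object`.
    return len(cls.__mro__) - 2

def noc(cls):
    # Number Of Children: immediate subclasses.
    return len(cls.__subclasses__())

print(wmc(Square), dit(Square), noc(Shape))  # 2 2 1
```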
Class-Oriented Metrics by Lorenz & Kidd:
1. Class size
2. Number of operations overridden by a subclass
3. Number of operations added by a subclass
4. Specialization index
Class-Oriented Metrics from the MOOD metrics suite:
1. Method inheritance factor
2. Coupling factor
3. Polymorphism factor
Operation-Oriented Metrics by Lorenz & Kidd:
1. Average operation size
2. Operation complexity
3. Average number of parameters per operation
Component-Level Design Metrics:
1. Cohesion - a function of data objects and the focus of their definition
2. Coupling - a function of input and output parameters, global variables and modules
3. Complexity - e.g. cyclomatic complexity
Interface Design Metrics: Layout appropriateness - a function of the layout entities and their geographic position
Code Metrics: Halstead's laws - a comprehensive collection of metrics, all predicated on the number (count and occurrence) of operators and operands within a component or program.
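A small sketch of the core Halstead measures given operator/operand token streams; the tokens below are a made-up example for the statement `z = x + x * y`:

```python
import math

def halstead(operators, operands):
    # operators / operands are full token streams, repetitions included.
    n1, n2 = len(set(operators)), len(set(operands))  # distinct tokens
    N1, N2 = len(operators), len(operands)            # total occurrences
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)
    return vocabulary, length, round(volume, 2)

# tokens of:  z = x + x * y
print(halstead(["=", "+", "*"], ["z", "x", "x", "y"]))  # (6, 7, 18.09)
```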
Maintenance Metrics:
Software Maturity Index (SMI) = [MT - (Fa + Fc + Fd)] / MT
As SMI approaches 1.0 the product begins to stabilize
Where: MT = number of modules in the current release
Fa = number of modules in current release that have been added
Fc = number of modules in the current release that have been changed
Fd = number of modules from the preceding release that were deleted in the current release
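A one-line check of the formula with made-up release numbers:

```python
def smi(mt, fa, fc, fd):
    # Software Maturity Index: fraction of current-release modules
    # untouched since the preceding release.
    return (mt - (fa + fc + fd)) / mt

# e.g. 100 modules; 5 added, 10 changed, 3 deleted this release
print(smi(100, 5, 10, 3))  # 0.82
```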
Why do we measure?
Assess the status of an ongoing project
Track potential risk
Uncover problem areas before they go critical
Adjust workflow or tasks
Evaluate the project team's ability to control the quality of software work products
Process of Measurement: measure the efficacy of the software process indirectly, based on a derived set of metrics that can be obtained from the process.
Process Metrics:
Quality-related - focused on the quality of work products and deliverables
Product-related - the production of work products relative to the effort expended
Statistical SQA Data - error categorization and analysis
Defect Removal Efficiency - propagation of errors from process activity to activity
Reuse Data - number of components produced and their degree of reusability
Project Metrics:
Inputs - measures of the resources
Outputs - measures of the deliverables or products created
Results - effectiveness of the deliverables
Typical Project Metrics:
Effort/time per software engineering task
Errors uncovered per review hour
Scheduled vs Actual milestone dates
Changes (number) and their characteristics
Distribution of effort on software engineering tasks
Typical Size-Oriented (per KLOC) or Function-Oriented (per FP) Metrics:
Errors
Defects
Price
Pages of documentation
KLOC or FP per person-month
Object-Oriented Metrics:
Number of scenario scripts or use-cases
Number of support classes
Average number of support classes per key class or analysis class
Number of subsystems
Measuring Quality:
Correctness - operates according to the specification
Maintainability - amenable to change
Integrity - impervious to outside attack
Usability - ease of use
Defect Removal Efficiency (DRE) = E / (E + D)
Where: E is the number of errors found before delivery of the software to the end-user
D is the number of defects found after delivery
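For example, with made-up counts:

```python
def dre(e, d):
    # Defect Removal Efficiency: errors caught before delivery (E)
    # over all defects found (E before delivery + D after delivery).
    return e / (e + d)

# e.g. 90 errors found in reviews/testing, 10 defects reported by users
print(dre(90, 10))  # 0.9
```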
Test-Related Attributes:
A part of the larger Validation (are we building the right product?) and
Verification (are we building the product right?) process
Designing Testable Software:
Operability - operates cleanly
Observability - the results of each test case are readily observed
Controllability - the degree to which testing can be automated and optimized
Decomposability - independent components can be targeted for testing
Simplicity - reduce complex architecture and logic
Stability - few changes are requested during testing
Understandability - of the design
A good test:
Is not redundant
Has a high probability of finding an error
Is "best of breed"
Is neither too simple nor too complex
Testing can be viewed either internally (verifying that internal operations are performed according to specification) or externally (demonstrating that each function is fully operational).
White-box Testing: testing the internal workings of the software (i.e. loop operations, program sequences, conditional operations, data structures)
The goal is to ensure that all statements and conditions have been executed at least once
Logic errors and incorrect assumptions are inversely proportional to a path's execution probability
Cyclomatic Complexity - a number of industry studies have indicated that the higher the V(G), the higher the probability of errors. V(G) gives the upper bound (maximum) on the number of tests that must be designed and executed to guarantee coverage of all program statements. The number of regions in the flow graph also corresponds to the cyclomatic complexity.
Cyclomatic Complexity = V(G) = E - N + 2
Where: E = number of flow graph edges, indicated by arrows
N = number of flow graph nodes
Regions = the areas enclosed by the edges (plus the outer region)
V(G) = P + 1
Where: P = number of predicate nodes contained in the flow graph G.
Independent Path - any path through the program that introduces at least one new set of processing statements or a new condition; it must traverse at least one edge that has not been traversed before.
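The edge/node and predicate formulas agree; a sketch on a hypothetical flow graph (an if/else followed by a while loop):

```python
from collections import Counter

# Hypothetical flow graph: node 1 is an if/else predicate,
# node 5 is a while-loop predicate, node 6 is the exit.
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 4), (5, 6)]
nodes = {n for edge in edges for n in edge}

E, N = len(edges), len(nodes)
v_edges_nodes = E - N + 2                         # V(G) = E - N + 2

out_degree = Counter(src for src, _ in edges)
P = sum(1 for d in out_degree.values() if d > 1)  # predicate nodes
v_predicates = P + 1                              # V(G) = P + 1

print(v_edges_nodes, v_predicates)  # 3 3
```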
Black-box Testing: a method of software testing that examines the functionality of an application
without peering into its internal structures or workings.
It addresses the following questions: how is functional validity tested; how are system behavior and performance tested; what classes of input will make good test cases; is the system particularly sensitive to certain input values; how are the boundaries of a data class isolated; what data rates and data volumes can the system tolerate; and what effect will specific combinations of data have on system operation?
Boundary Value Analysis (BVA): errors are more likely to occur at the boundaries of the input domain than at its center. It complements equivalence partitioning.
Equivalence Partitioning - a test technique that uses a minimal number of test inputs by dividing the input domain into classes of equivalent values. Ideally, you use the lower and upper boundaries and the median as your test inputs.
Say you want to test for the numbers 0 - 999: there is no need to test every number; 0, 499 and 999 are enough, since all values in the range belong to the same equivalence class.
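A sketch of the two techniques together, against a hypothetical validator that accepts integers 0 - 999:

```python
def accepts(n):
    # Hypothetical system under test: valid inputs are 0..999.
    return 0 <= n <= 999

# Equivalence partitioning: one representative per class is enough.
valid_representatives = [0, 499, 999]  # lower bound, median, upper bound
# Boundary value analysis: also probe just outside each boundary.
invalid_boundaries = [-1, 1000]

assert all(accepts(v) for v in valid_representatives)
assert not any(accepts(v) for v in invalid_boundaries)
print("all partition/boundary checks passed")
```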
Orthogonal Array Testing: systematically sampling the possible combinations of test-case values without exhaustively enumerating them all.
Model-Based Testing: creating a test model, noting the expected outputs, comparing actual outputs against the expected results, and then taking the necessary corrective actions.
Performance-related Attributes:
Correctness - satisfies the functional requirements specification; it is either an absolute yes or no quality
Reliability - failure-free operation; common metrics include defect density and mean time to failure.
Reliability = MTBF / (1+MTBF)
Availability = MTBF/ (MTBF+MTTR)
where: MTBF = Mean Time Between Failure; MTTR = Mean Time To Repair
Usability - aka user-friendliness; expected users find the system easy to use.
Computer Performance - amount of work accomplished by a computer system (i.e. short response time, high throughput, low utilization, high bandwidth, short transmission time, high availability of the system).
Performance Tuning - improvement of system performance (e.g. removal of a bottleneck, aka the critical part)
Performance Equation - the total amount of time (t) required to execute a particular benchmark.
t = N * C / f or P = I * f / N
Where: P = 1/t is the performance in terms of time-to-execute
N = number of instructions actually executed (the instruction path length)
f = clock frequency in cycles per second
C = average cycles per instruction (CPI) for this benchmark
I = 1/C, the average instructions per cycle (IPC) for this benchmark
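A quick check of t = N * C / f with made-up benchmark numbers:

```python
def exec_time(n, cpi, f):
    # t = N * C / f: instructions executed x average cycles per
    # instruction, divided by the clock frequency in cycles per second.
    return n * cpi / f

# e.g. 2e9 instructions at CPI = 1.5 on a 3 GHz clock
t = exec_time(2e9, 1.5, 3e9)
print(t)  # 1.0 second
```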
Product-level:
1. CSCI (Computer Software Configuration Item) - a major functional element; produced during the coding and integration phases of a software development project
COE Attributes:
1. Adaptability - responds to changes in the internal and external configuration
2. Architecture - software-reliant structure of the system, composed of software components
3. Functional - capabilities
4. Interoperability - interoperation among different systems, versions and development environments
5. Development & Test - development and test environments for both the candidate COE and applications developed to execute within the COE
6. Hardware & Software Platform - computing environment upon which the COE will execute
7. Information Assurance - secure operation of the COE
Data Collection - the most time- and labor-intensive step in an empirical study; gather or collect essential data while maintaining the integrity of the research; collect useful data that can be utilized to answer or test hypotheses
**Hawthorne effect - people act differently when they are being observed
***Ensure: Reliability, Validity
Principles of Software Project Scheduling:
1. Compartmentalization - the project is decomposed into manageable activities and tasks.
2. Interdependency - the relationships between tasks.
3. Time Allocation - each task must be allocated a number of time units, i.e. start and completion dates.
4. Effort Validation - a defined number of staff for each task.
5. Responsibilities - each task should be assigned to a specific team member.
6. Outcomes - each task should have a defined result.
7. Milestones - each task or group of tasks should be associated with a milestone.
3. Program Evaluation and Review Technique (PERT) - a mathematical analysis that uses a weighted average to calculate activity durations. Also called the three-time-estimates technique: optimistic (a), most likely (m) and pessimistic (b) times.
te (Expected) = (a + 4m + b) / 6
SD = (b - a) / 6
Variance = SD^2
Variance_CP = sum of the variances of all activities on the critical path
STDEV_CP = sqrt(Variance_CP)
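A sketch of these estimates for a hypothetical critical path of three activities, each given as (optimistic a, most likely m, pessimistic b) days:

```python
import math

def pert(a, m, b):
    te = (a + 4 * m + b) / 6  # expected duration
    sd = (b - a) / 6          # standard deviation
    return te, sd

critical_path = [(2, 4, 6), (3, 5, 13), (1, 2, 3)]  # hypothetical (a, m, b)

expected_cp = sum(pert(*act)[0] for act in critical_path)
variance_cp = sum(pert(*act)[1] ** 2 for act in critical_path)
stdev_cp = math.sqrt(variance_cp)

print(expected_cp, round(stdev_cp, 2))  # 12.0 1.83
```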