
Software Engineering Reviewer - Midterm:

Size Related Attributes:


Source Lines of Code (SLOC) - a software metric used to measure the size of a software program by
counting the number of lines in the text of the program's source code. SLOC is used to
predict the amount of effort required to develop a program.

Types of SLOC measures:


1. Physical SLOC - counts the actual lines of code
2. Logical SLOC - measures the number of statements in the program

Function Points (FP) - measure the functionality of the software as an indication of its complexity. By
analogy, a house can be measured in square meters (like LOC) or by how many bedrooms or bathrooms
it has (like FP).

FPs could be Simple (3), Average (4) or Complex (6)

Function Point Computation:


1. Count the Functions in Each Category
2. Multiply by the Weighting Factors
3. Get the summation to arrive at the Raw FPs

Complexity Adjustment Factor (CAF) = 0.65 + (0.01 x N)


Where: N = summation of the Weighted Environmental Factors

Adjusted Function Points (AFP) = FP (raw) x CAF

Converting to LOC = adjusted function points x LOC per adjusted function point
= AFP x LOC/AFP
* refer to separate table/file
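A minimal Python sketch of the computation above. The per-level weights come from the Simple (3) / Average (4) / Complex (6) values given earlier; the function counts, the 14 environmental factor ratings and the LOC/AFP conversion factor are hypothetical stand-ins for values you would take from the separate table/file.

WEIGHTS = {"simple": 3, "average": 4, "complex": 6}   # weighting factors from above

def raw_fp(counts):
    # Steps 1-3: count functions per category, multiply by the weighting
    # factors, and sum to arrive at the raw FPs.
    return sum(WEIGHTS[level] * n for level, n in counts.items())

def adjusted_fp(fp_raw, env_factors):
    # CAF = 0.65 + (0.01 x N), N = sum of the weighted environmental factors
    # (assumed here to be the usual 14 factors, each rated 0..5).
    caf = 0.65 + 0.01 * sum(env_factors)
    return fp_raw * caf

# Hypothetical project: 5 simple, 4 average, 2 complex functions.
fp = raw_fp({"simple": 5, "average": 4, "complex": 2})   # 15 + 16 + 12 = 43
afp = adjusted_fp(fp, [3] * 14)                          # CAF = 0.65 + 0.42 = 1.07
loc = afp * 53   # assumed LOC per AFP for the target language (from the table)
print(fp, round(afp, 2), round(loc))                     # 43 46.01 2439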

Design Specification Models:


1. Software Design - blueprint of the software
2. Data Design - transforming analysis information model into data structures
3. Architectural Design - relationships among the major structural elements; the design patterns that can
be used to achieve the requirements that have been defined for the system and the constraints that
affect it.
4. Interface Design - how software elements communicate with each other, with other systems and with
users, i.e. data flow, control flow

Design Fundamental Concepts:


1. Abstraction - focus on solving a problem without being concerned about irrelevant lower-level
details. Aimed to reduce duplication of information
2. Software Architecture - overall structure of the software components
3. Modularity - examining the components independently of one another; think divide and
conquer
4. Information Hiding - data and procedures contained within a module are inaccessible to
modules that have no need for such information

5. Refinement - process of elaboration where the designer provides successively more details for
each design component
6. Control Hierarchy or Program Structure
7. Data Structure - representation of the logical relationships among data elements
8. Software Procedure - precise specification of processing
9. Refinement - elaboration of detail for all abstractions
10. Refactoring - reorganization technique that simplifies the design

Functional Independence:
1. Cohesion - one and only one function; a single component forming a meaningful unit;
a characteristic of an individual module
2. Coupling - degree of connectivity or relationship to other modules of the system

Cohesions could be:


Coincidental - parts are unrelated [lowest range of cohesion]
Logical - have similar functions
Temporal - related by time
Procedural - related by order of functions
Communicational - access the same data
Sequential - output of one is the input of the other
Functional - sequential with complete, related functions [highest range of cohesion]

Coupling could be:


No dependencies
Loosely coupled - some dependencies
Highly coupled - many dependencies

Range of Coupling: [low to high]


Uncoupled - independent modules
Data - communicating via parameter passing of only the data the recipient needs [best type]
Stamp - communicating via a data structure passed as a parameter which holds more info
than the recipient needs
Control - 2 modules communicating via a control flag
External - tied to an environment external to the software
Common - 2 modules communicating via global data
Content - when a module uses and/or alters data in another module [worst type]
(data vs common coupling are contrasted in the sketch below)
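To make the ends of this range concrete, here is a small Python sketch (all names hypothetical) contrasting data coupling with common coupling:

# Data coupling [best]: the callee receives exactly the values it needs,
# passed as parameters; nothing else is shared.
def gross_pay(hours_worked, hourly_rate):
    return hours_worked * hourly_rate

# Common coupling [worse]: modules communicate through global data, so any
# module that mutates payroll_state silently affects every other module.
payroll_state = {"hours_worked": 40, "hourly_rate": 15.0}

def gross_pay_common():
    return payroll_state["hours_worked"] * payroll_state["hourly_rate"]

print(gross_pay(40, 15.0), gross_pay_common())   # 600.0 600.0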

Refinement - process of elaboration producing correct programs and simplifying existing programs,
enabling formal verification.

Refactoring - a change that doesn't alter the behavior of the code or design yet improves the
internal structure. The existing design is examined for: redundancy, unused design elements,
inefficient algorithms, poor or inappropriate data structures, or design failures that can yield a better
design.

Product Metrics, Process and Project Metrics
Measure - quantitative indication of extent, amount, dimension, capacity or size of some attribute
of a product or process
Metric - a quantitative measure of the degree to which a system, component or process possesses a
given attribute
Indicator - a metric or combination of metrics that provide insight into a software process, project
or the product itself

Measurement Process:
Formulation - derivation of software measures and metrics appropriate for the
representation of the software that is being considered.
Collection - mechanism used to accumulate the data required to derive the formulated metrics.
Analysis - computation of metrics and application of mathematical tools
Interpretation - evaluation of metrics to gain insight into the quality of the representation.
Feedback - recommendations derived from interpretation.

Goal-Oriented Software Measurement:


Analyze <name of activity or attribute to be measured>
For the purpose of <overall objective>
With respect to <aspects of the activity or attribute to be considered>
From the viewpoint of <people of interest in the measurement>
In the context of <environment in which the measurement takes place>

Metrics should be:


Simple and computable
Empirically and intuitively persuasive
Consistent and objective
Consistent in units and dimension
Programming language independent
Effective mechanism for quality feedback

Metrics for the Requirements Model:


1. Function-based metrics - function points as a factor of the specification
2. Specification metrics - number of requirements by type

Metrics for Object-Oriented Design by Whitmire:


1. Size - defined in four views:
a. population
b. volume
c. length
d. functionality
2. Complexity - how classes of the object-oriented design are interrelated to one another
3. Coupling - physical connections between elements of the OO design
4. Sufficiency - degree to which an abstraction possesses the features required of it, or to which
a design component possesses features in its abstraction, from the point of view of the current
application.

5. Completeness - indirect implication about the degree to which the abstraction or design
component can be reused.
6. Cohesion - to achieve a single, well-defined purpose
7. Volatility - likelihood that a change will occur
Characteristics by Berard:
1. Localization - way in which information is concentrated in a program
2. Encapsulation - packaging of data and processing
3. Information Hiding - operational details are hidden by a secure interface
4. Abstraction - focus on essential details
Class-Oriented Metrics by Chidamber & Kemerer:
1. Weighted methods per class
2. Depth of the inheritance tree
3. Number of children
4. Coupling between object classes
5. Lack of cohesion in methods
Class-Oriented Metrics by Lorenz & Kidd:
1. Class size
2. Number of operations overridden by a subclass
3. Number of operations added by a subclass
4. Specialization index
Class-Oriented Metrics from the MOOD (Metrics for Object-Oriented Design) Metrics Suite:
1. Method inheritance factor
2. Coupling factor
3. Polymorphism factor
Operation-Oriented Metrics by Lorenz & Kidd:
1. Average operation size
2. Operation complexity
3. Average number of parameters per operation
Component-Level Design Metrics:
1. Cohesion - function of data objects and the focus of their definition
2. Coupling - function of input and output parameters, global variables and modules
3. Complexity - e.g. Cyclomatic complexity
Interface Design Metrics: Layout Appropriateness - a function of layout entities and their geographic position
Code Metrics: Halstead's Laws - a comprehensive collection of metrics all predicated on the number
(count and occurrence) of operators and operands within a component or program.
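As an illustration, the core Halstead measures are derived from four such counts: distinct operators (n1), distinct operands (n2), and their total occurrences (N1, N2). The token tally below, for the statement a = b + c * 2, is hypothetical:

import math

operators = ["=", "+", "*"]        # N1 = 3 occurrences, n1 = 3 distinct
operands = ["a", "b", "c", "2"]    # N2 = 4 occurrences, n2 = 4 distinct

n1, n2 = len(set(operators)), len(set(operands))
N1, N2 = len(operators), len(operands)

vocabulary = n1 + n2                         # n = n1 + n2
length = N1 + N2                             # N = N1 + N2
volume = length * math.log2(vocabulary)      # V = N * log2(n)
print(vocabulary, length, round(volume, 2))  # 7 7 19.65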

Maintenance Metrics:
Software Maturity Index (SMI) = [MT - (Fa + Fc + Fd)] / MT
As SMI approaches 1.0 the product begins to stabilize
Where: MT = number of modules in the current release
Fa = number of modules in current release that have been added
Fc = number of modules in the current release that have been changed
Fd = number of modules from the preceding release that were deleted in the current release
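A small Python sketch of the SMI computation (the module counts are hypothetical):

def software_maturity_index(mt, fa, fc, fd):
    # SMI = [MT - (Fa + Fc + Fd)] / MT; approaches 1.0 as the product stabilizes.
    return (mt - (fa + fc + fd)) / mt

# Hypothetical release: 120 modules, 4 added, 6 changed, 2 deleted.
print(round(software_maturity_index(120, 4, 6, 2), 3))   # 0.9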

Why do we measure?
Assess the status of an ongoing project
Track potential risk

Uncover problem areas before they go critical
Adjust workflow or tasks
Evaluate the project team's ability to control the quality of software work products
Process of Measurement: the efficacy of the software process is measured indirectly, based on a
set of metrics derived from the process.

Process Metrics:
Quality-related - focused on the quality of work products and deliverables
Product-related - the production of work products related to effort expended
Statistical SQA Data - error categorization and analysis
Defect Removal Efficiency - propagation of errors from process activity to activity
Reuse Data - number of components produced and their degree of reusability
Project Metrics:
Inputs - measures of the resources
Outputs - measures of the deliverables or products created
Results - effectiveness of the deliverables
Typical Project Metrics:
Effort/time per software engineering task
Errors uncovered per review hour
Scheduled vs Actual milestone dates
Changes (number) and their characteristics
Distribution of effort on software engineering tasks
Typical Size-Oriented [in KLOC] or Function-Oriented [in FP] Metrics:
Errors
Defects
Price
Pages of documentation
KLOC or FP per person-month
Object-Oriented Metrics:
Number of scenario scripts or use-cases
Number of support classes
Average number of support classes per key class or analysis class
Number of subsystems
Measuring Quality:
Correctness - operates according to the specification
Maintainability - amenable to change
Integrity - impervious to outside attack
Usability - ease of use
Defect Removal Efficiency (DRE) = E / (E+D)
Where: E = number of errors found before delivery of the software to the end-user
D = number of defects found after delivery
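A one-function Python sketch of the DRE formula (the error/defect counts are hypothetical):

def defect_removal_efficiency(errors_before, defects_after):
    # DRE = E / (E + D); 1.0 means every defect was caught before delivery.
    return errors_before / (errors_before + defects_after)

# Hypothetical project: 90 errors found before release, 10 defects found after.
print(defect_removal_efficiency(90, 10))   # 0.9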

Test-Related Attributes:
A part of the larger Validation (are we building the right product?) and
Verification (are we building the product right?) process

Design a Testable Software:
Operability - operates cleanly
Observability - results of each test case are readily observed
Controllability - degree to which testing can be automated and optimized
Decomposability - testing can be targeted
Simplicity - reduce complex architecture and logic
Stability - few changes are requested during testing
Understandability - of the design
A good test is:
Not redundant
Has a high probability of finding an error
"Best of breed"
Neither simple nor complex
Testing can be viewed either internally (internal operations are performed according to specification) or
externally (demonstrating that each function is fully operational).

White-box Testing: testing the internal workings of the software (i.e. loop operations, program
sequences, conditional operations, data structures)
Goal is to ensure that all statements and conditions have been executed at least once
Logic errors and incorrect assumptions are inversely proportional to a path's execution
probability

Cyclomatic Complexity - a number of industry studies have indicated that the higher the V(G), the
higher the probability of errors. V(G) is the upper bound (max) on the number of tests that must be
designed and executed to guarantee coverage of all program statements. The number of regions
corresponds to the Cyclomatic Complexity.
Cyclomatic Complexity = V(G) = (E-N) +2
Where: E = number of flow graph edges indicated by arrows
N = number of flow graph nodes
Regions = the areas enclosed by the edges

V(G) = P+1
Where: P = number of predicate nodes contained in the flow graph G.
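Both formulas above agree on a small example; the flow graph counts in this Python sketch are hypothetical:

def cyclomatic_complexity(edges, nodes):
    # V(G) = (E - N) + 2
    return edges - nodes + 2

def cyclomatic_from_predicates(predicates):
    # V(G) = P + 1
    return predicates + 1

# Hypothetical flow graph: 11 edges, 9 nodes, 3 predicate (decision) nodes.
print(cyclomatic_complexity(11, 9))      # 4
print(cyclomatic_from_predicates(3))     # 4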

Independent Paths - any path through the program that introduces at least one new set of processing
statements or a new condition. It must move along at least one edge that has not been traversed before.

Black-box Testing: a method of software testing that examines the functionality of an application
without peering into its internal structures or workings.

It addresses how functional validity is tested, how system behavior and performance are tested, what
classes of input will make good test cases, whether the system is particularly sensitive to certain input
values, how the boundaries of a data class are isolated, what data rates and data volumes the system can
tolerate, and what effect specific combinations of data will have on system operation.

Boundary Value Analysis (BVA): errors are more likely to occur at the boundaries of the
input domain than at its center. It complements Equivalence Partitioning.

6
Equivalence Partitioning - a test technique that uses an optimal number of test inputs. Ideally, you use
the lower and upper boundaries and the median as your test inputs.
Say, if you want to test for numbers 0 - 999, there is no need to test all the numbers;
0...499...999 are enough as they all belong to the same class of numbers.
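A small Python sketch of picking test inputs for the 0 - 999 range above, combining equivalence partitioning with boundary value analysis (the helper and its choice of neighbors are illustrative, not a fixed rule):

def test_inputs(lo, hi):
    # For one valid partition [lo, hi]: the boundaries and their inner
    # neighbors (BVA), one representative median value (EP), and two
    # invalid values just outside the partition.
    median = (lo + hi) // 2
    valid = [lo, lo + 1, median, hi - 1, hi]
    invalid = [lo - 1, hi + 1]
    return valid, invalid

print(test_inputs(0, 999))   # ([0, 1, 499, 998, 999], [-1, 1000])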

Orthogonal Array Testing: a systematic way of testing combinations of input values when exhaustively
mapping all possible combinations of the test case is impractical.

Model Based Testing: creating a test model, noting the expected outputs, comparing actual outputs
against the expected results, and then taking the necessary corrective actions.

Performance-related Attributes:
Correctness - satisfies the functional requirements specification; it is an absolute yes-or-no quality
Reliability - failure-free operation; common metrics include: defect density and mean time to failure.
Reliability = MTBF / (1 + MTBF)
Availability = MTBF / (MTBF + MTTR)
where: MTBF = Mean Time Between Failures; MTTR = Mean Time To Repair
Usability - aka user-friendliness; expected users find the system easy to use.
Computer Performance - amount of work accomplished by a computer system (i.e. short response time,
high throughput, low utilization, high bandwidth, short transmission time, high availability of the system).
Performance Tuning - improvement of system performance (e.g. removal of a bottleneck, aka the critical part)
Performance Equation - total amount of time (t) required to execute a particular benchmark.
t = N * C / f or P = I * f / N
Where: P = 1/t is the performance in terms of time-to-execute
N = number of instructions actually executed (the instruction path length)
f = clock frequency in cycles per second
C = 1/I = average cycles per instruction (CPI) for this benchmark
I = 1/C = average instructions per cycle (IPC) for this benchmark
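A small Python sketch applying the Availability and Performance equations above (the MTBF/MTTR figures and benchmark numbers are hypothetical):

def availability(mtbf, mttr):
    # Availability = MTBF / (MTBF + MTTR)
    return mtbf / (mtbf + mttr)

def execution_time(n_instructions, cpi, clock_hz):
    # t = N * C / f
    return n_instructions * cpi / clock_hz

print(round(availability(mtbf=500.0, mttr=2.0), 4))          # 0.996
print(execution_time(2_000_000_000, cpi=1.5, clock_hz=3e9))  # 1.0 (second)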

Software Effort Attributes


Type of Labor:
1. Direct [charged directly]
2. Indirect [not directly charged]
Hour Information - attribute that describes the wage or pay type of employees.
Could be Salaried or Hourly; Compensated or Uncompensated.

Employment class - reporting organization and contractor categories.


Could be Full or Part-time; Contractual, Sub-contractual, Consultants

Labor Class - functional job positions

Activity Class could be:


Development - new software development project
Maintenance - problem repair after the new software has been released

Product-level:
1. CSCI (Computer Software Configuration Item) - Major Functional Element, Software development
project phase, coding, integration

2. Build-level Functions (Customer release) - e.g. alpha, beta releases

3. System-level Functions - training, support

Common Operating Environment (COE) - promotes interoperability and cross-platform capabilities
among the organization's devices. It aims to achieve a balance between unconstrained innovation and
standardization.

COE Attributes:
1. Adaptability - respond to changes in the internal and external configuration
2. Architecture - software-reliant structure of the system which composes software components
3. Functional - capabilities
4. Interoperability - interoperation among different systems, versions, development environments
5. Development & Test - development and test environments for both the candidate COE and applications
developed to execute within the COE
6. Hardware & Software Platform - computing environment upon which the COE will execute
7. Information Assurance - secured operation of the COE

Data Collection - the most time- and labor-intensive step in an empirical study; gather essential data
while maintaining the integrity of the research; collect useful data to be utilized to answer/test hypotheses

Data Collection methods:


1. Qualitative - experiences, emotions, social phenomena [e.g. interviews, participant observations,
FGDs, Questionnaires/Testing]
2. Quantitative - aggregation, inferences [surveys, polls, automatic methods: raw data, aggregated data,
inferred data]
*Mixed method - e.g. merging interviews with analysis of log files, or sequencing participant observation
followed by simulation

**Hawthorne effect - people act differently when they are being observed
***Ensure: Reliability, Validity

SOFTWARE DEVELOPMENT RESOURCE ESTIMATION


Project - performed by people, constrained by limited resources, planned, executed and controlled.
It is also a temporary endeavor undertaken to create a unique product or service.
Project Plan - a formal, approved document used to guide both project execution (expected results) and
project control (monitoring and taking corrective action when problems arise).
Project Scheduling - how the project will be organized as separate tasks and how they will be executed:
the estimated calendar time needed to complete each task, the effort required, and who will work on the
tasks that have been identified.

Principles of Software Project Scheduling:
1. Compartmentalization - the project is decomposed into manageable activities and tasks.
2. Interdependency - relationships between tasks.
3. Time Allocation - each task must be allocated a number of time units, i.e. start and completion
dates.
4. Effort Validation - defined number of staff.
5. Responsibilities - each task should be assigned to a specific member.
6. Outcomes - each task should have a defined result.
7. Milestones - each task should be associated with a milestone.

Tools and Techniques for Project Scheduling:


1. Critical Path Method (CPM) - a mathematical analysis that calculates the early start, early finish,
late start and late finish dates. Focuses on float or slack to find out which activity is the least
flexible. A delay in any activity on the critical path will cause a delay in project completion.

Activity node notation (each CPM activity is drawn as a box):

EarlyStart (ES) | Activity     | EarlyFinish (EF)
Slack (SL)      |              | Slack (SL)
LateStart (LS)  | Duration (D) | LateFinish (LF)
Forward Pass = calculate the ES and EF (EF = ES + D)
Backward Pass = calculate the LF and LS (LS = LF - D)
Compute the SL = LF - EF = LS - ES
*Activities with Zero Slack are part of your Critical Path
**Exceeding the value of your Slack will cause a delay in project completion
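A minimal forward/backward-pass sketch in Python; the four-activity network is hypothetical and is assumed to be listed in topological order:

# activity -> (duration, list of predecessors), in topological order
net = {"A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]), "D": (1, ["B", "C"])}

es, ef = {}, {}
for a, (d, preds) in net.items():        # forward pass: ES, then EF = ES + D
    es[a] = max((ef[p] for p in preds), default=0)
    ef[a] = es[a] + d

finish = max(ef.values())
ls, lf = {}, {}
for a in reversed(list(net)):            # backward pass: LF, then LS = LF - D
    succs = [s for s, (_, ps) in net.items() if a in ps]
    lf[a] = min((ls[s] for s in succs), default=finish)
    ls[a] = lf[a] - net[a][0]

slack = {a: lf[a] - ef[a] for a in net}  # SL = LF - EF = LS - ES
print([a for a in net if slack[a] == 0]) # zero-slack critical path: ['A', 'C', 'D']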

2. Graphical Evaluation and Review Technique (GERT)

3. Program Evaluation and Review Technique (PERT) - a mathematical analysis that uses averages
to calculate activity durations. Also called the three-time-estimates method: optimistic, pessimistic and
most likely times.

te (Expected) = (a + 4m + b) / 6
σ (SD) = (b - a) / 6
σ² (Variance) = square of SD
Variance(CP) = sum of the variances of the critical path activities
STDEV(CP) = √Variance(CP)
Where: a = optimistic time, m = most likely time, b = pessimistic time
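A short Python sketch of the three-time-estimate computation (the activity estimates are hypothetical):

import math

def pert(a, m, b):
    # te = (a + 4m + b) / 6, SD = (b - a) / 6
    return (a + 4 * m + b) / 6, (b - a) / 6

# Hypothetical critical-path activities: (optimistic, most likely, pessimistic)
activities = [(2, 4, 8), (3, 5, 9), (1, 2, 3)]
estimates = [pert(a, m, b) for a, m, b in activities]
project_te = sum(te for te, _ in estimates)
project_var = sum(sd ** 2 for _, sd in estimates)   # Variance(CP) = sum of variances
project_sd = math.sqrt(project_var)                 # STDEV(CP) = sqrt(Variance(CP))
print(round(project_te, 2), round(project_sd, 2))   # 11.67 1.45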

4. Duration Compression - could either be: a. Crashing or b. Fast Tracking
