
Data-Driven Management

Niko Simamora
Attributes of Good Metrics
Rose (1995) lists the following attributes of good metrics:

 customer centered and focused on indicators that provide value to customers, such as
product quality, service dependability, and timeliness of delivery, or associated with
internal work processes that address system cost reduction, waste reduction,
coordination and teamwork, innovation, and customer satisfaction.
 measure performance across time, which shows trends rather than snapshots.
 provide direct information at the level at which they are applied. No further
processing or analysis is required to determine meaning.
 linked with the organization’s mission, strategies, and actions. They contribute to
organizational direction and control.
 collaboratively developed by teams of people who provide, collect, process, and
use the data.
Performance Measurement Model
(Rose, 1995)
 Step 1: Performance category—This category is
the fundamental division of organizational
performance that answers the question: What do
we do? Sources for determining performance
categories include an organization’s strategic
vision, core competencies, or mission statement.
An organization will probably identify several
performance categories. These categories define
the organization at the level at which it is being
measured.
Performance Measurement Model
(Rose, 1995)
 Step 2: Performance goal—The goal statement is
an operational definition of the desired state of
the performance category. It provides the target
for the performance category and, therefore,
should be expressed in explicit, action oriented
terms. An initial goal statement might be right on
the mark, so complex that it needs further
division of the performance category, or so
narrowly drawn that it needs some combination
of performance categories.
Performance Measurement Model
(Rose, 1995)
 Step 3: Performance indicator—This is the most
important step in the model because this is where
progress toward the performance goal is disclosed.
Here irrelevant measures are swept aside if they do not
respond to an organizational goal. This is where the
critical measures—those that communicate what is
important and set the course toward organizational
success—are established. Each goal will have one or
more indicators, and each indicator must include an
operational definition that prescribes the indicator’s
intent and makes its role in achieving the performance
goal clear.
Performance Measurement Model
(Rose, 1995)
 Step 4: Elements of measure—These
elements are the basic components that
determine how well the organization meets
the performance indicator. They are the
measurement data sources—what is actually
measured—and are controlled by the
organization.
Performance Measurement Model
(Rose, 1995)
 Step 5: Parameters—These are the external
considerations that influence the elements of
measure in some way, such as context,
constraint, and boundary. They are not
controlled by the organization but are
powerful factors in determining how the
elements of measure will be used.
Performance Measurement Model
(Rose, 1995)
 Step 6: Means of measurement—This step
makes sense out of the preceding pieces. A
general, how-to action statement is written
that describes how the elements of measure
and their associated parameters will be
applied to determine the achievement level in
the performance indicator. This statement can
be brief, but clarifying intent is more
important than the length.
Performance Measurement Model
(Rose, 1995)
 Step 7: Notional metrics—In this step,
conceptual descriptions of possible metrics
resulting from the previous steps are put in
writing. This step allows everyone to agree on
the concept of how the information compiled
in the previous steps will be applied to
measuring organizational performance. It
provides a basis for validating the process and
for subsequently developing specific metrics.
Performance Measurement Model
(Rose, 1995)
 Step 8: Specific metrics—In this final step, an
operational definition and a functional description of
the metrics to be applied are written. The definition
and description describe the data, how they are
collected, how they are used, and, most importantly,
what the data mean or how they affect organizational
performance. A prototype display of real or imaginary
data, together with a descriptive scenario showing what
actions might be taken as a result of the measurement,
is also prepared. Together, these must identify what needs to be
done and disclose conditions in sufficient detail to
enable subsequent improvement actions.
Application of Performance
Measurement Model (Rose, 1995)
The Balanced Scorecard
Balanced scorecards help the organization
maintain perspective by providing a concise
display of performance metrics in four areas
that correspond roughly to the major
stakeholders—customer, financial, internal
processes, and learning and growth (Kaplan and
Norton, 1992).
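
As a minimal illustration of this structure, the Python sketch below groups example metrics under the four perspectives. The metric names and targets are assumptions chosen for illustration, not taken from Kaplan and Norton (1992).

# A balanced scorecard represented as a simple data structure: four
# perspectives, each holding a few example metrics with illustrative targets.
scorecard = {
    "customer": {"on-time delivery %": 95, "complaints per 1,000 orders": 2},
    "financial": {"operating margin %": 12, "revenue growth %": 8},
    "internal processes": {"first-pass yield %": 98, "order cycle time (days)": 5},
    "learning and growth": {"training hours per employee": 40, "new products launched": 3},
}

for perspective, metrics in scorecard.items():
    print(perspective.upper())
    for name, target in metrics.items():
        print(f"  {name}: target {target}")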
The Balanced Scorecard
Measuring Causes and Effects
Y = f(X)
Y = f(X1, X2)
The Balanced Scorecard
 Measuring Causes and Effects (Example)
Corn Crisp Recipe (12 servings)
• 3/4 cup yellow stone-ground cornmeal
• 1 cup boiling water
• 1/2 teaspoon salt
• 3 tablespoons melted butter
Process:
Preheat the oven to 400°F. Stir the cornmeal and boiling water together in a large glass measuring cup.
Add the salt and melted butter. Mix well and pour onto a cookie sheet. Using a spatula, spread the
batter out as thin as you possibly can—the thinner the crisper. Bake the cornmeal for half an hour or
until crisp and golden brown. Break into 12 roughly equal pieces.
Here the Big Y is the customer’s overall satisfaction with the finished corn crisp. Little Ys would include
flavor ratings, “crunchiness” rating, smell, freshness, and other customer-derived metrics that drive the
Big Y. Xs that drive the little Ys might include thinness of the chips, the evenness of the salt, the size of
each chip, the color of the chip, and other measurements on the finished product. Xs could also be
determined at each major step, for example, actual measurement of the ingredients, the oven
temperature, the thoroughness of stirring, how much the water cools before it is stirred with the
cornmeal, actual bake time, etc. Xs would also include the oven used, the cookware, utensils, etc.
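
A minimal Python sketch of this Y = f(X) decomposition follows. The functions, weights, and cutoffs are illustrative assumptions, not measured relationships for the recipe.

def crunchiness(thickness_mm, bake_minutes):
    # Little Y: crunchiness rating (0-10) driven by process Xs.
    # Assumed relationship: thinner chips and longer bakes score higher.
    score = 10 - 2.0 * thickness_mm + 0.1 * (bake_minutes - 30)
    return max(0.0, min(10.0, score))

def flavor(salt_tsp, butter_tbsp):
    # Little Y: flavor rating (0-10) driven by ingredient Xs.
    # Assumed relationship: penalize deviation from the recipe amounts.
    score = 10 - 8 * abs(salt_tsp - 0.5) - 2 * abs(butter_tbsp - 3)
    return max(0.0, min(10.0, score))

def overall_satisfaction(crunch, flav):
    # Big Y: overall satisfaction as an assumed weighted blend of little Ys.
    return 0.4 * crunch + 0.6 * flav

crunch = crunchiness(thickness_mm=1.5, bake_minutes=30)
flav = flavor(salt_tsp=0.5, butter_tbsp=3)
print(f"Big Y (overall satisfaction): {overall_satisfaction(crunch, flav):.1f}")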
The Balanced Scorecard
 Flowdown of strategies to drivers and
projects
Customer Perspective
 Quality—How well do you keep your promises by
delivering error-free service or defect-free products? Did
I receive what I ordered? Was it undamaged? Are your
promised delivery times accurate? Do you honor your
warranty or pay your claims without a hassle?
 Timeliness—How fast is your service? How long does it
take to have my order delivered? Do improvements
appear in a timely manner?
 Performance and service—How do your products and
services help me? Are they dependable?
 Value—What is the cost of buying and owning your
product or service? Is it worth it?
Customer Perspective
Internal Process Perspective
Customer Value Proposition versus Core
Competency
Innovation and Learning Perspective
 Can we continue to improve and create value?
Success is a moving target.
 Building shareholder value is especially dependent on the
company’s ability to innovate, improve, and learn. The
intrinsic value of a business is the discounted value of the
cash that can be taken out of the business during its
remaining life (Buffett, 1996).
 Intrinsic value is directly related to a company’s ability to
create new products and processes, to improve operating
efficiency, to discover and develop new markets, and to
increase revenues and margins.
 Innovation and learning were the areas addressed by
continuous improvement (CI).
Financial Perspective
Financial results are determined by a
combination of customer satisfaction and the
way the organization runs its internal
operations; if we focus on these factors,
financial performance will follow in due
course.
Cost of Poor Quality
Quality costs—general description [from Campanella (1999)]:
 PREVENTION COSTS
The costs of all activities specifically designed to prevent poor quality in products or services. Examples are the costs of
new product review, quality planning, supplier capability surveys, process capability evaluations, quality improvement
team meetings, quality improvement projects, quality education and training.
 APPRAISAL COSTS
The costs associated with measuring, evaluating or auditing products or services to ensure conformance to quality
standards and performance requirements. These include the costs of incoming and source inspection/test of purchased
material, in process and final inspection/test, product, process, or service audits, calibration of measuring and test
equipment, and the costs of associated supplies and materials.
 FAILURE COSTS
The costs resulting from products or services not conforming to requirements or customer/user needs. Failure costs are
divided into internal and external failure cost categories.
 INTERNAL FAILURE COSTS
Failure costs occurring prior to delivery or shipment of the product, or the furnishing of a service, to the customer.
Examples are the costs of scrap, rework, reinspection, retesting, material review, and downgrading.
 EXTERNAL FAILURE COSTS
Failure costs occurring after delivery or shipment of the product, and during or after furnishing of a service, to the
customer. Examples are the costs of processing customer complaints, customer returns, warranty claims, and product
recalls.
 TOTAL QUALITY COSTS
The sum of the above costs. It represents the difference between the actual cost of a product or service and what the
reduced cost would be if there were no possibility of substandard service, failure of products, or defects in their
manufacture. (A minimal roll-up sketch follows this list.)
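
To show how total quality cost rolls up from these categories, here is a small Python sketch with made-up cost figures; none of the numbers come from Campanella (1999).

# Hypothetical quality costs by category; every figure is illustrative.
quality_costs = {
    "prevention": {"quality planning": 12_000, "training": 8_000},
    "appraisal": {"incoming inspection": 15_000, "calibration": 4_000},
    "internal failure": {"scrap": 22_000, "rework": 18_000},
    "external failure": {"warranty claims": 30_000, "returns": 9_000},
}

category_totals = {cat: sum(items.values()) for cat, items in quality_costs.items()}
total_quality_cost = sum(category_totals.values())

for cat, total in category_totals.items():
    print(f"{cat:>17}: ${total:>7,} ({total / total_quality_cost:.0%} of total)")
print(f"{'TOTAL':>17}: ${total_quality_cost:>7,}")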
Strategy Deployment Plan
The organization’s differentiators are:
1. Cost per unit
2. Revenues from new sources
3. (Customer) service relationship
4. Product introductions, (new product)
revenues
5. Research deployment time
Dashboard Design
Dashboard metrics should embody the principles of good
metrics discussed earlier:
1. Display performance over time.
2. Include statistical guidelines to help separate signal
(variation from an identifiable cause) from noise
(variation similar to random fluctuations); a minimal
sketch follows this list.
3. Show causes of variation when known.
4. Identify acceptable and unacceptable performance
(defects).
5. Be linked to higher-level dashboards (goals and
strategies) or lower-level dashboards (drivers) to guide
strategic activity within the organization.
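
Principle 2 can be implemented with simple control limits. The sketch below uses an individuals (X-mR) chart on hypothetical weekly defect counts; the data are made up, and the constant 2.66 is the standard 3-sigma factor for moving ranges of two points (3 divided by d2 = 1.128).

from statistics import mean

# Hypothetical weekly defect counts for one dashboard metric.
weekly_defects = [12, 11, 13, 12, 10, 13, 11, 12, 28, 11, 12, 10]

center = mean(weekly_defects)
# Average moving range between consecutive points estimates short-term noise.
moving_ranges = [abs(b - a) for a, b in zip(weekly_defects, weekly_defects[1:])]
mr_bar = mean(moving_ranges)
ucl = center + 2.66 * mr_bar            # upper control limit
lcl = max(0.0, center - 2.66 * mr_bar)  # lower control limit, floored at zero

for week, value in enumerate(weekly_defects, start=1):
    flag = "SIGNAL" if value > ucl or value < lcl else ""
    print(f"week {week:2d}: {value:3d} {flag}")
print(f"center = {center:.1f}, LCL = {lcl:.1f}, UCL = {ucl:.1f}")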
Dashboard Design
Information Systems Requirements
Data warehousing
Online analytic processing (OLAP)
Data mining
Benchmarking
 Benchmarking can be defined as measuring your
performance against that of best-in-class
companies, determining how the best-in-class
achieve those performance levels, and using
the information as the basis for your own
company’s targets, strategies, and
implementation.
Benchmarking Process
1. Planning
• 1.1. Identify what is to be benchmarked
• 1.2. Identify comparative companies
• 1.3. Determine data collection method and collect data
2. Analysis
• 2.1. Determine current performance “gap”
• 2.2. Project future performance levels (see the sketch after this list)
3. Integration
• 3.1. Communicate benchmark findings and gain acceptance
• 3.2. Establish functional goals
4. Action
• 4.1. Develop action plans
• 4.2. Implement specific actions and monitor progress
• 4.3. Recalibrate benchmarks
5. Maturity
• 5.1. Leadership position attained
• 5.2. Practices fully integrated into process
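
Steps 2.1 and 2.2 can be quantified very simply. The Python sketch below compares a hypothetical cycle-time metric with a best-in-class benchmark and projects future levels under an assumed constant annual improvement rate; all numbers are illustrative.

# Hypothetical current performance, benchmark, and improvement assumption.
our_cycle_time_days = 12.0
benchmark_cycle_time_days = 7.0
annual_improvement_rate = 0.15  # assumed 15% reduction per year

gap = our_cycle_time_days - benchmark_cycle_time_days
print(f"Current gap: {gap:.1f} days ({gap / our_cycle_time_days:.0%} of current cycle time)")

projected = our_cycle_time_days
for year in range(1, 6):
    projected *= 1 - annual_improvement_rate
    status = "at or below benchmark" if projected <= benchmark_cycle_time_days else "above benchmark"
    print(f"Year {year}: projected {projected:.1f} days ({status})")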
Why Benchmarking Efforts Fail
• Lack of sponsorship—A team should submit to management a one- to four-page
benchmarking project proposal that describes the project, its objectives, and potential
costs. If the team can’t gain approval for the project or get a sponsor, it makes little
sense to proceed.
• Wrong people on team—Who are the right people for a benchmarking team?
Individuals involved in benchmarking should be the same ones who own or work in
the process. It’s useless for a team to address problems in business areas that are
unfamiliar or where the team has no control or influence.
• Teams don’t understand their work completely—If the benchmarking team didn’t
map, flowchart, or document its work process, and if it didn’t benchmark with
organizations that also documented their processes, there can’t be an effective
transfer of techniques. The intent in every benchmarking project is for a team to
understand how its process works and compare it to another company’s process at a
detailed level. The exchange of process steps is essential for improved performance.
• Teams take on too much—The task a team undertakes is often so broad that it
becomes unmanageable. This broad area must be broken into smaller, more
manageable projects that can be approached logically. A suggested approach is to
create a functional flowchart of an entire area, such as production or marketing, and
identify its processes. Criteria can then be used to select a process to be benchmarked
that would best contribute to the organization’s objectives.
Why Benchmarking Efforts Fail
• Lack of long-term management commitment—Since managers aren’t as familiar
with specific work issues as their employees, they tend to underestimate the time,
cost, and effort required to successfully complete a benchmarking project. Managers
should be informed that while it’s impossible to know the exact time it will take for a
typical benchmarking project, there is a rule of thumb that a team of four or five
individuals requires a third of their time for 5 months to complete a project.
• Focus on metrics rather than processes—Some firms focus their benchmarking
efforts on performance targets (metrics) rather than processes. Knowing that a
competitor has a higher return on assets doesn’t mean that its performance alone
should become the new target (unless an understanding exists about how the
competitor differs in the use of its assets and an evaluation of its process reveals
that it can be emulated or surpassed).
• Not positioning benchmarking within a larger strategy—Benchmarking is one of
many Six Sigma tools—such as problem solving, process improvement, and
process reengineering—used to shorten cycle time, reduce costs, and minimize
variation. Benchmarking is compatible with and complementary to these tools,
and they should be used together for maximum value.
Why Benchmarking Efforts Fail
• Misunderstanding the organization’s mission, goals, and
objectives—All benchmarking activity should be launched by
management as part of an overall strategy to fulfill the organization’s
mission and vision by first attaining the short-term objectives and then
the long-term goals.
• Assuming every project requires a site visit—Sufficient information is often
available from the public domain, making a site visit unnecessary. This speeds
the benchmarking process and lowers the cost considerably.
• Failure to monitor progress—Once benchmarking has been completed for a
specific area or process, benchmarks have been established, and process changes
have been implemented, managers should review progress in implementation and
results.