
FAILURE MODES & EFFECTS ANALYSIS
MEASUREMENT SYSTEMS ANALYSIS
AND VALIDATION
FAILURE MODES AND EFFECTS ANALYSIS (FMEA)
The FMEA process is a structured approach that links FAILURE MODES to their EFFECTS over time for the purpose of prevention. FMEA is organized in three phases: Preparation, FMEA Process, and Improvement.

Preparation phase:
a. Select the team
b. Develop the process map and steps
c. List key process outputs to satisfy internal and external
customer requirements
d. Define the relationships between outputs and process variables
e. Rank inputs according to importance.
FAILURE MODES AND EFFECTS ANALYSIS
FMEA Process phase:
a. Identify the ways in which process inputs can vary (causes)
and identify associated FAILURE MODES. These are ways
that critical customer requirements might not be met.
b. Assign severity, occurrence and detection ratings to each
cause and calculate the RISK PRIORITY NUMBERS (RPNs).
c. Determine recommended actions to reduce RPNs.
d. Estimate time frames for corrective actions.
e. Take actions and put controls in place.
f. Recalculate all RPNs.
FAILURE MODES AND EFFECTS ANALYSIS

Vocabulary:
FAILURE MODE: How a part or process can fail to meet
specifications.
CAUSE: A deficiency that results in a failure mode. Causes are sources of variation.
EFFECT: Impact on customer if the failure mode is not
prevented or corrected.
FMEA Standardized Rating System
RPN = (Degree of Severity) × (Likelihood of Occurrence) × (Ability to Detect), where 1 ≤ RPN ≤ 1000. Each factor is rated on the 1-10 scale defined in the table below; a worked sketch follows the table.
Each rating is defined below in terms of Degree of Severity, Likelihood of Occurrence, and Ability to Detect.

Rating 1
  Severity: Customer will not notice the adverse effect or it is insignificant.
  Occurrence: Likelihood of occurrence is remote.
  Detection: Sure that the potential failure will be found or prevented before reaching the next customer.

Rating 2
  Severity: Customer will probably experience slight annoyance.
  Occurrence: Low failure rate with supporting documentation.
  Detection: Almost certain that the potential failure will be found or prevented before reaching the next customer.

Rating 3
  Severity: Customer will experience annoyance due to slight degradation of performance.
  Occurrence: Low failure rate without supporting documentation.
  Detection: Low likelihood that the potential failure will reach the next customer undetected.

Rating 4
  Severity: Customer dissatisfaction due to reduced performance.
  Occurrence: Occasional failures.
  Detection: Controls may not detect or prevent the potential failure from reaching the next customer.

Rating 5
  Severity: Customer is made uncomfortable or their productivity is reduced by the continued degradation of the effect.
  Occurrence: Relatively moderate failure rate with supporting documentation.
  Detection: Moderate likelihood that the potential failure will reach the next customer.

Rating 6
  Severity: Warranty repair or significant manufacturing or assembly complaint.
  Occurrence: Moderate failure rate without supporting documentation.
  Detection: Controls are unlikely to detect or prevent the potential failure from reaching the next customer.

Rating 7
  Severity: High degree of customer dissatisfaction due to component failure without complete loss of function. Productivity impacted by high scrap or rework levels.
  Occurrence: Relatively high failure rate with supporting documentation.
  Detection: Poor likelihood that the potential failure will be detected or prevented before reaching the next customer.

Rating 8
  Severity: Very high degree of dissatisfaction due to the loss of function without a negative impact on safety or governmental regulations.
  Occurrence: High failure rate without supporting documentation.
  Detection: Very poor likelihood that the potential failure will be detected or prevented before reaching the next customer.

Rating 9
  Severity: Customer endangered due to the adverse effect on safe system performance with warning before failure, or violation of governmental regulations.
  Occurrence: Failure is almost certain based on warranty data or significant DV testing.
  Detection: Current controls probably will not even detect the potential failure.

Rating 10
  Severity: Customer endangered due to the adverse effect on safe system performance without warning before failure, or violation of governmental regulations.
  Occurrence: Assured of failure based on warranty data or significant DV testing.
  Detection: Absolute certainty that the current controls will not detect the potential failure.
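To make the RPN arithmetic concrete, here is a minimal sketch in Python. The failure modes, ratings, and function name are illustrative assumptions, not data from the original material.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number; each rating must be on the 1-10 scale above."""
    for name, rating in (("severity", severity),
                         ("occurrence", occurrence),
                         ("detection", detection)):
        if not 1 <= rating <= 10:
            raise ValueError(f"{name} rating {rating} is outside the 1-10 scale")
    return severity * occurrence * detection  # hence 1 <= RPN <= 1000

# Hypothetical failure modes: (name, severity, occurrence, detection).
failure_modes = [("label misprint", 6, 4, 3),
                 ("seal leak", 9, 2, 5),
                 ("loose fastener", 4, 6, 2)]

# Rank by RPN, highest risk first, to prioritize corrective actions.
for name, s, o, d in sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True):
    print(f"{name:15s} RPN = {rpn(s, o, d):4d}")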
Failure Modes and Effects Analysis (FMEA)

FMEA Worksheet (template)

Header fields:
  Process or Product Name / Prepared by / Page ____ of ____
  Responsible / FMEA Date (Orig) ______________ (Rev) _____________

Columns:
  Process Step / Part Number: What are the process steps?
  Potential Failure Mode: In what ways can the process step go wrong?
  Potential Failure Effects: What is the impact of the Failure Mode on the customer?
  SEV: How severe is the effect on the customer?
  Potential Causes: What are the causes of the Failure Mode?
  OCC: How often does the Cause or Failure Mode occur?
  Current Controls: What are the existing controls and procedures that prevent the Cause or Failure Mode?
  DET: How well can you detect the Cause or Failure Mode?
  RPN: Calculated Risk Priority Number (SEV × OCC × DET).
  Recommended Actions: What are the actions for reducing the occurrence, decreasing severity, or improving detection?
  Resp.: Who is responsible for the recommended action?
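As a lightweight illustration, a worksheet row can be modeled as a record whose RPN column is computed from the three ratings. The class and field names below are assumptions mirroring the columns above, not a standard schema.

from dataclasses import dataclass

@dataclass
class FmeaRow:
    process_step: str
    failure_mode: str
    failure_effects: str
    severity: int            # SEV, 1-10
    causes: str
    occurrence: int          # OCC, 1-10
    current_controls: str
    detection: int           # DET, 1-10
    recommended_actions: str = ""
    responsible: str = ""

    @property
    def rpn(self) -> int:
        # The calculated column: SEV x OCC x DET.
        return self.severity * self.occurrence * self.detection

row = FmeaRow("apply label", "label misprint", "customer receives wrong info",
              severity=6, causes="printer drift", occurrence=4,
              current_controls="visual check", detection=3)
print(row.rpn)  # -> 72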
FAILURE MODES AND EFFECTS ANALYSIS (FMEA)
Improvement phase:

Develop and implement plans to reduce RPNs.


Measurement System Analysis & Validation:
Define Performance Standards: Numbers & Units
Translate customer needs into clearly defined measurable traits.
OPERATIONAL DEFINITION: A precise description that removes any ambiguity about a process and provides a clear way to measure it. An operational definition is a key step towards getting a value for the CTQ (Critical-to-Quality characteristic) being measured. For example, "on-time delivery" might be operationally defined as an order delivered within five business days, measured from the order-entry timestamp to the carrier-confirmed delivery date.
USEFUL TOOL: Outside-In-Thinking

Measurement System Analysis & Validation


OUTSIDE-IN-THINKING
Outside-In-Thinking refers to understanding a process from the customer's perspective, a key element that feeds customer satisfaction.
The idea is to enable customers to feel and experience Six Sigma.
This requires so-called wing-to-wing thinking, which assists in discovering the customer's scope of the process.
In other words, according to the customer, when does the process start and stop? This is the wing-to-wing perspective.
An example of wing-to-wing thinking follows.


Measurement System Analysis & Validation
OUTSIDE-IN-THINKING
A Green Belt was focusing on reducing the cycle time to complete a change request to the email system. She began by focusing on the following scope: change request ticket open to change request ticket closed.

Upon talking to the customer (anyone who submits a change request), the Green Belt realized there is more to this process than just opening and closing the ticket. Before the ticket is opened, the customer fills out a request form and e-mails it to the appropriate mailbox. The customer does not know that the work is complete until s/he receives a call verifying completion. Based on this information, the Green Belt changed her scope to: customer submits a request form to user receives call that work is complete.

Measurement System Analysis & Validation


OUTSIDE-IN-THINKING
HOW DO I DO IT?
a. Identify your customer. Although the concept of Outside-In-Thinking is typically used in conjunction with an external customer, the same theory applies for a project that has an internal customer.
b. Understand the process from the customer's perspective. You can talk to the customer directly or to experts on your team who have direct contact with the customer. Use a wing-to-wing perspective, that is: according to the customer, when does the process start and stop?
c. Your team can evaluate the process start/stop and decide whether this is an appropriate scope for the improvement effort. Be realistic in this decision. Make sure that the scope isn't too big and that you can realistically influence the improvement effort.
Measurement System Analysis & Validation
OUTSIDE-IN-THINKING
Tips: These questions can assist in becoming more customer-centric:

What does the customer need from the process?
How is our process performing from the customer's perspective?
How does the customer measure the process?
How does the customer view the process?
What can we do better?
How would the customer like our process to perform?

Tip: Whether or not you have direct contact with an external customer, your project must be customer-focused. Identify the customer of your process and understand the pain they feel. This will help drive your improvement efforts so that the customer feels the impact of Six Sigma.

Measurement System Analysis & Validation
Measure: Define Performance Standards: Numbers & Units (Cont.)
TARGET PERFORMANCE: Where a process or product characteristic is aimed. If there were no variation in the product/process, this is the value that would always occur.
SPECIFICATION LIMIT: The amount of variation that the customer is willing to tolerate in a process or product, usually expressed as upper and lower boundaries which, if crossed, will cause the customer to reject the process or product.
DEFECT DEFINITION: Any process or product characteristic that deviates outside the specification limits.
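To make these definitions concrete, here is a minimal sketch, assuming hypothetical specification limits and sample values; none of the numbers come from the original material.

# Hypothetical target and specification limits for a measured characteristic.
TARGET = 10.0            # where the characteristic is aimed
LSL, USL = 9.5, 10.5     # lower / upper specification limits

samples = [9.8, 10.6, 10.1, 9.4, 10.0]   # made-up measurements

# Per the defect definition: anything outside the spec limits is a defect.
defects = [x for x in samples if not LSL <= x <= USL]
print(f"{len(defects)} of {len(samples)} samples are defects: {defects}")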
Measure
3. Establish Data Collection Plan, Validate the
Measurement System, and Collect Data.
A Good Data Collection Plan:
a. Provides a clearly documented strategy for collecting
reliable data;
b. Gives all team members a common reference;
c. Helps to ensure that resources are used effectively to collect only critical data. The cost of obtaining new data should be weighed against its benefit; there may be viable historical data available.

Measurement System Analysis & Validation

Measure: 3. Establish Data Collection Plan, Validate the Measurement System, and Collect Data.

We refer to actual process variation and measure actual output:
a. What measurement process is used?
b. Describe that procedure.
c. What is the precision of the system?
d. How was the precision determined?
e. What does the gage supplier state about: Accuracy, Precision, Resolution?
f. Do we have results of either a Test-Retest Study or a Gage R&R Study?
Measure:
3. Establish Data Collection Plan, Validate the Measurement
System, and Collect Data.

Note that our measurement process may also have variation.

a. Gage Variability: the gage may be off in precision, in accuracy, or in both. [Figure: target diagrams illustrating Precision, Accuracy, and Both]

Measurement System Analysis & Validation


Measure: 3. Establish Data Collection Plan, Validate the Measurement System, and Collect Data.

b. Operator Variability: differences between operators related to measurement.
c. Other Variability: many possible sources.

Repeatability: Assesses effects within ONE unit of your measurement system, e.g., the variation in the measurements of ONE device.
Reproducibility: Assesses effects across the measurement process, e.g., the variation between different operators.
Resolution: The smallest increment that the measurement device can distinguish.
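To illustrate resolution, here is a minimal sketch assuming a hypothetical device that reads in fixed 0.05-unit increments; the function and values are illustrative only.

# A device with 0.05-unit resolution rounds every reading onto that grid,
# so true values closer together than the increment become indistinguishable.
def device_reading(true_value: float, resolution: float = 0.05) -> float:
    # The outer round() only trims floating-point noise for display.
    return round(round(true_value / resolution) * resolution, 10)

print(device_reading(10.013))  # -> 10.0
print(device_reading(10.021))  # -> 10.0 (indistinguishable from 10.013)
print(device_reading(10.037))  # -> 10.05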

Measurement System Analysis & Validation


Measure:
3. Establish Data Collection Plan, Validate the
Measurement System, & Collect Data.

GAGE R&R (Repeatability & Reproducibility) STUDY:

a. Operators: at least 3 are recommended.
b. Part: the product or process being measured. At least 10 representative parts per study, reflecting the range of parts possible, are recommended, with each operator measuring the same parts.
c. Trial: each time the item is measured. There should be at least 3 trials per part, per operator.
Measurement System Analysis & Validation
Measure: 3. Establish Data Collection Plan, Validate the
Measurement System, & Collect Data.
GAGE R&R (Repeatability & Reproducibility) STUDY:

Source of Variation                              % Contribution
Total Gage Repeatability & Reproducibility       R1 + R2
    Repeatability                                R1
    Reproducibility                              R2
Part-to-Part                                     100% - (R1 + R2)
Total Variation                                  100%

(R1 and R2 denote the percent contributions of repeatability and reproducibility, respectively.)
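The sketch below shows how such a % contribution breakdown can be computed from study data. It uses simulated measurements and a simplified method-of-moments decomposition (numpy is assumed, and the operator-by-part interaction a full ANOVA Gage R&R would estimate is ignored), so treat it as an illustration rather than a compliant implementation.

import numpy as np

rng = np.random.default_rng(seed=1)
# Simulated study at the recommended minimums: 3 operators x 10 parts x 3 trials.
part_values = rng.normal(10.0, 1.0, size=10)      # true part-to-part spread
operator_bias = rng.normal(0.0, 0.15, size=3)     # reproducibility source
data = (part_values[None, :, None]
        + operator_bias[:, None, None]
        + rng.normal(0.0, 0.2, size=(3, 10, 3)))  # repeatability (gage noise)

cell_means = data.mean(axis=2)                    # operator x part averages

# Repeatability (R1): scatter of repeated trials about their cell mean.
repeatability = ((data - cell_means[..., None]) ** 2).mean()
# Reproducibility (R2): variation between operator averages.
reproducibility = cell_means.mean(axis=1).var(ddof=1)
# Part-to-part: variation between part averages.
part_to_part = cell_means.mean(axis=0).var(ddof=1)

total = repeatability + reproducibility + part_to_part
for label, v in [("Repeatability (R1)", repeatability),
                 ("Reproducibility (R2)", reproducibility),
                 ("Total Gage R&R (R1 + R2)", repeatability + reproducibility),
                 ("Part-to-Part", part_to_part),
                 ("Total Variation", total)]:
    print(f"{label:26s} {100 * v / total:6.1f} % contribution")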

Measurement System Analysis & Validation


FAILURE MODES & EFFECTS ANALYSIS
MEASUREMENT SYSTEMS ANALYSIS
AND VALIDATION
End of Session
