First Edition
Contents
DEFINE
  Step D1: Map Project
  Step D2: Approve Project
MEASURE
  Step M1: CTQ Characteristics & Standards
  Step M2: Measurement System Analysis
  Step M3: Data Collection Plan
ANALYZE
  Step A1: Baseline Process
  Step A2: Performance Objective
  Step A3: Identify Drivers of Variation
IMPROVE
  Step I1: Screen for Vital Xs
  Step I2: Study Interaction between Xs
  Step I3: Define Improved Process
CONTROL
  Step C1: MSA on Xs
  Step C2: Improved Process Capability
  Step C3: Establish Control Plan
Page 4 - Page 36 - Page 45 - Page 82 - Page 116
Define Phase
DEFINE - MEASURE - ANALYZE - IMPROVE - CONTROL

Step D.1: Map project
  Deliverables: Project Charter, COPIS
  Tools: Survey, Focus Group, Interview, Charter, COPIS

Step D.2: Approve project
  Deliverables: Project feasibility
  Tools: CBA, FMEA
Topics
About Projects Customer, CTQs and VOC COPIS Project Charter
A customer receives the output of a process. Internal customer: e.g. a marketing person in a manufacturing company is an internal customer.
External Customer
Internal Customer
Customer to CTQ
Identify Customer
What is a CTQ? A CTQ (Critical to Quality), also known as a KPI (Key Process Indicator), is a metric that measures some aspect of a product or process which is critical to the customer. The customer defines acceptable levels for CTQs using specification limits.
Focus Groups
Advantages:
- Help gather qualitative and in-depth information
- Excellent for understanding and getting CTQ definitions
- Not prone to understanding and interpretation gaps

Disadvantages:
- Bias due to limited participants
- Limited data due to qualitative focus
- Can include a lot of anecdotal information
Customer Complaints
Advantages:
- Specific feedback
- Provides an opportunity to respond appropriately to a dissatisfied customer

Disadvantages:
- Sample size issues
- Situation specific, might be anecdotal
- Prone to small sample bias
Survey Development
Objectives
- Why is the survey needed?
- Who will use the survey results?
- What specific information is required with the help of this survey?
- Who needs to be surveyed?

Logistics
- How much time is available to administer the survey?
- Who will conduct the survey?
- What is the medium to conduct the survey?
- What financial resources are available to conduct the survey?

Design
- Draft questions and validate against objectives
- Devise the measurement scale
- Design the report-out structure

Administration
- Determine sample size and sampling strategy
- Pilot the survey on a selected group (to validate the methodology)
Designing survey questions: Survey results will be meaningless unless you ask the right questions. Each question must contain enough specifics for the respondent to give a meaningful answer.

Types of questions:
- Open questions: allow the customer to respond in his or her own words.
- Closed questions: offer the customer a choice of specific responses from which to select. Multiple choice, rating scale and yes/no questions are examples of closed questions.

Bias from design of questions: The questions you ask your customers must be properly worded in order to achieve good end results. Avoid the following:
- Leading questions - they inject interviewer bias.
- Compound questions - they may generate a partial or no response.
- Judging questions - they can lead to guarded or partial responses.
- Ambiguous or vague questions - they produce meaningless responses.
- Acronyms and jargon - these may be unknown to the respondent.
- Double negatives - they may create misunderstanding.
- Long surveys - they discourage respondent participation.

Other sources of bias in a survey:
- Question sequence
- Sample bias
- Non-response bias
- Unclear questions
- Interviewer / facilitator induced bias
Topics
About Projects Customer, CTQs and VoC COPIS Project Charter
COPIS
- Puts the process and the customer in perspective
- Depicts process flow
- Highlights process boundaries and interdependencies

COPIS maps customer and process interaction by defining customer requirements and the steps taken to deliver the desired output.
COPIS - Components
C Customer O Output P Process I Input S Supplier
- Customer: recipient of the process output.
- Output: anything produced for the customer (internal or external); the outcome of the process.
- Process: the group of activities required to transform inputs into the customer-desired output.
- Input: material or knowledge required to produce the desired output.
- Supplier: the source that supplies the input.
- Metric: a measure of compliance with customer expectations or an established standard.
COPIS - Components
Step 1: Identify Customers, Outputs, Inputs, Suppliers and the Process Name
C Customer
O Output
P Process
I Input
S Supplier
Topics
About Projects Customer, CTQs and VoC COPIS Project Charter
Project Charter
A project charter is a written roadmap that defines the key questions or issues to be addressed by the project, its purpose and its intended outcomes. The critical elements of a project charter are:
- Business Case
- Problem and Goal Statement
- Project Scope and Boundaries
- Project Team
- Project Timelines
- Communication Plan
Problem and Goal Statement: Project problem and improvement goal in distinct and measurable terms
Project Team: Who will be involved in the project and what role will he or she play?
The business case also covers:
- A brief introduction to the process/business where the project is undertaken
- How the project helps in achieving business goals
Questions that a problem statement must answer What is the problem? Where is the problem? What is the magnitude of the problem? Over what period has the problem been recorded? What is the impact of the problem?
The goal statement usually starts with a verb, e.g. reduce, increase, eliminate, etc.
Goal Statement: Reduce the percentage of delayed monthly credit card account statements from 26% to 2% by 26th Nov 2004. A well-defined goal statement is specific, measurable and time-bound.
While defining the scope of a project, you must identify:
- The processes under study
- The resources available to the project team
- Constraints, if any, on the project team
You must also define boundaries (start and stop points) for each process under study to avoid ambiguity
- Translate customer expectations to process CTQs: which process CTQs best represent the customer's wants?
- Map the process: detail the process(es) under study to identify areas of specific focus
- Identify resources: what resources are available to the project team?
- Understand constraints: what other CTQs or parameters can be influenced by the process but need to remain as-is?
Project Plan: an activity-by-activity plan to help meet project timelines. It:
- Identifies the order of execution for critical tasks
- Identifies interdependencies, if any, between tasks
- Gives an estimate of time requirements
- Helps identify potential roadblocks and plan contingency measures
Communication Plan
Communication plan:
- To prevent sudden surprises: keep all stakeholders informed about project status
- To ensure timely communication of vital information to those who need it
- To help efficient management of information logistics

A good communication plan addresses the following:
- Who communicates? (project owner)
- What is communicated? (project status, schedule, plan, challenges, successes, failures)
- When and how frequently? (tollgates, weekly, monthly)
- Why is it communicated? (importance)
- Who receives it? (project manager, process owner, champion/sponsor, process team, cross-functional teams, etc.)

Other things to be covered in the plan include the choice of media, where the information resides, the information source, and comments regarding details of the information.
Team Roles
Roles which should be defined in the Project Charter:
- Project Manager (GB/BB)
- Mentor (BB or MBB)
- Project Champion
- Project Sponsor
- Project team members
While defining roles: The project champion and the sponsor might be the same person Team members are usually from the process or related to the product for which the project is undertaken. Involvement of support staff is also common. It is not unusual to have team members specific to a particular activity or phase
Green Belt (GB): an employee of an organization who has been trained in Six Sigma and executes projects as part of his/her full-time job. Green Belts are usually supervised or mentored by a Black Belt.

Black Belt (BB): an employee of an organization who has been trained in Six Sigma and executes/mentors projects. Black Belts are Six Sigma specialists who mentor Green Belts; they are usually mentored by Master Black Belts.

Master Black Belt (MBB): a Six Sigma quality expert responsible for the strategic deployment of Six Sigma resources within an organization. An MBB's main responsibilities include training and mentoring BBs and GBs, and the selection, execution and support of Six Sigma projects. Black Belts usually report to a Master Black Belt.

Champion: a member of the leadership team who reviews the progress of the projects that he/she champions. The champion is also responsible for supporting the project and helping it overcome any challenges.

Sponsor: usually the owner of the process on which the project is being done; he/she provides the resources necessary for executing the project.
Potential Pitfalls
Common mistakes made while selecting a project team:
- Time commitment from team members is not clearly defined
- Incorrect mix: lack of influence (hierarchy), knowledge (subject matter expertise) or competence (statistical or technical skills)
Topics covered
About Projects
Characteristics of a good project Sources for project ideas
Define Phase
DEFINE - MEASURE - ANALYZE - IMPROVE - CONTROL

Step D.1: Map project
  Deliverables: Project Charter, COPIS
  Tools: Survey, Focus Group, Interview, Charter, COPIS

Step D.2: Approve project
  Deliverables: Project feasibility
  Tools: CBA, FMEA
Topics
Project Risks Cost Benefit Analysis Project Go/No Go decision
Project Risks
Brainstorm to identify potential risks associated with each phase of the project - risks that restrict its execution or could cause it to fail
Evaluate risks associated with external factors influencing the process or product under study
Evaluate project feasibility under light of associated risks and contingency plans
Topics
Project Risks Cost Benefit Analysis Project Go/No Go decision
A Cost Benefit Analysis (CBA) is a calculation done to evaluate the costs that a project/change may incur and whether it will generate enough benefits to justify those costs.
Topics
Project Risks Cost Benefit Analysis Project Go/No Go decision
The champion's commitment is to mitigate risks and drive contingency measures, should any of them require his or her influence.
Topics Covered
Project Risks
Identify and plan for risks that may delay the project
Measure Phase
DEFINE - MEASURE - ANALYZE - IMPROVE - CONTROL

Step M.1: CTQ characteristics and standards
  Deliverables: Identify project Y metric; establish performance standards
  Tools: Process Map, QFD, FMEA, Pareto

Step M.2: Measurement system analysis
  Deliverables: Identify gage error
  Tools: Continuous Gage R&R, Short Form Gage, Test/Retest Study, Attribute Gage R&R
Topics
Selecting CTQ Characteristics Process mapping CTQ Drill down tree QFD FMEA
Performance Standards
Process mapping
Process mapping is a graphical representation of activities, steps, information, resources and their interactions.

Process mapping is a first step in understanding how and why a process behaves the way it does.
Understanding a process
A process is a series of logical steps that transform inputs (raw materials) into customer-defined outputs.

Supplier -> Input -> Process -> Output -> Customer

A process may be largely affected by one or more of the following factors:
- Personnel who operate the process
- Materials used as inputs (including information)
- Machines or equipment used in the process (in process execution or in monitoring/measurement)
- Methods (including criteria and the various documentation used along the process)
- Work environment
Components of a process
- Process Boundary: defines the process limits. It helps in understanding the scope of the process and its constraints.
- Process Controls: help ensure the process is consistent in behavior.
- Metrics and Measurements: the means to measure conformance with the requirements placed by customers on outputs and by processes on inputs.
Study interaction and interdependencies between different departments (highlight hand off points)
Arrange the process steps in the order that they are followed
Identify the inputs, the suppliers, and the process's key requirements
Process step
Decision box
Input to a process
Document
Multidocument
Alternate path process map: used for complex and/or large processes. Alternate paths replace decision boxes in the map; depicting the percentage split of each alternate path (e.g. 80% / 20%) makes this map more informative.
Cross functional process map: used when the process has many handoffs between different departments. It is also known as deployment map.
(e.g. swimlanes for Human Resources, Operations and Finance)
Topics
Selecting CTQ Characteristics Process mapping CTQ Drill down tree QFD FMEA
Performance Standards
[CTQ tree example branches: Timely Process, Knowledgeable, Accurate Billing]
A CTQ drill-down tree assists in choosing a project metric. It shows the linkage between the project metric and the company goals. Once established, it can be used in projects to help finalize relevant project metrics. A Y is a dependent output variable; the higher a Y sits in the hierarchy of Ys, the bigger it is considered to be. An X is an independent variable that affects the Y.
Topics
Selecting CTQ Characteristics Process mapping CTQ Drill down tree QFD FMEA
Performance Standards
QFD -Objectives
Understand Quality Function Deployment (QFD) as a tool.
Definition of QFD
QFD is a methodology and tool to identify and translate customer needs and wants into measurable features and requirements - converting the "whats" into "hows".
QFD links the needs of the customer (end user) with design, development, manufacturing, and service functions.
QFD Matrix
[QFD matrix example: customer needs scored against CTQs such as time to prepare, temperature (cold), ingredient cost, ingredient mix and quantity, with cell values of 1, 3 and 9, a roof showing correlation between measures, and weighted column totals at the bottom]
Elements of the QFD matrix:
- Rows represent the customer needs (VOC).
- Columns represent the CTQs that measure the customer needs.
- The matrix cells are filled with the values 1, 3 and 9 depending on the correlation between a customer need and a CTQ: 1 represents low correlation, 3 medium correlation, and 9 high correlation. A cell is left blank where no correlation exists.
- The column next to the needs contains the relative importance (priority) given to each need by the customer.
- The roof of the house depicts correlation between the measures themselves.

When the matrix is complete, to prioritize the CTQs or requirements: multiply the strength of each relationship (1 for weak, 3 for moderate and 9 for strong) by the priority number (1 to 5) of the corresponding customer expectation; add the results and enter the sum for each requirement at the bottom of the matrix. This is the output of the QFD exercise.

It is not necessary to complete every cell of the QFD matrix. Typically, a QFD matrix would have 30-50% of the cells filled.

Caution: An empty row indicates that the need is not captured by any measure. An empty column indicates that the measure does not fulfill any need and is redundant.
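The prioritization arithmetic above can be sketched in a few lines of Python. The needs, priorities, CTQs and cell values below are illustrative, not taken from the matrix in the text.

```python
# Sketch of QFD column scoring (illustrative data, not from the text).

needs = {             # customer need -> priority (1-5)
    "Tastes good": 5,
    "Served fast": 4,
    "Low price": 3,
}

ctqs = ["Time to prepare", "Ingredient cost", "Ingredient mix"]

# relationship strength per (need, CTQ) cell: 1 weak, 3 moderate, 9 strong;
# missing cells are blank (no correlation)
matrix = {
    ("Tastes good", "Ingredient mix"): 9,
    ("Served fast", "Time to prepare"): 9,
    ("Low price", "Ingredient cost"): 9,
    ("Low price", "Ingredient mix"): 3,
}

def qfd_scores(needs, ctqs, matrix):
    """Multiply each cell by the need's priority and sum down each column."""
    return {c: sum(p * matrix.get((n, c), 0) for n, p in needs.items())
            for c in ctqs}

scores = qfd_scores(needs, ctqs, matrix)
# The CTQ with the highest weighted total is prioritized first
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

Here "Ingredient mix" wins (5*9 + 3*3 = 54), which is exactly the column-total step described above.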
QFD can be used iteratively to translate customer needs into process CTQs. For instance, customer needs can be translated to functional requirements using a QFD. A second-level QFD drill-down can translate these requirements to product characteristics. A third-level QFD can then translate the characteristics to process output measures.
Topics
Selecting CTQ Characteristics Process mapping CTQ Drill down tree QFD FMEA
Performance Standards
FMEA - Objectives
Understanding what an FMEA does
Types of FMEAs
When to use
Types of FMEAs
System FMEA: Used to analyze potential failure modes in systems during the design or concept stage.
Design FMEA: Used to analyze potential failure modes in products before they are released to production.
Process FMEA: Used to analyze assembly line, manufacturing, transactional or other such processes.
Evaluate potential risk when existing systems, processes or products are changed
Proactive tool to evaluate and contain risk associated with any process or change in the process.
FMEA can be used at different phases for different purposes:
- Measure: CTQ identification
- Improve: risk analysis on the solution
- Control: to develop the process control plan
Preparing a FMEA
1. Identify and map the process
2. Identify the inputs and outputs of the process
3. Understand the effect of the inputs (process variables) on the output of the process
4. Rank the inputs according to importance
5. List the ways in which inputs can vary, and the associated failure modes and effects on outputs
6. Assign severity, occurrence and detection ratings to each item and calculate the risk priority number (RPN)
7. Prioritize causes with high RPNs and develop a control plan to minimize risk

An FMEA:
- Is a team exercise
- Is a living document and should be updated regularly to evaluate risk and maintain control plans
Identifying causes
Causes can be identified using:
- Brainstorming
- Cause and effect diagram
Ask the question: what can cause the process to fail, or what can cause a failure to impact the customer?
Occurrence: How frequently can the cause leading to the failure mode occur?

Risk Priority Number: the RPN signifies the amount of risk associated with a failure mode.
RPN = Severity * Occurrence * Detection
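The RPN arithmetic and the prioritization step can be sketched as follows. The failure modes and ratings are hypothetical examples, not from the text.

```python
# Minimal RPN calculation for an FMEA worksheet (hypothetical data).

failure_modes = [
    # (description, severity, occurrence, detection), each rated 1-10
    ("Wrong address on statement", 7, 4, 3),
    ("Statement mailed late",      5, 6, 2),
    ("Duplicate statement sent",   3, 2, 8),
]

def rpn(severity, occurrence, detection):
    """RPN = Severity * Occurrence * Detection."""
    return severity * occurrence * detection

# Highest RPN first: these failure modes get control plans first
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for desc, s, o, d in ranked:
    print(f"{desc}: RPN = {rpn(s, o, d)}")
```

The wrong-address mode ranks first here (7*4*3 = 84), which is the prioritization step described above.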
Rating Scale
Many scales are available for rating severity, occurrence and detection. For simplicity (recommended for GBs) one can use the following scale:
Rating 1:
  Severity - Low, e.g. low or no effect on the customer
  Occurrence - Low, e.g. remote likelihood
  Detection - High, e.g. always detectable
Middle ratings:
  Severity - Medium; Occurrence - Medium; Detection - Medium
[FMEA worksheet columns: Date, Process Step, Potential Failure Mode, Potential Failure Effect, SEV, Potential Causes, ...]

Calculate the RPN and give recommendations for reducing it. Calculate the new RPN after the actions have been implemented.
Topics
Selecting CTQ Characteristics Process mapping CTQ Drill down tree QFD FMEA
Performance Standards
Performance Standard
A Performance Standard defines the level of performance a CTQ must meet, e.g. loan approval within 24 hours, first call resolution. A Performance Standard translates the Voice of the Customer into a measurable metric.
CTQ Definition
A CTQ definition specifies:
- The CTQ
- Unit of Measure (e.g. days)
- Performance Standard: CTQ Target (e.g. 3 days), Specification Limits, and Defect Definition
Operational Definition
A precise description that clarifies what the CTQ is and how to measure it.

Why do we need operational definitions?
- They remove ambiguity in understanding between team members
- They clearly define a standard way to measure a CTQ
- They ensure that the CTQ representation is correct and independent of time and operator
E.g. VOC - the average time to process a loan should be 3 days, and it should not exceed 7 days. This would translate into: Target: 3 days; Upper SL: 7 days (a situation where an LSL is not required!).
Definitions
Unit (U): The output of a process.
Defect (D): Anything that results in non-conformance with set quality standards.
Note: All Opportunities should be independent of each other and must be of significance to the customer
Formulas
Defects per Unit (DPU): total defects divided by the total number of units, DPU = D / U. If there is more than one opportunity per unit, the number of defects produced may exceed the number of units. A strict measure of quality.
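As a minimal sketch of the metric just defined (the unit and defect counts are made up):

```python
# Defects per Unit: total defects divided by units produced. With more
# than one opportunity per unit, DPU can exceed 1. Counts are invented.

def dpu(total_defects, total_units):
    return total_defects / total_units

# 500 units inspected, 65 defects found across all opportunities
print(dpu(65, 500))   # 0.13 defects per unit
```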
Topics Covered
Selecting CTQ Characteristics Process mapping CTQ Drill down tree QFD FMEA
Performance Standards
Measure Phase
DEFINE - MEASURE - ANALYZE - IMPROVE - CONTROL

Step M.1: CTQ characteristics and standards
  Deliverables: Identify project Y metric; establish performance standards
  Tools: Process Map, QFD, FMEA, Pareto

Step M.2: Measurement system analysis
  Deliverables: Identify gage error
  Tools: Continuous Gage R&R, Short Form Gage, Test/Retest Study, Attribute Gage R&R
Topics
Measurement System Analysis - Concepts
Types of MSA
Evaluate the measurement system and establish the variation it induces in the process data.
Total Variation = True Process Variation + Measurement System Variation

In this step we try to reduce the variation induced by the measurement system. If the measurement system is not studied and calibrated:
- Analysis of the data may give misleading results
- The variation problem may be fixed simply by fixing the measurement system, and the project may not be required
- One might disturb the process operations without realizing that the cause of variation is the measurement system

A gage study tells us:
- The amount of measurement error
- The source of measurement error

Once the measurement error is quantified using a gage study, the next step is to minimize it until it becomes acceptable. The gage study is repeated until that level of acceptance is achieved. Until the problem of measurement variation is fixed, the data cannot be used for analysis or project purposes.
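The decomposition of total variation can be illustrated numerically; the variance figures below are invented for the example.

```python
# Numeric sketch of Total Variation = True Process Variation +
# Measurement System Variation (invented variance figures).

true_process_var = 9.0   # variance of the process itself
measurement_var = 1.0    # variance added by the measurement system
total_var = true_process_var + measurement_var

# share of the observed variation that the gage is responsible for
pct_contribution = 100 * measurement_var / total_var
print(total_var, pct_contribution)   # 10.0 10.0
```

With these numbers the gage contributes 10% of the observed variation, i.e. one tenth of what we see in the data is measurement error rather than the process.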
Process Variation
[Diagram: input -> process -> output; the process's true variation produces parts of different lengths]

True process variation + variation induced by the measurement system = perceived total variation

Because our knowledge of a problem is limited by the data available, it is critical to study the variation induced by the measurement system - the data is only as good as the measurement system that produces it.
Sources of Variation
Observed process variation splits into:
- Sample (true process) variation
- Measurement variation, which includes gage variation:
  - Accuracy (bias)
  - Precision (reproducibility)
  - Stability
  - Linearity
[Target diagram: measurements clustered on the target - at target and precise]
A fishbone diagram may be used as a tool to identify sources of variation in the measurement system.

Some other terms associated with a gage:
- Repeatability: variation when one operator repeatedly measures the same unit with the same measuring equipment.
- Reproducibility: variation when two or more people measure the same unit with the same measuring equipment.
- Stability: variation obtained when the same person measures the same unit with the same equipment over a large gap of time.
- Linearity: the consistency of the accuracy of the gage across the entire range of the measurement system.
Topics
Measurement System Analysis - Concepts
Types of MSA
Types of MSA
Attribute Gage: done when discrete data is involved.

Gage ANOVA: done with continuous data; the most exhaustive gage study, requiring resource commitment and time. Continuous data can also be run through a short-form gage study or a test-retest study.

Destructive Gage: done when the measurement event is destructive in nature and cannot be repeated.
Attribute GAGE
Checks gage for: Repeatability
Reproducibility
Accuracy
Accuracy: 90% of all individual measures, across all operators, match the standard
Reproducibility test is more sensitive if the same unit is measured by more operators. For instance, it is better to have 10 units measured by 9 operators than to have 30 units measured by 3 operators.
Accuracy is calculated by taking each measurement as a data point. Therefore, whether we have 30 units measured 3 times by 3 operators or 10 units measured 3 times by 9 operators or 10 units measured 9 times by 3 operators, there are 270 data points to calculate accuracy.
A desired resolution of 0.5% means that the gage needs to distinguish a difference of half a percentage point. If the sample size is one hundred, the gage can only distinguish a difference of one percentage point (1/100), which may not serve the purpose.
Repeatability: the % of times each operator matches on all three measures of the same unit. A total of 90 opportunities exist in the example.

Reproducibility: the % of times all operators match on all repeated measures of the same unit. A total of 30 opportunities exist in the example.

Accuracy: the % of times each individual measure matches the standard. Since there are 270 measures in total, there are 270 opportunities.
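The three scores above can be sketched in code, assuming the same layout as the example (30 units x 3 operators x 3 trials); the pass/fail readings are synthetic, not from the text.

```python
# Sketch of attribute gage scoring: 30 units x 3 operators x 3 trials.
# All readings are simulated with a 10% chance of missing the standard.
import random

random.seed(1)
units, operators, trials = 30, 3, 3
standard = {u: random.choice(["pass", "fail"]) for u in range(units)}
# measurements[u][op] is the list of 3 readings by one operator
measurements = {
    u: [[standard[u] if random.random() < 0.9 else
         ("fail" if standard[u] == "pass" else "pass")
         for _ in range(trials)]
        for _ in range(operators)]
    for u in range(units)
}

# Repeatability: operator agrees with themselves on all trials
# (30 units x 3 operators = 90 opportunities)
repeatability = sum(len(set(reads)) == 1
                    for u in measurements
                    for reads in measurements[u]) / (units * operators)

# Reproducibility: all operators agree on all trials for a unit
# (30 opportunities)
reproducibility = sum(len({r for reads in measurements[u] for r in reads}) == 1
                      for u in measurements) / units

# Accuracy: each individual reading matches the standard (270 opportunities)
accuracy = sum(r == standard[u]
               for u in measurements
               for reads in measurements[u]
               for r in reads) / (units * operators * trials)

print(repeatability, reproducibility, accuracy)
```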
Accuracy can only be measured when the standard (true) value is known.
104, 95, 101, 99, 95, 97, 96, 105, 101, 106, 94, 100, 98, 103, 98, 96, 100, 98, 95, 103
Run Chart of C1 (the 20 observations above):
  Number of runs about median: 13 (expected: 11.0); longest run about median: 3
  Approx p-value for clustering: 0.82; for mixtures: 0.18
  Number of runs up or down: 16 (expected: 13.0); longest run up or down: 2
  Approx p-value for trends: 0.95; for oscillation: 0.05
Summary for C1 (graphical summary):
  Anderson-Darling normality test: A-squared = 0.36, p-value = 0.425
  Mean = 99.200, StDev = 3.622, Variance = 13.116
  Skewness = 0.367, Kurtosis = -0.959, N = 20
  Minimum = 94.000, 1st quartile = 96.000, Median = 98.500, 3rd quartile = 102.500, Maximum = 106.000
  95% confidence intervals: mean 97.505 to 100.895; median 96.235 to 101.000; StDev 2.754 to 5.290
EV and AV
Total variation (R&R) in a measurement system is further classified as:
- Equipment Variation (EV): variation within operator, within equipment or gage, and within the method. It comes from the parts of the measurement system or process and is also termed repeatability variation.
- Appraiser Variation (AV): variation introduced between different operators, different parts and different methods. It is also termed reproducibility variation.

The relationship between total variation, EV and AV is:

(StDev(R&R))^2 = (StDev(Repeatability))^2 + (StDev(Reproducibility))^2
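The equation can be expressed directly in code; note that it is the variances that add, not the standard deviations.

```python
# The gage R&R standard deviation combines repeatability (EV) and
# reproducibility (AV): variances add, standard deviations do not.
import math

def rr_stdev(sd_repeatability, sd_reproducibility):
    """StDev(R&R)^2 = StDev(Repeatability)^2 + StDev(Reproducibility)^2."""
    return math.sqrt(sd_repeatability**2 + sd_reproducibility**2)

print(rr_stdev(3.0, 4.0))   # 5.0, not 7.0
```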
The gage variation might cause this data to pass on lower specification limit
The gage variation might cause this data to pass on upper specification limit
Centre circle is the true data point and the limits on left and right show the measurement variation
A p-value < 0.05 indicates significant variation. In the example, part, operator and the operator*part interaction are all significant factors.

ANOVA table sources: Total Gage R&R (Repeatability + Reproducibility: Operator, Operator*Part), Part-To-Part, Total Variation.

Look at the percentage contribution for Total Gage R&R, which is the sum of the Repeatability and Reproducibility percentage contributions. In the example it is 13.3%, which is above the acceptable 10%.

Percentage contribution: the percentage of process variation contributed by the gage. The recommended percentage contribution from Total Gage R&R should ideally be less than 10%. However, one should consult a BB/MBB if the value is between 10-15%, where one might accept the gage error and get the go-ahead for the project; the project should be evaluated by the mentor, and acceptance would depend on the process, the business and the project champion. If the contribution is beyond 15%, it is recommended that the gage be corrected and the gage study repeated.

Number of distinct categories: the number of distinct groups the measurement system can resolve in the data. For example, if 20 parts are being evaluated and the number of distinct categories is 2, most of the parts are not different enough for the gage to tell apart, and the data can only be divided into two groups; the precision of the gage is not enough for the process. The number of distinct categories should be at least 5.
[Gage R&R graphs: components of variation (Gage R&R, Repeatability, Reproducibility, Part-to-Part); R chart by operator (R-bar = 4.7, UCL = 15.36, LCL = 0); Weight by Operator and Weight by Part plots]
The X-bar chart by operator shows most of the points out of control, which indicates that most of the variation is due to the parts.
Destructive Testing
Destructive testing is unique in the sense that the measured characteristic is different after the measurement process. A few examples of destructive events are:
- Crash testing
- Tensile strength of a material
- Call quality evaluation for a call centre (unless it is done on a recorded call)

Destructive testing is a unique case: unlike Gage ANOVA, where an operator can measure the same part multiple times, a destroyed part cannot be re-measured.
Topics Covered
MSA Concepts
- Understand accuracy and precision and their components
- Understand sources of variation
  - Perceived vs. true process variation
  - Equipment vs. appraiser variation
- Why gage error is a challenge
- Gage standards

Types of MSAs
- Attribute gage
- Test-retest
- Gage R&R
Measure Phase
DEFINE - MEASURE - ANALYZE - IMPROVE - CONTROL

Step M.1: CTQ characteristics and standards
  Deliverables: Identify project Y metric; establish performance standards
  Tools: Process Map, QFD, FMEA, Pareto

Step M.2: Measurement system analysis
  Deliverables: Identify gage error
  Tools: Continuous Gage R&R, Short Form Gage, Test/Retest Study, Attribute Gage R&R
Topics
Data Collection Plan
Data Segmentation
Sampling
1. Establish operational definitions
2. Establish data collection procedures
3. Pilot operational definitions and procedures, and refine
4. Establish sampling strategy
5. Collect data
6. Monitor for data consistency and make final adjustments
Topics
Data Collection Plan
Data Segmentation
Sampling
Data Segmentation
What is Data Segmentation? Segmentation involves dividing data into logical categories for analysis. For instance, while recording the errors made by a data entry process, the project manager may choose to capture the step at which each error occurred, the operator who made it, and so on.

Tools used for data segmentation:
- Brainstorming: members of the process and project team
- SME: a subject matter expert for the process can give valuable inputs
Data Segmentation
Example: for a project on the accuracy of application processing, possible segmentation factors around the Accuracy metric include work type, touch time, vintage of processor, day of week, and number of processors.

These segmentation factors are logical groups that may turn out to be significant Xs during the Analyze phase.
Topics
Data Collection Plan
Data Segmentation
Sampling
Sampling - Objectives
Understand sampling and why it is required
Advantages of sampling
Sample: a part or subset of a population (the universe). X-bar signifies the sample mean; s signifies the sample standard deviation.

Sampling answers a few critical questions asked while doing a project: How much data is required? How and when should data be collected? Sampling also reduces the cost of collecting data by reducing the amount of data required.
Sampling
Sampling is the process of collecting a subset or portion of data from a population and predicting population characteristics from it.

Population: N = 1000
Sample: n = 50
Why Sample ?
When to sample:
- The cost associated with data is very high
- We are measuring a high-volume process

When not to sample:
- When the subset of data may not be able to predict population characteristics, e.g. if each unit of data is unique

Apart from other reasons for sampling, one of the critical ones is the time required to collect population data. Sampling is an efficient way to collect data for any kind of project or analysis.
Representative Sample
A sample is representative when it has the same statistics as the population.

How to ensure a representative sample:
- Choose a suitable sampling strategy based on the process
- Understand the nature of the process
- Understand the characteristics of the population

The sample must be representative of the population; data collected otherwise would not deliver any results and might give misleading inferences about the population.
Sampling Bias
Bias occurs when systematic differences are introduced into the sample as a result of the selection process.
A sample that is biased will lead to incorrect conclusions about the population
A biased sample carries incorrect information about the population and will lead to biased conclusions. A bias in the sample cannot be eliminated by increasing the sample size.
Measurement Bias: Measurement bias can arise if the operational definitions are not correct or the data collectors have interpreted the operational definition differently.

Non-Response Bias: Non-response bias occurs when those who do not respond to a survey differ systematically from those who do respond.
Random Sampling

Samples are picked from the production line without any order or system; each item has an equal probability of being selected in the sample.

This is the most commonly used method of sampling. Use random sampling when information about stratification is unknown. Random sampling is a population approach, so care must be taken when the process is cyclical or when the data can be stratified. Because each unit has an equal probability of being selected, random sampling avoids bias being introduced in the sampling process. For practical purposes, one can number the data coming out of the process, generate a random list in Excel or Minitab, and sample accordingly.
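As a sketch of the practical note above, the numbering-and-random-list step can also be done in a few lines of Python instead of Excel or Minitab (the helper name is my own; it assumes units can be numbered 1..N):

```python
import random

def random_sample_indices(population_size, sample_size, seed=None):
    """Pick sample_size item numbers from 1..population_size,
    each with equal probability, without replacement."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, population_size + 1), sample_size))

# e.g. number 1000 units coming off the line, then pull 50 of them
picked = random_sample_indices(1000, 50, seed=7)
```

Fixing the seed only makes the example repeatable; in practice it would be omitted.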
Stratified Random Sampling

Assembly data is stratified into two categories and random samples are collected from both.

Stratified random sampling is used when the population has different groups (strata). In such situations it is necessary that every group is represented in the sample, so samples are collected from each group (like doing random sampling within each group). The size of each sample depends on the population size of that group. E.g. to sample from an assembly line producing nuts of two sizes, A and B, the sample sizes for sizes A and B would be calculated based on the production volumes of size A and size B nuts respectively.
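The proportional allocation described above can be sketched as follows (a minimal illustration; the function name and the largest-remainder rounding are my own choices, not from the text):

```python
def stratified_sizes(strata_counts, total_sample):
    """Allocate a total sample across strata in proportion to stratum size,
    using largest-remainder rounding so the sizes sum to total_sample."""
    total = sum(strata_counts.values())
    raw = {k: total_sample * v / total for k, v in strata_counts.items()}
    sizes = {k: int(r) for k, r in raw.items()}
    # hand leftover units to the strata with the largest fractional remainders
    leftover = total_sample - sum(sizes.values())
    for k in sorted(raw, key=lambda k: raw[k] - sizes[k], reverse=True)[:leftover]:
        sizes[k] += 1
    return sizes

# nuts of size A and size B produced in a 700:300 ratio, total sample of 50
sizes = stratified_sizes({"A": 700, "B": 300}, 50)  # {"A": 35, "B": 15}
```

Within each stratum the individual units would then be picked by simple random sampling, as on the previous slide.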
Systematic Sampling
Systematic Sampling:
Random sampling and stratified sampling are done with historical data; systematic sampling is used when real-time data is coming in. Unlike random sampling, the frequency of collecting data is fixed in systematic sampling, e.g. selecting every fourth call in the call centre for call barging, or selecting 2 applications every alternate hour for quality check. Care must be taken, and the process must be studied for any underlying structures (a cycle that coincides with the sampling interval would bias the sample).
Rational sub-grouping is a process-based sampling strategy. The rational subgroups depend on the nature and type of process from which the data is selected. Rational sub-grouping helps us understand the shift, which is the difference between the long-term and short-term variability of the process. We will study this in capability analysis under the Analyze phase.
Sample more frequently for an unstable process and less frequently for a stable process. Sample more frequently for a process with a short cycle time and less frequently for one with a long cycle time.

The most important issue when considering sample frequency is the data collection objective; the sampling frequency is driven by it. E.g. if data is collected to monitor a process, one might sample daily; however, if the objective is to study the capability of the same process, one might collect data over a few months by sampling a few data points each week or month.
CI = X̄ ± Z(α/2) · σ/√n

Solving the precision term for n gives the required sample size:

n = (Z(α/2) · σ / Δ)²

Where:
Δ = error or precision required from the sample in representing the population
σ = standard deviation
n = sample size

The value of Z(α/2) depends on the confidence level we choose; the corresponding value can be taken from the Z table.

Finite Population Correction: The central limit theorem and the standard errors of the mean and of the proportion are based on the premise that the samples selected are chosen with replacement. However, in virtually all scenarios sampling is conducted without replacement from a population of finite size N. When the sample is more than 5% of the population size we use the finite population correction factor:

FPC = √((N − n)/(N − 1))

The adjusted sample size is then:

n(finite) = n / (1 + n/N)
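Putting the sample-size formula and the correction together, a small sketch (the function name is my own; it assumes the n/(1 + n/N) form of the adjustment shown above):

```python
import math
from statistics import NormalDist

def sample_size_continuous(sigma, delta, confidence=0.95, population=None):
    """n = (Z(alpha/2) * sigma / delta)^2, shrunk by the finite population
    adjustment n / (1 + n/N) when the sample exceeds 5% of the population."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # 1.96 for 95%
    n = (z * sigma / delta) ** 2
    if population is not None and n / population > 0.05:
        n = n / (1 + n / population)
    return math.ceil(n)

# e.g. sigma = 10 days, mean to be estimated within +/- 2 days
n = sample_size_continuous(sigma=10, delta=2)   # about 97
```

With a small population (say N = 200) the same call with population=200 returns a noticeably smaller n.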
σ = (UCL − X̄)/3 or σ = (X̄ − LCL)/3

Collect some data from the population and calculate the standard deviation (30 data points recommended).
Estimating Δ

Take an estimate from the business. It may be based on the resolution of measurement; for instance, if the business measures cycle time in days, do not set Δ in minutes or hours. Alternatively, collect some data from the population and calculate Δ using the sample.
n = (Z(α/2) / Δ)² · P(1 − P)

Z(α/2) is the Z score (normal data), P represents the proportion defective, and α reflects the confidence required from the sample; Z(α/2) is 1.96 for 95% confidence. This comes from the confidence interval formula for attribute (discrete) data:

CI = P ± Z(α/2) · √(P(1 − P)/n)
Again, if the sample is more than 5% of the population, the finite population correction should be used.
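The attribute-data version is the same sketch with P(1 − P) in place of σ² (hypothetical function name; the same finite population adjustment is assumed):

```python
import math
from statistics import NormalDist

def sample_size_proportion(p, delta, confidence=0.95, population=None):
    """n = (Z(alpha/2) / delta)^2 * P(1-P), with the finite population
    adjustment n / (1 + n/N) when the sample exceeds 5% of the population."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n = (z / delta) ** 2 * p * (1 - p)
    if population is not None and n / population > 0.05:
        n = n / (1 + n / population)
    return math.ceil(n)

# e.g. roughly 10% defective, estimated within +/- 3 percentage points
n = sample_size_proportion(p=0.10, delta=0.03)   # about 385
```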
Estimating P and Δ

Estimate P:
- Calculate the proportion defective using a small sample
- Use historical control charts to estimate P
- Use subject matter expertise to estimate P

Estimate Δ:
- Use subject matter expertise to estimate Δ
- Estimate Δ using the formula: Δ = Z(α/2) · √(P(1 − P)/n)
Topics Covered
Data Collection Plan
Data Segmentation: What is segmentation? Why segment?
Sampling: What is sampling? Why sample? Sampling challenges, types of bias, types of sampling
Analyze Phase
DEFINE MEASURE ANALYZE IMPROVE CONTROL

Step A.1: Baseline Process
Deliverable: Process capability for project Y
Activities: Study data for shape, stability and normality; Capability analysis; Short-term vs. long-term sigma; Understanding shift, common cause variation, special cause variation
Tools: Descriptive test, run chart, capability pack

Step A.2: Performance Objective
Deliverable: Improvement goal for project Y
Activities: Concept of benchmarking; Benchmarking types; Benchmarking as a process
Tools: Benchmarking

Step A.3: Identify drivers of variation
Deliverable: List all statistically significant Xs
Activities: Root Cause Analysis (RCA); Process map analysis; Graphical analysis; Statistical analysis
Tools: 1-Sample T-test, 2-Sample T-test, One-way ANOVA, Mood's Median, Homogeneity of Variance, Simple Linear Regression, Correlation/Scatter Diagrams, Chi-Square Test of Independence, Chi-Square Test for Goodness of Fit, Pareto, Cause and Effect diagram, VA/NVA analysis
- Descriptive statistics: nature of the distribution
- Understand specification limits and centering or target values
- Calculate the probability of a defect and process capability
- Use Minitab for these purposes
Normality
Tools used: Descriptive test, Normality test. Check if the data follows a normal distribution.

Shape
Tools used: Descriptive stats, Histogram. Study distribution characteristics.
An important thing to remember is that the data should be in time order before a run chart is plotted. If the data is sorted, or does not follow time order in some other way, the run chart loses its significance. A run chart studies the stability of a process over a period of time by looking for any abnormality in the data, such as trends, clusters and so on. A run chart can be plotted for individual values, or for means or medians of subgroups (if subgroups are present). Apart from checking process stability over time, a run chart can also give a graphical pre- and post-change comparison of the process, as it is plotted on a time scale. The only difference between a run chart and a control chart comes from the control limits.
The graph you see in the Minitab output window is the run chart. The run chart is used for stability and helps identify whether there is any special cause in the process and when it appears on the time scale. Below the graph you will see hypothesis tests for different measures. The run chart provides two tests for randomness: one based on the number of runs about the median (mixtures and clusters), and one based on the number of runs up or down (trends and oscillations). P-values for the following special cases should be read from the run chart output:

Trend: Ho: No fewer runs observed than expected. Ha: Fewer runs observed than expected. A trend is a sustained drift in the data, either up or down. It might indicate that the process is about to go out of control or is out of control.

Oscillations: Ho: No more runs observed than expected. Ha: More runs observed than expected. Oscillation occurs when the data fluctuates up and down rapidly, indicating that the process is not steady.

Clusters: Ho: No fewer runs observed than expected. Ha: Fewer runs observed than expected. Clusters are group-to-group variability, which may indicate variation due to special causes, such as measurement problems or sampling from a bad group of parts.

Mixtures: Ho: No more runs observed than expected. Ha: More runs observed than expected. A mixture can be identified by an absence of points near the center line. Mixtures often indicate combined data from two populations, or two processes operating at different levels.
Studying Stability
Ho: Data is random, special causes not present Ha: Data is not random, special causes present
Studying Stability
If you have done rational subgrouping in your data, it can be used for the run chart as well. In the input window you can specify the subgroup size or the column representing the subgrouping in the data. The run chart will then plot the mean or median, depending on your choice. It also plots the variation within each subgroup, apart from stability over a period of time.
Descriptive statistics help in studying the shape of the data distribution and the measures of central tendency and dispersion. They also help to find out what kind of issue we have with the process: is it an issue with centering or an issue with variation?
Tabular Output
1. Double click on C1 Agent A
2. Click OK
Results for: DescriptiveStats.MTW Descriptive Statistics: Agent A Variable N N* Mean SE Mean StDev Minimum Q1 Median Q3
Graphical Output
Click on Stat > Basic Statistics > Graphical Summary
The graphical summary from Minitab lists all the basic statistics of the data, including measures of central tendency and variation (spread), and their graphical representation. The key statistics output from the graphical summary are: Normality P-value, Mean, Median, Standard Deviation, Variance, Kurtosis (a measure of the peakedness of the data), and Quartiles. In graphical representation: Histogram (with or without normal curve), Box plot and Dot plot. In the following pages we will cover a few basic statistical tools.
Different Shapes
Normal
Skewed
Different Shapes
Long Tailed
Bi-modal
Such a situation suggests data coming from two processes, not one. One should go back and have a look at the processes, separate the data by process, and then analyze.
Normality
One can test normality of data using: Descriptive test Normality test Hypothesis used for the test is: H0: data follow a normal distribution H1: data do not follow a normal distribution
[Normal curve: the areas within ±1, ±2 and ±3 standard deviations of the mean are 68.26%, 95.46% and 99.73% respectively.]

The probability of a randomly selected value being within ±2 standard deviations of the mean is 95.46%.
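These percentages can be checked against the standard normal CDF using Python's statistics.NormalDist (note the slide's 95.46% is a common rounding of the 95.45% computed here):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: mean 0, standard deviation 1

def within(k):
    """P(-k sigma < X < +k sigma) for a normal distribution."""
    return nd.cdf(k) - nd.cdf(-k)

for k in (1, 2, 3):
    print(f"within +/-{k} sigma: {within(k):.4%}")
# within +/-1 sigma: 68.2689%
# within +/-2 sigma: 95.4500%
# within +/-3 sigma: 99.7300%
```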
Z - Value
For any value of X in a distribution there is an equivalent Z value. The Z value is a measure of the number of standard deviations that fit between X and the mean. Statistically,

Z = (X − μ) / σ

The Z-value is used to transform any normal distribution in terms of the standard normal distribution, with a mean of 0 and a standard deviation of 1. The Z value is important as it allows us to compare two dissimilar distributions by using the standard deviation as the unit of measure. Note: Z value and Z capability are not the same.
Z - Value

A Simple Z Capability Calculation

For this calculation assume USL = 14, LSL = 4, Nominal = 10, X̄ = 10, s = 2. (Nominal is also called the Target value.)

ZUSL = (USL − X̄) / s = (14 − 10) / 2 = 2

ZLSL = (X̄ − LSL) / s = (10 − 4) / 2 = 3

Note how the numerator order changes between LSL and USL; this is done to ensure that the resulting number is always positive. The total area under any distribution is always 1. The shaded region represents the probability of a value being outside the specification limits.
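The worked example translates directly into code (a trivial sketch; the function names are my own):

```python
def z_usl(usl, mean, s):
    """Standard deviations from the mean up to the upper spec limit."""
    return (usl - mean) / s

def z_lsl(lsl, mean, s):
    """Numerator order flipped so the result stays positive."""
    return (mean - lsl) / s

# the slide's numbers: USL = 14, LSL = 4, X-bar = 10, s = 2
assert z_usl(14, 10, 2) == 2.0
assert z_lsl(4, 10, 2) == 3.0
```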
P(d)Total = P(d)LSL + P(d)USL

The total probability of a defect is calculated by adding the probability of defects occurring at values less than the LSL and greater than the USL. These values can be obtained using the ZLSL and ZUSL capability values you calculated earlier. Using the Z table, then find the Z-value that corresponds to this total probability of defect; this is ZBench, and it represents long-term process capability. To calculate short-term process capability, just add the Z shift to the long-term capability.

Note: the Z shift is empirically set at 1.5. If it is not mentioned whether the data is short or long term, always assume it to be long term. If the data is short term, your ZBench will be the short-term capability; subtract the Z shift to obtain Z long term. 1 − P(d)Total is the probability of conformance, also called yield; this yield can be converted directly to sigma using a sigma conversion table.
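The ZBench lookup described above can be done numerically rather than with a Z table (a sketch using Python's statistics.NormalDist; the numbers reuse the earlier example, where ZLSL = 3 and ZUSL = 2):

```python
from statistics import NormalDist

nd = NormalDist()

def z_bench_long_term(z_lsl, z_usl):
    """Total defect probability is the tail area beyond each spec limit;
    ZBench is the single Z whose upper tail area equals that total."""
    p_defect = (1 - nd.cdf(z_lsl)) + (1 - nd.cdf(z_usl))
    return nd.inv_cdf(1 - p_defect)

z_lt = z_bench_long_term(z_lsl=3.0, z_usl=2.0)   # about 1.98
z_st = z_lt + 1.5    # short-term capability with the empirical 1.5 shift
```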
Single-Tail Z Table A
(Values of Z from 0.00 to 4.99)
Z
0.00 0.10 0.20 0.30 0.40 0.50 0.60 0.70 0.80 0.90 1.00 1.10 1.20 1.30 1.40 1.50 1.60 1.70 1.80 1.90 2.00 2.10 2.20 2.30 2.40 2.50 2.60 2.70 2.80 2.90 3.00 3.10 3.20 3.30 3.40 3.50 3.60 3.70 3.80 3.90 4.00 4.10 4.20 4.30 4.40 4.50 4.60 4.70 4.80 4.90
0.00
5.00e-001 4.60e-001 4.21e-001 3.82e-001 3.45e-001 3.09e-001 2.74e-001 2.42e-001 2.12e-001 1.84e-001 1.59e-001 1.36e-001 1.15e-001 9.68e-002 8.08e-002 6.68e-002 5.48e-002 4.46e-002 3.59e-002 2.87e-002 2.28e-002 1.79e-002 1.39e-002 1.07e-002 8.20e-003 6.21e-003 4.66e-003 3.47e-003 2.56e-003 1.87e-003 1.35e-003 9.68e-004 6.87e-004 4.83e-004 3.37e-004 2.33e-004 1.59e-004 1.08e-004 7.23e-005 4.81e-005 3.17e-005 2.07e-005 1.33e-005 8.54e-006 5.41e-006 3.40e-006 2.11e-006 1.30e-006 7.93e-007 4.79e-007
0.01
4.96e-001 4.56e-001 4.17e-001 3.78e-001 3.41e-001 3.05e-001 2.71e-001 2.39e-001 2.09e-001 1.81e-001 1.56e-001 1.33e-001 1.13e-001 9.51e-002 7.93e-002 6.55e-002 5.37e-002 4.36e-002 3.51e-002 2.81e-002 2.22e-002 1.74e-002 1.36e-002 1.04e-002 7.98e-003 6.04e-003 4.53e-003 3.36e-003 2.48e-003 1.81e-003 1.31e-003 9.35e-004 6.64e-004 4.66e-004 3.25e-004 2.24e-004 1.53e-004 1.04e-004 6.95e-005 4.61e-005 3.04e-005 1.98e-005 1.28e-005 8.16e-006 5.17e-006 3.24e-006 2.01e-006 1.24e-006 7.55e-007 4.55e-007
0.02
4.92e-001 4.52e-001 4.13e-001 3.74e-001 3.37e-001 3.02e-001 2.68e-001 2.36e-001 2.06e-001 1.79e-001 1.54e-001 1.31e-001 1.11e-001 9.34e-002 7.78e-002 6.43e-002 5.26e-002 4.27e-002 3.44e-002 2.74e-002 2.17e-002 1.70e-002 1.32e-002 1.02e-002 7.76e-003 5.87e-003 4.40e-003 3.26e-003 2.40e-003 1.75e-003 1.26e-003 9.04e-004 6.41e-004 4.50e-004 3.13e-004 2.16e-004 1.47e-004 9.96e-005 6.67e-005 4.43e-005 2.91e-005 1.89e-005 1.22e-005 7.80e-006 4.94e-006 3.09e-006 1.92e-006 1.18e-006 7.18e-007 4.33e-007
0.03
4.88e-001 4.48e-001 4.09e-001 3.71e-001 3.34e-001 2.98e-001 2.64e-001 2.33e-001 2.03e-001 1.76e-001 1.52e-001 1.29e-001 1.09e-001 9.18e-002 7.64e-002 6.30e-002 5.16e-002 4.18e-002 3.36e-002 2.68e-002 2.12e-002 1.66e-002 1.29e-002 9.90e-003 7.55e-003 5.70e-003 4.27e-003 3.17e-003 2.33e-003 1.69e-003 1.22e-003 8.74e-004 6.19e-004 4.34e-004 3.02e-004 2.08e-004 1.42e-004 9.57e-005 6.41e-005 4.25e-005 2.79e-005 1.81e-005 1.17e-005 7.46e-006 4.71e-006 2.95e-006 1.83e-006 1.12e-006 6.83e-007 4.11e-007
0.04
4.84e-001 4.44e-001 4.05e-001 3.67e-001 3.30e-001 2.95e-001 2.61e-001 2.30e-001 2.00e-001 1.74e-001 1.49e-001 1.27e-001 1.07e-001 9.01e-002 7.49e-002 6.18e-002 5.05e-002 4.09e-002 3.29e-002 2.62e-002 2.07e-002 1.62e-002 1.25e-002 9.64e-003 7.34e-003 5.54e-003 4.15e-003 3.07e-003 2.26e-003 1.64e-003 1.18e-003 8.45e-004 5.98e-004 4.19e-004 2.91e-004 2.00e-004 1.36e-004 9.20e-005 6.15e-005 4.07e-005 2.67e-005 1.74e-005 1.12e-005 7.12e-006 4.50e-006 2.81e-006 1.74e-006 1.07e-006 6.49e-007 3.91e-007
0.05
4.80e-001 4.40e-001 4.01e-001 3.63e-001 3.26e-001 2.91e-001 2.58e-001 2.27e-001 1.98e-001 1.71e-001 1.47e-001 1.25e-001 1.06e-001 8.85e-002 7.35e-002 6.06e-002 4.95e-002 4.01e-002 3.22e-002 2.56e-002 2.02e-002 1.58e-002 1.22e-002 9.39e-003 7.14e-003 5.39e-003 4.02e-003 2.98e-003 2.19e-003 1.59e-003 1.14e-003 8.16e-004 5.77e-004 4.04e-004 2.80e-004 1.93e-004 1.31e-004 8.84e-005 5.91e-005 3.91e-005 2.56e-005 1.66e-005 1.07e-005 6.81e-006 4.29e-006 2.68e-006 1.66e-006 1.02e-006 6.17e-007 3.71e-007
0.06
4.76e-001 4.36e-001 3.97e-001 3.59e-001 3.23e-001 2.88e-001 2.55e-001 2.24e-001 1.95e-001 1.69e-001 1.45e-001 1.23e-001 1.04e-001 8.69e-002 7.21e-002 5.94e-002 4.85e-002 3.92e-002 3.14e-002 2.50e-002 1.97e-002 1.54e-002 1.19e-002 9.14e-003 6.95e-003 5.23e-003 3.91e-003 2.89e-003 2.12e-003 1.54e-003 1.11e-003 7.89e-004 5.57e-004 3.90e-004 2.70e-004 1.85e-004 1.26e-004 8.50e-005 5.67e-005 3.75e-005 2.45e-005 1.59e-005 1.02e-005 6.50e-006 4.10e-006 2.56e-006 1.58e-006 9.68e-007 5.87e-007 3.52e-007
0.07
4.72e-001 4.33e-001 3.94e-001 3.56e-001 3.19e-001 2.84e-001 2.51e-001 2.21e-001 1.92e-001 1.66e-001 1.42e-001 1.21e-001 1.02e-001 8.53e-002 7.08e-002 5.82e-002 4.75e-002 3.84e-002 3.07e-002 2.44e-002 1.92e-002 1.50e-002 1.16e-002 8.89e-003 6.76e-003 5.08e-003 3.79e-003 2.80e-003 2.05e-003 1.49e-003 1.07e-003 7.62e-004 5.38e-004 3.76e-004 2.60e-004 1.78e-004 1.21e-004 8.16e-005 5.44e-005 3.59e-005 2.35e-005 1.52e-005 9.77e-006 6.21e-006 3.91e-006 2.44e-006 1.51e-006 9.21e-007 5.58e-007 3.35e-007
0.08
4.68e-001 4.29e-001 3.90e-001 3.52e-001 3.16e-001 2.81e-001 2.48e-001 2.18e-001 1.89e-001 1.64e-001 1.40e-001 1.19e-001 1.00e-001 8.38e-002 6.94e-002 5.71e-002 4.65e-002 3.75e-002 3.01e-002 2.39e-002 1.88e-002 1.46e-002 1.13e-002 8.66e-003 6.57e-003 4.94e-003 3.68e-003 2.72e-003 1.99e-003 1.44e-003 1.04e-003 7.36e-004 5.19e-004 3.62e-004 2.51e-004 1.72e-004 1.17e-004 7.84e-005 5.22e-005 3.45e-005 2.25e-005 1.46e-005 9.34e-006 5.93e-006 3.73e-006 2.32e-006 1.43e-006 8.76e-007 5.30e-007 3.18e-007
0.09
4.64e-001 4.25e-001 3.86e-001 3.48e-001 3.12e-001 2.78e-001 2.45e-001 2.15e-001 1.87e-001 1.61e-001 1.38e-001 1.17e-001 9.85e-002 8.23e-002 6.81e-002 5.59e-002 4.55e-002 3.67e-002 2.94e-002 2.33e-002 1.83e-002 1.43e-002 1.10e-002 8.42e-003 6.39e-003 4.80e-003 3.57e-003 2.64e-003 1.93e-003 1.39e-003 1.00e-003 7.11e-004 5.01e-004 3.49e-004 2.42e-004 1.65e-004 1.12e-004 7.53e-005 5.01e-005 3.30e-005 2.16e-005 1.39e-005 8.93e-006 5.67e-006 3.56e-006 2.22e-006 1.37e-006 8.34e-007 5.04e-007 3.02e-007
Single-Tail Z Table A
(Values of Z from 5.00 to 9.99)
Z
5.00 5.10 5.20 5.30 5.40 5.50 5.60 5.70 5.80 5.90 6.00 6.10 6.20 6.30 6.40 6.50 6.60 6.70 6.80 6.90 7.00 7.10 7.20 7.30 7.40 7.50 7.60 7.70 7.80 7.90 8.00 8.10 8.20 8.30 8.40 8.50 8.60 8.70 8.80 8.90 9.00 9.10 9.20 9.30 9.40 9.50 9.60 9.70 9.80 9.90
0.00
2.87e-007 1.70e-007 9.96e-008 5.79e-008 3.33e-008 1.90e-008 1.07e-008 5.99e-009 3.32e-009 1.82e-009 9.87e-010 5.30e-010 2.82e-010 1.49e-010 7.77e-011 4.02e-011 2.06e-011 1.04e-011 5.23e-012 2.60e-012 1.28e-012 6.24e-013 3.01e-013 1.44e-013 6.81e-014 3.19e-014 1.48e-014 6.80e-015 3.10e-015 1.39e-015 6.22e-016 2.75e-016 1.20e-016 5.21e-017 2.23e-017 9.48e-018 3.99e-018 1.66e-018 6.84e-019 2.79e-019 1.13e-019 4.52e-020 1.79e-020 7.02e-021 2.73e-021 1.05e-021 4.00e-022 1.51e-022 5.63e-023 2.08e-023
0.01
2.72e-007 1.61e-007 9.44e-008 5.48e-008 3.15e-008 1.79e-008 1.01e-008 5.65e-009 3.12e-009 1.71e-009 9.28e-010 4.98e-010 2.65e-010 1.40e-010 7.28e-011 3.76e-011 1.92e-011 9.73e-012 4.88e-012 2.42e-012 1.19e-012 5.80e-013 2.80e-013 1.34e-013 6.31e-014 2.96e-014 1.37e-014 6.29e-015 2.86e-015 1.29e-015 5.74e-016 2.53e-016 1.11e-016 4.79e-017 2.05e-017 8.70e-018 3.65e-018 1.52e-018 6.26e-019 2.55e-019 1.03e-019 4.12e-020 1.63e-020 6.39e-021 2.48e-021 9.53e-022 3.63e-022 1.37e-022 5.10e-023 1.88e-023
0.02
2.58e-007 1.53e-007 8.95e-008 5.19e-008 2.98e-008 1.69e-008 9.55e-009 5.33e-009 2.94e-009 1.61e-009 8.72e-010 4.68e-010 2.49e-010 1.31e-010 6.81e-011 3.52e-011 1.80e-011 9.09e-012 4.55e-012 2.26e-012 1.11e-012 5.40e-013 2.60e-013 1.24e-013 5.86e-014 2.74e-014 1.27e-014 5.82e-015 2.64e-015 1.19e-015 5.29e-016 2.33e-016 1.02e-016 4.40e-017 1.88e-017 7.98e-018 3.35e-018 1.39e-018 5.72e-019 2.33e-019 9.40e-020 3.76e-020 1.49e-020 5.82e-021 2.26e-021 8.66e-022 3.29e-022 1.24e-022 4.62e-023 1.70e-023
0.03
2.45e-007 1.45e-007 8.48e-008 4.91e-008 2.82e-008 1.60e-008 9.01e-009 5.02e-009 2.77e-009 1.51e-009 8.20e-010 4.39e-010 2.33e-010 1.23e-010 6.38e-011 3.29e-011 1.68e-011 8.48e-012 4.25e-012 2.10e-012 1.03e-012 5.02e-013 2.41e-013 1.15e-013 5.43e-014 2.54e-014 1.17e-014 5.38e-015 2.44e-015 1.10e-015 4.87e-016 2.15e-016 9.36e-017 4.04e-017 1.73e-017 7.32e-018 3.07e-018 1.27e-018 5.23e-019 2.13e-019 8.58e-020 3.42e-020 1.35e-020 5.29e-021 2.05e-021 7.86e-022 2.99e-022 1.12e-022 4.18e-023 1.54e-023
0.04
2.33e-007 1.37e-007 8.03e-008 4.65e-008 2.66e-008 1.51e-008 8.50e-009 4.73e-009 2.61e-009 1.43e-009 7.71e-010 4.13e-010 2.19e-010 1.15e-010 5.97e-011 3.08e-011 1.57e-011 7.92e-012 3.96e-012 1.96e-012 9.61e-013 4.67e-013 2.24e-013 1.07e-013 5.03e-014 2.35e-014 1.09e-014 4.97e-015 2.25e-015 1.01e-015 4.49e-016 1.98e-016 8.61e-017 3.71e-017 1.59e-017 6.71e-018 2.81e-018 1.17e-018 4.79e-019 1.95e-019 7.83e-020 3.12e-020 1.23e-020 4.82e-021 1.86e-021 7.14e-022 2.71e-022 1.02e-022 3.79e-023 1.39e-023
0.05
2.21e-007 1.30e-007 7.60e-008 4.40e-008 2.52e-008 1.43e-008 8.02e-009 4.46e-009 2.46e-009 1.34e-009 7.24e-010 3.87e-010 2.05e-010 1.08e-010 5.59e-011 2.88e-011 1.47e-011 7.39e-012 3.69e-012 1.83e-012 8.95e-013 4.34e-013 2.08e-013 9.91e-014 4.67e-014 2.18e-014 1.00e-014 4.59e-015 2.08e-015 9.33e-016 4.14e-016 1.82e-016 7.92e-017 3.41e-017 1.46e-017 6.15e-018 2.57e-018 1.07e-018 4.38e-019 1.78e-019 7.15e-020 2.85e-020 1.12e-020 4.38e-021 1.69e-021 6.48e-022 2.46e-022 9.22e-023 3.43e-023 1.26e-023
0.06
2.10e-007 1.23e-007 7.20e-008 4.16e-008 2.38e-008 1.35e-008 7.57e-009 4.21e-009 2.31e-009 1.26e-009 6.81e-010 3.64e-010 1.92e-010 1.01e-010 5.24e-011 2.69e-011 1.37e-011 6.90e-012 3.44e-012 1.70e-012 8.33e-013 4.03e-013 1.94e-013 9.20e-014 4.33e-014 2.02e-014 9.30e-015 4.25e-015 1.92e-015 8.60e-016 3.81e-016 1.68e-016 7.28e-017 3.14e-017 1.34e-017 5.64e-018 2.36e-018 9.76e-019 4.00e-019 1.62e-019 6.52e-020 2.59e-020 1.02e-020 3.99e-021 1.54e-021 5.89e-022 2.23e-022 8.36e-023 3.10e-023 1.14e-023
0.07
1.99e-007 1.17e-007 6.82e-008 3.94e-008 2.25e-008 1.27e-008 7.14e-009 3.96e-009 2.18e-009 1.19e-009 6.40e-010 3.41e-010 1.81e-010 9.45e-011 4.90e-011 2.52e-011 1.28e-011 6.44e-012 3.21e-012 1.58e-012 7.75e-013 3.75e-013 1.80e-013 8.53e-014 4.01e-014 1.87e-014 8.60e-015 3.92e-015 1.77e-015 7.93e-016 3.51e-016 1.54e-016 6.70e-017 2.88e-017 1.23e-017 5.17e-018 2.16e-018 8.93e-019 3.66e-019 1.48e-019 5.95e-020 2.37e-020 9.31e-021 3.63e-021 1.40e-021 5.35e-022 2.02e-022 7.57e-023 2.81e-023 1.03e-023
0.08
1.89e-007 1.11e-007 6.46e-008 3.72e-008 2.13e-008 1.20e-008 6.73e-009 3.74e-009 2.05e-009 1.12e-009 6.01e-010 3.21e-010 1.69e-010 8.85e-011 4.59e-011 2.35e-011 1.19e-011 6.01e-012 2.99e-012 1.48e-012 7.21e-013 3.49e-013 1.67e-013 7.91e-014 3.72e-014 1.73e-014 7.95e-015 3.63e-015 1.64e-015 7.32e-016 3.24e-016 1.42e-016 6.16e-017 2.65e-017 1.13e-017 4.74e-018 1.98e-018 8.17e-019 3.34e-019 1.35e-019 5.43e-020 2.16e-020 8.47e-021 3.30e-021 1.27e-021 4.85e-022 1.83e-022 6.86e-023 2.54e-023 9.32e-024
0.09
1.79e-007 1.05e-007 6.12e-008 3.52e-008 2.01e-008 1.14e-008 6.35e-009 3.52e-009 1.93e-009 1.05e-009 5.65e-010 3.01e-010 1.59e-010 8.29e-011 4.29e-011 2.20e-011 1.12e-011 5.61e-012 2.79e-012 1.37e-012 6.71e-013 3.24e-013 1.55e-013 7.34e-014 3.44e-014 1.60e-014 7.36e-015 3.35e-015 1.51e-015 6.75e-016 2.98e-016 1.31e-016 5.66e-017 2.43e-017 1.03e-017 4.35e-018 1.81e-018 7.48e-019 3.06e-019 1.24e-019 4.95e-020 1.96e-020 7.71e-021 3.00e-021 1.16e-021 4.40e-022 1.66e-022 6.21e-023 2.30e-023 8.43e-024
This is known as the Standard Normal Distribution, which has a mean of zero (μ = 0) and a standard deviation of one (σ = 1)
Read ZLT from the Capability Analysis report and add 1.5 to arrive at ZST.

To convert continuous data to discrete, follow these steps:
1. Minitab > Stat > Tables > Tally Individual Variables
2. Under Variables select the column that has the data on project Y
3. Select Count and Cumulative percent
4. Look for the yield at the USL or LSL in the session window
5. Calculate the Z value using the abridged Z table
If the subgroup size = 1, the short-term sigma value reported in the Capability Analysis report is invalid, as there are no subgroups and hence no subgroup variation. If upper and lower specification limits are provided without a target, Minitab assumes that the target is the midpoint of the specification range. If only one specification limit is entered and the target is left blank, Minitab approximates the target with the mean of the data.
Always check the S chart for stability to judge whether the rational subgrouping is acceptable:
- S chart in control indicates the subgroups are acceptable
- S chart out of control indicates the sampling is not acceptable
If the graph shows an R chart instead of an S chart: if the R chart is out of control, the Xbar chart interpretation may not be reliable, as the control limits on the Xbar chart are derived from the Rbar value of the R chart. If the R chart is not in control, the rational subgrouping is not correct: within-subgroup variation is higher than between-subgroup variation.
[Sigma conversion table. The recoverable anchor rows are:

Sigma | Yield | DPMO
6.0 | 99.99966% | 3.4
5.0 | 99.9770% | 230
4.0 | 99.3790% | 6,210
3.0 | 93.320% | 66,800
2.0 | 69.20% | 308,000
1.0 | 31% | 690,000

The original table also lists defect rates at intermediate sigma levels and at different defects-per-opportunity scales.]
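The table lookup can also be reproduced numerically (a sketch assuming the standard DPMO definition, defects per million opportunities, and the empirical 1.5 shift from the following slides):

```python
from statistics import NormalDist

nd = NormalDist()

def dpmo_to_sigma(dpmo):
    """Long-term Z from DPMO, plus the 1.5 shift, gives the short-term
    sigma level that conversion tables report."""
    z_lt = nd.inv_cdf(1 - dpmo / 1_000_000)
    return z_lt + 1.5

print(round(dpmo_to_sigma(3.4), 1))      # 6.0
print(round(dpmo_to_sigma(6210), 1))     # 4.0
print(round(dpmo_to_sigma(308000), 1))   # 2.0
```

These reproduce the anchor rows of the table above.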
* Discussed later

ZST assumption: theoretically, process performance is more consistent in the short term than in the long term; hence short-term sigma is always better than long-term sigma.
The Shift
[Figure: a centered distribution sits 6σ from each specification limit; after the 1.5σ shift it sits 4.5σ from the nearer limit.]
The assumed shift of 1.5 sigma comes from manufacturing industry. Shift compensates for the variation which is non-random in nature and does not get captured while calculating short term sigma. It accounts for the changes in process capability over many cycles of that process.
Therefore, six sigma is a measure of the short term capability of a process. If a normal process is centered at 6 standard deviations from each specification limit (or is on target) in the short term, assuming standard shift, the process would tend to move 1.5 standard deviations towards either specification limit in the long term. This means, the process will be 4.5 standard deviations from one specification limit and 7.5 standard deviations from the other.
[Figure: the process distribution at time t and again at time t + x, with the mean drifted; the long-term capability spans the combined spread between LSL and USL.]
As discussed in the previous slide, the process mean typically shifts by 1.5 standard deviations around the target. Short term sigma is a measure of the process at a particular point in time whereas long term sigma is a measure of the process over a period of time.
Types of Variation
Common Cause variation:
- Random process variation
- Also called the inherent variation of any given process
- Variation comes from: People, Materials, Methods, Machines, Measurements, Mother Nature

Special Cause variation:
- Non-random process variation
- Assignable cause
- Can generate outliers
Rational Subgrouping
[Figure: successive rational subgroups plotted over time; common cause variation appears within each subgroup.]
The objective behind rational subgrouping is to have subgroups with only common cause variation and minimal special cause variation within them. The attempt is to collect samples of data in such a way that special causes account for the variation between different subgroups.

A subgroup should contain only common cause variation. There is variation between different subgroups; this variation is accounted for primarily by special causes (though it might include some common cause variation as well). In other words, short-term capability should reflect common cause variation only, while long-term capability includes both common cause and special cause variation.
First Edition
181
Rational Subgroups
Characteristics of a good rational subgroup:
- Samples are collected over a long period of time, so that each sample represents a rational subgroup defined by some logic (usually one process cycle)
- Variation within each subgroup is minimized, and variation between subgroups is maximized
Examples of sampling to create rational subgroups:
- Sample 3 applications each hour
- Sample 3 applications each day
- Sample 3 applications each shift
In high-volume situations, use the Xbar-R chart to evaluate short- and long-term stability. In low-volume situations, process measurements typically come one at a time; use the X and moving range (XmR) chart.
First Edition
182
Components of Variation
SS_Total = SS_Between + SS_Within

SS_Total = ΣΣ (Xij − X̿)², summed over j = 1..K subgroups and i = 1..N observations per subgroup
SS_Between = Σ N (X̄j − X̿)², summed over j = 1..K
SS_Within = ΣΣ (Xij − X̄j)², summed over j = 1..K and i = 1..N

SS_Total: deviation of each individual point from the overall mean
SS_Between: deviation of each subgroup mean from the overall mean (variation of the means)
SS_Within: deviation of each individual point from its corresponding subgroup mean (variation of the individuals within a group)
Xij: individual observation; X̄j: subgroup mean; X̿: overall mean
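The decomposition above can be checked numerically. A minimal sketch with made-up subgroup data (all values are illustrative, not from the text):

```python
import math

# Hypothetical example: g = 3 subgroups of n = 4 observations each
subgroups = [
    [12.0, 11.5, 12.3, 11.8],
    [13.1, 12.9, 13.4, 12.8],
    [11.2, 11.6, 11.0, 11.4],
]

all_points = [x for grp in subgroups for x in grp]
grand_mean = sum(all_points) / len(all_points)
group_means = [sum(grp) / len(grp) for grp in subgroups]

# SS_Total: deviation of each individual point from the overall mean
ss_total = sum((x - grand_mean) ** 2 for x in all_points)

# SS_Between: deviation of each subgroup mean from the overall mean
ss_between = sum(len(grp) * (m - grand_mean) ** 2
                 for grp, m in zip(subgroups, group_means))

# SS_Within: deviation of each point from its own subgroup mean
ss_within = sum((x - m) ** 2
                for grp, m in zip(subgroups, group_means) for x in grp)

# The identity SS_Total = SS_Between + SS_Within holds exactly
assert math.isclose(ss_total, ss_between + ss_within)
```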
First Edition
183
Long-Term
σ_LT = √[ ΣΣ (Xij − X̿)² / (gn − 1) ], summed over j = 1..g and i = 1..n
Where n is the subgroup size and g is the number of subgroups. The numerator is SS_Total; X̿ is the overall mean.

Short-Term
σ_ST = √[ ΣΣ (Xij − X̄j)² / (g(n − 1)) ], summed over j = 1..g and i = 1..n
Where n is the subgroup size and g is the number of subgroups. The numerator is SS_Within.
First Edition
184
Z Equation
The Equation
Z = (Specification Limit − Central Tendency) / Standard Deviation
Variations:
- Z: short term (ZST) or long term (ZLT)
- Specification Limit: USL or LSL
- Central Tendency: mean (μ) or target (T)
- Standard Deviation: σST or σLT

Z long-term is the actual, current capability of the process. It is always calculated with respect to the population mean; SL − μ is the distance between the specification limit given by the customer and the true population mean.

ZLT = (SL − μ) / σLT

Z short-term is the best the process can deliver if the current distribution is assumed to be centered on the target (and not the mean): with the same variation, what would the capability be if the distribution were centered on the target? For this reason Z short-term uses the target value rather than the population mean; SL − T is the distance from the specification limit to the target.

ZST = (SL − T) / σST

ZShift = ZST − ZLT

The higher the shift, the bigger the control issue in the process. The typical assumed shift, when rational subgrouping is not present, is 1.5 sigma.
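A sketch of these calculations, using hypothetical subgrouped data and a hypothetical USL and target (all values are assumptions for illustration):

```python
import math

# Hypothetical subgrouped data: g subgroups of size n
subgroups = [
    [10.2, 9.8, 10.1, 10.0],
    [10.6, 10.4, 10.9, 10.5],
    [ 9.7, 10.0,  9.9, 10.2],
]
USL, target = 12.0, 10.0  # assumed customer limit and process target

g, n = len(subgroups), len(subgroups[0])
points = [x for grp in subgroups for x in grp]
mean = sum(points) / len(points)

# Long-term sigma: every point against the overall mean, gn - 1 dof
sigma_lt = math.sqrt(sum((x - mean) ** 2 for x in points) / (g * n - 1))

# Short-term sigma: every point against its subgroup mean, g(n - 1) dof
sigma_st = math.sqrt(
    sum((x - sum(grp) / n) ** 2 for grp in subgroups for x in grp)
    / (g * (n - 1)))

z_lt = (USL - mean) / sigma_lt      # actual capability, uses the mean
z_st = (USL - target) / sigma_st    # entitlement, uses the target
z_shift = z_st - z_lt
```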
First Edition
185
Zshift
[Figure: plot of ZShift (y-axis, 0.5 to 1.5) against ZST (x-axis, 1 to 4), divided into quadrants labelled Poor Control and Poor Technology]
The graph helps identify the issue in your process. Once you baseline your process and understand its short-term capability and shift, you can determine whether the process needs better controls, improved technology, or both. A typical DMAIC project aims at improving controls by eliminating special cause variation; DFSS projects, on the other hand, are run to improve technology so that the process can reach a higher capability.
First Edition
186
Summary LT vs. ST
LONG-TERM CAPABILITY (ZLT)
- Actual current process capability
- Improved by improving control (reducing shift) and technology
- "Six sigma" means ZLT = 4.5 with the assumed shift of 1.5 sigma; otherwise ZLT = 6 − (shift)
SHORT-TERM CAPABILITY (ZST)
- Best process performance: the entitlement
- Improved by improving technology, as it is the inherent capability of the current process
- "Six sigma" means ZST = 6
ZST is based on the analysis of subgroups. If special cause variation is present along with common cause variation, ZST will not reflect the correct (best) picture of the present technology. A small ZShift does not necessarily indicate good control; it may only indicate consistency among the subgroups, or a need to revisit the subgroups. However, a ZShift greater than 1.5 is a definite indication of a control problem.
Analyze Phase
Step A.1: Baseline Process
Objective: process capability for the project Y
Activities: study data for shape, stability and normality; capability analysis; short-term vs. long-term sigma; understanding shift, common cause variation and special cause variation
Tools: descriptive statistics, run chart, capability pack

Step A.2: Performance Objective
Objective: set the improvement goal for the project Y
Activities: concept of benchmarking; benchmarking types; benchmarking as a process
Tools: benchmarking

Step A.3: Identify Drivers of Variation
Objective: list all statistically significant Xs
Activities: Root Cause Analysis (RCA); process map analysis; graphical analysis; statistical analysis
Tools: 1-Sample t-test, 2-Sample t-test, One-way ANOVA, Mood's Median, Homogeneity of Variance, Simple Linear Regression, Correlation/Scatter Diagrams, Chi-Square Test of Independence, Chi-Square Test for Goodness of Fit, Pareto, Cause and Effect diagram, VA/NVA analysis
Benchmarking: one can target the best in the industry. Learning-curve-based: one could also use a multi-level goal in terms of sigma for the process metric.
Benchmark: world-class performance
Z Short Term: the best performance by the process with current technology
Baseline Process Sigma (Z Long Term): the sustained long-term sigma for the process
The benchmark is the best in the industry; the process baseline tells where the process stands against the industry.
Benchmarking
Definition:
An improvement process in which a company measures its performance against that of best-in-class companies, determines how those companies achieved their performance levels, and uses the information to improve its own performance. Subjects that can be benchmarked include strategies, operations, processes and procedures.
- Compare process performance against the industry's best
- Understand market competition
- Establish improvement targets for your process
Benchmarking is a continuous process which gives insight into where your competitors stand and how they achieve those levels of service excellence. It also provides a platform for best-practice sharing across the industry. The knowledge derived from benchmarking can be used to improve services, products, support functions and systems, and it provides a target for process capability improvement. In the current market scenario, where information flows seamlessly and access is universal, benchmarking is definitely a tool that facilitates the achievement of service excellence.
Types Of Benchmarking
Strategic Benchmarking: used where organizations seek to improve their overall performance by examining long-term strategies and approaches. It involves high-level aspects such as core competencies and developing new products and services.
Performance/Competitive Benchmarking: used where organizations consider their position in relation to the performance characteristics of key products and services. The focus is usually the relevant segment of the market.
Process Benchmarking: used when the focus is on improving specific critical processes and operations. Benchmarking partners are sought from best-practice organizations that perform similar work or deliver similar services. It helps drive the short-term goals of the company.
Functional Benchmarking: used when organizations benchmark against market players from different business sectors to find ways of improving similar functions or work processes. This sort of benchmarking can lead to innovation and dramatic improvements.
Types Of Benchmarking
Internal Benchmarking: involves seeking information from within the same organization, for example from business units located in different areas. The main advantages are that access to sensitive data and information is easier, standardized data is often readily available, and usually less time and fewer resources are needed.
External Benchmarking: involves seeking outside organizations that are known to be best in class. It provides opportunities to learn from those at the leading edge.
International Benchmarking: used where partners are sought from other countries, because the best practitioners are located elsewhere in the world and/or there are too few benchmarking partners within the same country to produce valid results.
Benchmarking Plan
Primarily, the following are the things to consider in your benchmarking plan:
Focus of Study
- Current assessment of the area requiring improvement
- Selection of the process for benchmarking
- Description of the current process
- Purpose of the study
- Outline of areas/issues for questioning
Benchmarking Partners
- Rationale for selection
- Initial contact
- Development of detailed questions
Benchmarking Plan
Site Visit
- Confirm and arrange the site visit
- Develop a briefing package
- Finalize and communicate plans
- Conduct the visit
Post Visit
- Points of clarification / follow-up
- Review all data
- Initial report sent to partners for confirmation
[Figure: problem-solving flow: Practical Problem → Statistical Problem → Statistical Solution → Practical Solution → Control Plan]
8. Hypothesis Tests
The Cause and Effect diagram is used in the Measure phase to identify segmentation factors. It is also used to prioritize the Xs which affect the project Y.
The Cause and Effect diagram is a team tool, so it is important to have members on your team who understand the process well.
[Figure: Cause and Effect (fishbone) diagram skeleton with branches such as Environment, Methods and Machines]
The diagram is an example of the structure which, once filled in, gives a complete picture of the problem and its causes.
Prioritization Matrix
The matrix only prioritizes; the output still needs to be ratified with data and an objective outlook. A few prioritization matrices which can be used are:
- Criterion vs. criterion
- QFD
- Control/Impact
Why use it ?
To narrow down systematically to critical Xs by weighing each option w.r.t. its importance and effect on Y
How does it help?
- Helps the Six Sigma team focus on the priority Xs
- Systemizes the approach to prioritizing, as the Xs are weighed against each other with respect to their effect on Y
- Quickly surfaces critical Xs from disagreements within the team
Items in Quadrant 1 are "must do". Items in Quadrant 2 should be dropped, as they complicate processes without significant impact. Items in Quadrant 3 should be implemented only if the project Y does not show the necessary improvement after implementing the items from Quadrant 1. Items in Quadrant 4 are not to be done.
Pareto Chart
Use the tool to filter the vital few from the trivial many Xs affecting the process Y. A Pareto chart is very handy when:
- You want to prioritize which causes to eliminate first
- You want to display information objectively to others
The 80:20 rule originated from Vilfredo Pareto, an Italian economist who studied the distribution of wealth in a variety of countries around 1900. He discovered a common phenomenon: about 80% of the wealth in most countries was controlled by a consistent minority -- about 20% of the people. His observation eventually became known as either the "80:20 rule" or "Pareto's Principle".
While making a Pareto chart, the defects are grouped by different segments, depending on what makes sense to the business. A few example segmentations:
- By cost
- By financial defects
- By risk
- By compliance/legal errors
Defect counts by process:

Defect Category         Process A  Process B  Process C  Process D  Process E
Soft Skills Issue             585        821        149        946        415
Poor Knowledge               3949       5408        994       6307       3112
No Problem Resolution         390        548         99        631        277
The steps involved in making a Pareto chart:
1. Collect data by defect category, capturing:
- The types or categories of defects
- The frequency of each defect
For further analysis we might drill down on financial loss, risk, compliance defects and so on. This helps in building an approach towards the problem.
Pareto
Tool Utility: useful for identifying the factors/segments which contribute the majority of the effect.
How: Minitab > Quality Tools > Pareto Chart
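As a sketch of the Pareto calculation itself, using the defect counts from the earlier table summed across processes A to E:

```python
# Pareto computation: rank categories and accumulate percentages.
# Counts are the per-process figures from the table, summed.
defects = {
    "Poor Knowledge": 3949 + 5408 + 994 + 6307 + 3112,
    "Soft Skills Issue": 585 + 821 + 149 + 946 + 415,
    "No Problem Resolution": 390 + 548 + 99 + 631 + 277,
}

total = sum(defects.values())
ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0
pareto = []
for category, count in ranked:
    cumulative += count
    pareto.append((category, count, round(100 * cumulative / total, 1)))

for row in pareto:
    print(row)
# The top category alone accounts for roughly 80% of all defects
```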
Benefits:
- Identification of problems and improvement opportunities in the process
- Sense of urgency to improve, as the process is seen from the customer's perspective
Types of analysis:
- What does the customer feel? The moment of truth
- Which parts of the service is the customer ready to pay for? Value to customer
- Wait time in the process
Further, on the three key components of process map analysis: moments of truth help us focus on the customer, and this paradigm shift challenges the comfort level at which the process is operating. With this approach we study the customer-process interaction and the customer's touch points with the process, and improve the experience. Value analysis is the process of studying the value-add and non-value-add activities in the process: every activity is examined for whether it adds value for the customer. Wait time relates to the process workflow.
Value Analysis
Value-Added Work
Process steps which physically change the product/service and so are essential to the process. For every activity in the process, ask the question: "Is the customer willing to pay for this?"
Value-Enabling Work
Process steps which are non-essential but enable or support the value-adding tasks to be done better/faster.
Non-Value-Added Work
Process steps which are non-essential for the production and delivery of the product/service.
Notes on value-enabling work: value-enabling work is the category which confuses people the most. The distinguishing factor between non-value-add and value-enabling work comes from the customer's perspective: the customer is definitely not ready to pay for non-value-added work. Given the current condition of a process, we might not be able to take the value-enabling steps out; they aid in the delivery of products and services.
Indicators of non-value-add in a process:
- Approval process involves multiple departments
- Too many supervisors
- Multiple reporting lines
The typical non-value-add is rework: a process step repeated, usually one which takes you back in the process. E.g., a defect caught in a downstream step being sent back to an upstream step.
Common non-value-add categories:
- Inspection (quality check)
- Failure (internal/external)
- Wait/delay
- Setup/initialization
- Inventory movement
Process failures happen in two ways, internal and external. Internal refers to a defect found during processing, caused by an upstream step and then corrected, e.g., rework; external failures are defects reported by the customer. Inspection (quality check) is an internal process step dedicated to checking products/services for defects. Wait/delay is product/service queuing in front of a process step; it is inventory building up, usually due to bottlenecks, e.g., backlogs. Setup/initialization covers the steps involved in preparing a subsequent step, e.g., changing the settings of a machine in a manufacturing plant. Inventory movement is the physical movement of inventory from one processing point to another.
[Figure: process timeline decomposing cycle time into processing times (T1, T2, ...) and wait times]
While analyzing the workflow of a process, one needs to go down to the product/service unit level and track the flow of that unit. This is a very detailed process map, with a time allocated to each small step. Each step is categorized as value-add, non-value-add or value-enabling.
Concepts of time allocation in a process:
- Cycle Time: total time from the start point of the process to the end point. It is important to look at this metric from the customer's point of view; e.g., in a call centre the average wait time for a call might be 60 seconds, but the customer might experience a longer wait due to the IVR, which we don't capture.
- Process Time: the actual time spent on a unit of service or product as it goes through the whole process (note: this also includes steps which are non-value-add).
- Wait/Delay Time: total idle time spent by a product or service while going through the process steps.
Process disconnects more often than not lead to delay and wait time in the process; they also lead to rework or resource under-utilization. Most disconnects arise for the following reasons:
- Unclear work responsibility: a process step is not owned by a particular person. Processing in such an environment happens in a very ad hoc manner, leading to delay.
- Unclear processing requirements: operational definitions for the process step do not exist. This can lead to rework due to defects caught in later process steps.
- Redundancies: one process step is duplicated among two or more processors, which happens when one processor is unaware of the other's work.
- Tricky handoffs: work is transferred from one person or step to another without any operational definitions in place, leading to delays or rework.
- Conflict in job goals: job goals overlap, or do not cover the complete processing involved, leading to conflict or incomplete work.
[Flowchart: complete data entry and forward to the audit team → is the information entered correct? → if not, procure missing information from the agent and route to the resolution team → create a policy package for issued policies and a letter for declined policies → apply postage and mail to the agent using SPS → the agent receives the letter/package and notifies the client]
Cycle time is the total time taken from the point at which the customer requests a good or service until the good or service is delivered to the customer. The components of cycle time are processing time, inventory movement time, inspection time and wait time. The cycle time metric should always be viewed with caution: we often look at only one part of the process, which might be working at its best, while another process step has a lot of delays. The customer always experiences the total cycle time.
Example cycle-time matrix (18 process steps, each marked as value add, value enabling, rework, transportation, inventory/queue, audit/inspection or prep/setup):

Process Step      1   2   3   4    5   6   7   8   9  10  11  12  13  14  15  16   17  18
Time Taken (min)  2   2  10  21  120   3   4   5   3   7   9  12   1   2   3  21  240  10

Total time: 475 min (100%); value add: 4 steps, 24 min (5%)

Usually, the majority of the time is spent in non-value-add or value-enabling work.
Value analysis breaks the process into small steps categorized into value-add and non-value-add. When we put the cycle time analysis against the value analysis, it gives tremendous insight into improvement opportunities: it tells exactly where we spend the most time in the process and whether or not it is value-added work. It is an eye-opening exercise; we might find that non-value-added work accounts for the maximum share of the cycle time. Note: where no data is available for a process step, it is better to estimate the time than to skip the step.
Conclusion of the example: 4 steps (<25% of the steps) provide 100% of the value and take 24 minutes, or 5% of the total cycle time; 14 steps (>75% of the steps) are non-value-added and consume 95% of the cycle time.
How to interpret the matrix:
- Look for the longest cycle-time component. Is it a value-added step? If not, can you still reduce its time?
- Look for the most frequent category of non-value-add steps in your process; they might be driven by things outside the process.
- Look at the overall breakup of cycle time into value add, value enabling and non-value add. This gives an overview of improvement opportunities.
Histogram
Tool Utility: study variation in a process. Gives a visual display of the data distribution as a bar (frequency) graph. Useful for determining whether the distribution is centered on the target, and how large the variation is against the customer specification limits.
How: Minitab. A histogram can also be made in an Excel sheet:
- Create a frequency table, with or without defined classes (e.g., ages 10-20, or individual values 10, 11, etc.)
- Plot the frequency against the individual X values or the classes on the X-axis.
[Figure: example histogram of Test Grades (x-axis: grades 55-100, y-axis: # of Students)]
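The frequency-table approach described above can be sketched in code; the grades and the class width here are made up for illustration:

```python
# Build a frequency table with fixed-width classes (bins), then print
# a crude text histogram - the same idea as the Excel approach above.
grades = [62, 71, 68, 75, 80, 77, 73, 69, 85, 90, 66, 74, 78, 82, 71, 76]

bin_width = 10
bins = {}  # class lower bound -> frequency
for g in grades:
    lower = (g // bin_width) * bin_width
    bins[lower] = bins.get(lower, 0) + 1

for lower in sorted(bins):
    print(f"{lower}-{lower + bin_width - 1}: {'#' * bins[lower]}")
```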
Histogram Minitab
Click on Graph > Histogram (Minitab file: DescriptiveStats.mtw)
1. Click on Simple
2. Click OK
Histogram Output
A histogram is a graphical representation of the distribution of the plotted data. It helps in understanding, for any given population:
- Shape: look at the distribution of the bars. Are there multiple modes? This may indicate that the data is a mix of two or more different populations.
- Center: where is the distribution centered? Is it off the process target?
- Spread: how spread out are the bars? Is the spread large compared to the requirement (check against the specification limits)? Note that spread has nothing to do with normality of the data: data can be normal yet have a very large spread compared to the customer's specification limits.
The options at the bottom of the histogram input window let you add the specification limits.
Dot Plot
Tool Utility: another tool to study variation in a process; also used for a quick comparison of the variation in two or more groups. Used as part of the descriptive statistical analysis in the initial stages.
How: in Minitab, Graph > Dotplot; click Simple, click OK; double-click C1, click OK.
A dot plot can be plotted from Graph > Dotplot. In Minitab Release 14, the Dotplot menu gives multiple options for single Y, multiple Y, single group, multiple group, and their combinations.
A dot plot plotted for multiple operators (groups) gives a comparative visual representation of their variation: center and spread.
Box Plot
The box shows the spread of the data and is a visual used for comparing two or more data groups.
How: Minitab > Graph > Boxplot
The lower whisker extends to the minimum observation that falls within the lower limit = Q1 − 1.5 (Q3 − Q1); correspondingly, the upper whisker extends to the maximum observation within the upper limit = Q3 + 1.5 (Q3 − Q1).
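A sketch of the whisker-limit rule. Note that quartile conventions differ between tools (Minitab, Excel, Python all interpolate slightly differently); this example uses Python's statistics.quantiles with its default exclusive method, and the data values are made up:

```python
import statistics

# Made-up data; 35 is placed as a likely outlier.
data = [12, 14, 14, 15, 16, 16, 17, 18, 19, 35]

q1, _, q3 = statistics.quantiles(data, n=4)  # exclusive method by default
iqr = q3 - q1
lower_limit = q1 - 1.5 * iqr
upper_limit = q3 + 1.5 * iqr

# Points beyond the whisker limits are flagged as outliers
outliers = [x for x in data if x < lower_limit or x > upper_limit]
print(f"Q1={q1}, Q3={q3}, limits=({lower_limit}, {upper_limit}), outliers={outliers}")
```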
Choose One Y With Groups, Click Ok. Use C6 for Data, C7 for Subscripts Click ok
[Figure: two illustrations comparing the current process with the desired process against LSL, USL and the target T: one reducing spread, the other correcting centering]
While working on Six Sigma projects we come across two types of problems with data: variation (spread) and centering. Though working on one problem has some impact on the other, more often than not we target only one problem at a time. In the Analyze phase we run statistical tests to determine which Xs have an effect on the Y. These tests may test for variance or for the mean, depending on our Y metric.
A Statistical Hypothesis
Definition of Hypothesis: Statement about the parameters of the Population
In hypothesis testing there are two hypotheses of interest. The null hypothesis (H0) The alternative hypothesis (HA)
Hypothesis testing differentiates between sampling error (precision) and true process differences: are these samples from the same distribution, or are they truly from different processes?
Hypothesis Testing
In Hypothesis Testing one has to make a decision:
- State your null and alternate hypotheses
- Choose a statistic (called the test statistic)
- Divide the range of possible values of the test statistic into two parts:
  - The Acceptance Region: the range of values of the test statistic that indicate the null hypothesis is true
  - The Critical Region: the range of values of the test statistic that indicate the null hypothesis is false
Nature of Ho and Ha
Null Hypothesis (Ho):
- Always represents the status quo: no difference
- Held true unless proven otherwise
- Represented by =, ≥ or ≤
Alternative Hypothesis (Ha):
- The conclusion we are trying to draw from the data
- Signs used in Minitab: ≠, < or >
Hypotheses are statements about population parameters. Relating to the previous example about the heights of people from two different countries, we could state:
Ho: μB ≤ μA
Ha: μB > μA
The statement is about the population means, not the sample means.
Decision Error
Based on the evidence, there are four possible outcomes of the decision we make:

Verdict \ Truth    Ho (Innocent)                    Ha (Guilty)
Ho (Set Free)      Correct (innocent, set free)     Type II Error (guilty, set free)
Ha (Jailed)        Type I Error (innocent, jailed)  Correct (guilty, jailed)
1 − α = probability of correctly accepting the null hypothesis (detecting no change when there is none).
1 − β = probability of correctly accepting the alternative hypothesis (detecting a change when there is one).
It is not possible to commit a Type I and a Type II decision error simultaneously.
The symbol α is called the level of significance. Usually α is set at .05, which means we have 95% (1 − α) confidence in accepting the null hypothesis. The acceptance region and the critical region are chosen from underneath the sampling distribution of the test statistic when Ho is true; the critical region lies in the tails of that distribution.
Examples of hypothesis pairs:
2. Ho: σ² = constant    Ha: σ² ≠ constant
4. Ho: μ1 ≤ μ2    Ha: μ1 > μ2
5. Ho: σ1² = σ2²    Ha: σ1² ≠ σ2²
[Figure: sampling distributions centered at μ = 10 over a scale of roughly 6 to 16. For a one-sided test the acceptance region (1 − α) sits to one side, with the critical region α in a single tail; for a two-sided test the acceptance region (1 − α) is flanked by critical regions of α/2 in each tail.]
The symbols used for two-sided tests are = and ≠. The symbols used for one-sided tests are ≤, ≥, < and >.
The P-Value
Alpha (α) denotes the probability of making a Type I error. The p-value is the probability on which we base our judgment of the hypothesis: a p-value less than α (typically 0.05) means we reject the null hypothesis and accept the alternate hypothesis.
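A sketch of this decision rule using scipy's one-sample t-test; the sample values and the hypothesized mean of 10 are assumptions for illustration:

```python
from scipy import stats

# Hypothetical sample of call handle times (minutes); we test
# Ho: mu = 10 against Ha: mu != 10 at alpha = 0.05.
sample = [11.2, 10.8, 11.5, 10.9, 11.1, 11.4, 10.7, 11.3]
alpha = 0.05

t_stat, p_value = stats.ttest_1samp(sample, popmean=10)

# Reject Ho only when the p-value falls below the significance level
if p_value < alpha:
    decision = "reject Ho"
else:
    decision = "fail to reject Ho"
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, decision: {decision}")
```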
[Figure: test-selection roadmap branching on whether Y is Normal or Non-Normal]
Z = (X − μ) / σ
df = n − 1
While working on a project, if our metric is related to controlling or decreasing spread, we might be interested in whether a factor X has an impact on the variance of Y.
HOV - Minitab
If you wish to study variation across two or more groups, and you have established that each group is stable and normally distributed, perform the Homogeneity of Variance test. Minitab file: HOV.mtw
HOV - Input
Ho: σ1² = σ2²
Ha: σ1² ≠ σ2²
α = .05
In the output window, the graph on the left side represents the confidence interval around the standard deviation of each group; on the right-hand side we get the p-values for the tests. The test results are:
F-Test: reported only when there are two groups and normal data.
Bartlett's Test: replaces the F-test when the data is normal and more than two groups are under evaluation. Do not use this result when the data is non-normal, as Bartlett's test is not robust to non-normality.
Levene's Test: use this result when the data is continuous but not necessarily normal.
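Outside Minitab, the same tests are available in scipy; a sketch with made-up groups, where group B deliberately has a larger spread:

```python
from scipy import stats

# Hypothetical processing times for three groups.
group_a = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
group_b = [12.5, 11.2, 13.1, 10.9, 13.4, 11.6]  # wider spread
group_c = [12.0, 12.3, 11.7, 12.1, 12.2, 11.9]

# Bartlett's test: assumes normal data (not robust to non-normality)
bart_stat, bart_p = stats.bartlett(group_a, group_b, group_c)

# Levene's test: continuous data, normality not required
lev_stat, lev_p = stats.levene(group_a, group_b, group_c)

print(f"Bartlett p = {bart_p:.4f}, Levene p = {lev_p:.4f}")
# A small p-value (< 0.05) indicates unequal variances
```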
2 Sample t test
2 sample t - Input
Check this box if HOV indicates that there is no statistically significant difference in variation among the groups being tested.
Run the Homogeneity of Variance test before you choose to assume equal variances in the input window.
2 Sample t -Output
Two-Sample T-Test and CI: Agent A, Agent B

Two-sample T for Agent A vs Agent B

           N   Mean  StDev  SE Mean
Agent A   40  70.20   2.19     0.35
Agent B   40  49.58   2.85     0.45

Difference = mu (Agent A) - mu (Agent B)
Estimate for difference: 20.6250
95% CI for difference: (19.4920, 21.7580)
T-Test of difference = 0 (vs not =): T-Value = 36.28  P-Value = 0.000  DF = 73

The p-value indicates that the two agents are statistically different in their performance.
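A comparable test can be sketched in scipy with hypothetical agent scores (not the Minitab data above):

```python
from scipy import stats

# Hypothetical per-call scores for two agents.
agent_a = [70.1, 72.3, 69.8, 71.0, 70.5, 68.9, 71.8, 70.2]
agent_b = [49.9, 51.2, 48.7, 50.5, 49.1, 50.8, 48.3, 49.6]

# equal_var=False gives Welch's t-test; set it to True only after
# HOV indicates no significant difference in variation among groups.
t_stat, p_value = stats.ttest_ind(agent_a, agent_b, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.6f}")
# p < 0.05 means the two agents differ statistically in performance
```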
The hypothesis used is:
Ho: μ1 = μ2 = μ3 = μ4
Ha: at least one mean is different from the others
Since ANOVA uses a pooled standard deviation, it is essential to run HOV before this test. If HOV fails, one may use the Two-Sample T-test instead.
The Two-Sample T-test and One-Way ANOVA test the same thing; the difference is that the Two-Sample T-test can analyze only two samples at a time, so running it for multiple groups is a logistical nightmare. The Two-Sample T-test does, however, give us the flexibility of not assuming equal variances.
Analysis of variance (ANOVA) is similar to regression in that it is used to investigate and model the relationship between a response variable and one or more independent variables. ANOVA differs from regression analysis in two ways: the independent variables are categorical, and it makes no assumption about the nature of the relationship between Y and the independent variables.
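A one-way ANOVA can be sketched in scipy with hypothetical per-agent data:

```python
from scipy import stats

# Hypothetical queries-resolved counts for four agents.
agent_a = [20, 19, 21, 20, 18, 22]
agent_b = [30, 29, 31, 28, 30, 32]
agent_c = [40, 39, 41, 38, 42, 40]
agent_d = [29, 28, 30, 27, 29, 28]

# Ho: all agent means are equal; Ha: at least one differs.
f_stat, p_value = stats.f_oneway(agent_a, agent_b, agent_c, agent_d)

print(f"F = {f_stat:.2f}, p = {p_value:.6f}")
```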
One-way ANOVA: Average Queries Resolved versus Agent

Source   DF       SS      MS       F      P
Agent     3   9998.6  3332.9  287.01  0.000
Error   196   2275.9    11.6
Total   199  12274.5

S = 3.408   R-Sq = 81.46%   R-Sq(adj) = 81.17%

Individual 95% CIs for mean, based on pooled StDev:

Level     N    Mean  StDev
Agent A  50  19.940  2.234
Agent B  50  29.908  2.779
Agent C  50  39.874  4.049
Agent D  50  28.596  4.164

Pooled StDev = 3.408

An ANOVA output gives us:
1. The p-value for significance
2. Confidence bands to understand which groups are different
Non-parametric Tests
What if I don't have normal data?
When we don't have normal data we use non-parametric tests. Non-parametric means that no assumption is made about the distribution of the data.
- Centre tendency: Mood's Median Test
- Spread (variance): HOV, using Levene's test
- Capability: use DPMO to calculate sigma
It is common to have data which is not normal, but care should be taken before reaching that conclusion: check whether the distribution is a mix of multiple populations, and check for outliers.
Normal data gives more flexibility in terms of the number of tools available for analysis; however, sufficient tools are available for non-normal data.
Non-parametric Tests
The term nonparametric means that no assumption is made about the distribution of the data; non-parametric tests are used when we have no distributional assumption to work with. Mood's Median Test is a nonparametric test frequently used to compare the medians of two or more groups. One-Way ANOVA is the counterpart of this test for normal data.
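Mood's Median Test is available in SciPy as `median_test`; the three team samples below are hypothetical handle times, not the MoodsMedian course file:

```python
import numpy as np
from scipy import stats

# Hypothetical call-handle times for three teams
rng = np.random.default_rng(3)
team_a = rng.normal(280, 8, 15)
team_b = rng.normal(263, 6, 15)
team_c = rng.normal(296, 7, 15)

# Mood's median test: Ho: all group medians are equal
stat, p, grand_median, table = stats.median_test(team_a, team_b, team_c)
print(f"chi2 = {stat:.2f}, p = {p:.4f}, grand median = {grand_median:.1f}")
print(table)  # counts above/below the grand median, per team
```

The returned table of above/below counts is exactly the summary on which the chi-square statistic is computed, as the Minitab output below illustrates.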
Care is required before concluding that data is non-normal, primarily because more often than not data is supposed to be normal. However, we still encounter situations with genuinely non-normal data. Use the non-normal tests!
E.g. File: MoodsMedian Stat > Nonparametrics > Moods Median Test Response = Average Call Handle Time Factor = Team Answer the following questions: 1. Are the medians different? (interpret the p-value) 2. Which medians are different? (interpret the confidence intervals)
Factors or Xs
Mood Median Test output (Average Call Handle Time versus Team):

Team  N<=  N>  Median  Q3-Q1
A       7   8   280.0   14.0
B      15   0   263.0   10.0
C       1  14   296.0   13.0

(The session window also shows the P value and individual 95.0% confidence intervals around each group median.)
Mood's Median Test output gives a table showing, for each group, the number of data items below and above the pooled median. It then performs a simple Chi-square test on this summary table and reports the P value; if you were to manually run a Chi-square test on the table, you would get the same result. On the right side of the table we get a visual confidence band around each group median. When the test is significant, one of the group confidence bands sits far from the others.
Normality Test
Ho: Data is normally distributed
Ha: Data is not normally distributed

Homogeneity of variance
Ho: σ1² = σ2² = ... = σn²
Ha: At least one σi² is different from the others
Continuous Y and X
Continuous Y and Continuous X. Correlation: tests the linear relationship between two variables. Regression: defines the linear relationship between a dependent and an independent variable.
Correlation assumes that both variables can change while regression (typically) assumes that the independent variable is measured without error. A significant correlation means that 2 variables tend to go up or down together (or change in opposite directions), for whatever reason. A significant regression means that the dependent variable can be predicted by the independent variable, again, for whatever reason. Causality is not tested by either method. Though theoretically different, both tests usually give the same answer (significant or not).
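Both analyses can be sketched side by side in SciPy; the paired scores below are hypothetical stand-ins for the Grades.MTW data:

```python
from scipy import stats

# Hypothetical paired scores
x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.2, 15.9, 18.1]

# Correlation: do the two variables move together?
r, p_corr = stats.pearsonr(x, y)

# Regression: predict the dependent variable from the independent one
fit = stats.linregress(x, y)
print(f"r = {r:.3f} (p = {p_corr:.4f})")
print(f"y = {fit.intercept:.2f} + {fit.slope:.2f} x, R-sq = {fit.rvalue ** 2:.3f}")
```

As the text notes, both tests usually agree on significance; the regression additionally yields the fitted equation.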
Correlation - Minitab
E.g. File: Grades.MTW Stat > Basic Stats > Correlation
Correlation Inputs/Outputs
Regression Minitab
Regression Analysis: Score1 versus Score2

The regression equation is
Score1 = -4.67 + 4.40 Score2

Predictor     Coef  SE Coef      T      P
Constant   -4.6674   0.8572  -5.44  0.001
Score2      4.3975   0.3514  12.51  0.000

S = 0.572711   R-Sq = 95.7%   R-Sq(adj) = 95.1%

Analysis of Variance
Source          DF      SS      MS       F      P
Regression       1  51.353  51.353  156.56  0.000
Residual Error   7   2.296   0.328
Total            8  53.649
χ² = Σ (fo - fe)² / fe, summed over all cells,

where fo is the observed frequency and fe is the expected frequency for each cell.
The critical value depends on two factors: α, the chosen significance level, and df, the degrees of freedom of the data.
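The critical value can be looked up programmatically instead of from the printed table; for example, for α = 0.05 and df = 3:

```python
from scipy.stats import chi2

# Upper-tail Chi-square critical value for alpha = 0.05, df = 3
alpha, df = 0.05, 3
critical = chi2.ppf(1 - alpha, df)
print(round(critical, 3))  # 7.815, matching the table entry for df = 3
```

If the computed χ² statistic exceeds this critical value, the null hypothesis is rejected at the chosen α.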
Chi-Square Test
Tool Utility:
The Chi-square test enables us to test whether two or more populations can be considered equal.
- Goodness of Fit: to test whether data follows a given distribution.
- Test of Independence: to test whether two or more proportions are associated (belong to a similar distribution).
Chi-Square Statistic
The Chi-square statistic is calculated as:

χ² = Σ (fo - fe)² / fe

where fo and fe are the observed and expected frequencies for each cell.
Degrees Of Freedom
The chi-square test uses the degrees of freedom of the data under scrutiny as a parameter when calculating the Chi-square critical value. For any statistic being computed, the degrees of freedom of a sample are the number of values in the sample which are free to vary without changing the statistic being measured.
Chi-Square Distribution
(Table of Chi-square critical values for df = 1 to 100 at alpha levels .250, .100, .050, .025, .010, .005 and .001; the extracted table is garbled and not reproduced here.)
Chi-Square Distribution
(Continuation of the Chi-square critical value table, covering lower-tail alpha levels .995 down to .500 for df = 1 to 100; the extracted table is garbled and not reproduced here.)
Test of Independence
This test is used to check the independence of two or more proportions. The hypotheses used are: Ho: all proportions are equal; Ha: at least one proportion is different from the rest. Again, as a caution, cells with expected frequencies less than 5 should not be used in the test. To run the test we use contingency tables, whose rows and columns define the classification of the data. Following is an example of a contingency table:
             Tele-sales Team A   Tele-sales Team B   Total
Leads             fo = 37             fo = 113         150
Dead Ends         fo = 63             fo = 167         230
Total                100                  280           380
Degrees of Freedom:
Degrees of freedom for the test are calculated using the formula (M-1)*(N-1), where M and N are the number of columns and rows, respectively.
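The whole test, including expected frequencies and degrees of freedom, can be run in one call on the tele-sales table above:

```python
from scipy.stats import chi2_contingency

# Contingency table: rows = Leads / Dead Ends, columns = Team A / Team B
observed = [[37, 113],
            [63, 167]]

chi2_stat, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2_stat:.3f}, p = {p:.3f}, df = {dof}")
# df = (2-1)*(2-1) = 1; a large p fails to reject Ho (proportions equal)
```

For this table the p value is well above 0.05, so there is no evidence that lead generation depends on the team.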
Test of Independence
Minitab Input And Results
Ho: No correlation between team and leads generated.
Ha: Correlation between team and leads generated.
Identify the vital Xs; develop the transfer function; experiment with possibilities and develop the solution.
Pilot the solution with small-scale tests in a real business environment; run statistical tests on the results to confirm the solution.
Improve Phase
DEFINE > MEASURE > ANALYZE > IMPROVE > CONTROL
Step I.1: Screen for Vital Xs. Deliverables: vital Xs. Activities: identify vital Xs.
Step I.2: Study Interaction between Xs. Deliverables: interactions between Xs, transfer function. Activities: develop a transfer function.
Step I.3: Define Improved Process. Deliverables: pilot solution.
Characterization of Xs
Vital Xs, or key elements which impact the project Y, have the following characteristics: they are independent variables; they can be set at different levels to manipulate the response (dependent) variable Y; variation in an independent variable contributes to variation in Y; different Xs have different impacts on Y depending on the transfer function; they may be continuous and/or discrete; and they might not be measurable (e.g. alternative work-flow sequences).
Screening of Xs
Screening of Xs is done for various reasons: to focus on the few factors which make the most impact, and to ensure the best utilization of resources available for the project. Tools used: FMEA; Pareto; fishbone analysis; Screening DOE in advanced cases when Xs have interactions.
Using Pareto
Project metric: errors in the application form. The Analyze phase gave us seven factors which are significant. A Pareto chart can now be used to prioritize or further screen these factors. The output clearly shows that the Name, DOB, and Country fields contribute around 80% of the errors, so working on the remaining factors might not be as beneficial.
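The "vital few" cut of a Pareto screen can be sketched in a few lines; the error counts per field below are hypothetical:

```python
# Hypothetical error counts per application-form field
errors = {"Name": 120, "DOB": 95, "Country": 70,
          "Phone": 25, "Email": 15, "Zip": 10, "State": 5}

total = sum(errors.values())
cumulative, vital_few = 0.0, []
# Walk the fields in descending order until ~80% of errors are covered
for field, count in sorted(errors.items(), key=lambda kv: -kv[1]):
    cumulative += count / total
    vital_few.append(field)
    if cumulative >= 0.80:
        break

print(vital_few)  # ['Name', 'DOB', 'Country']
```

With these counts the top three fields cross the 80% line, which is the same prioritization the Pareto chart gives visually.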
Using FMEA
When all the significant factors are available from Analyze step A3, one might use FMEA to evaluate the corresponding risk, i.e. the Risk Priority Number. E.g. on an insurance application form where premium calculation is based on age, the risk associated with an error in the DOB field would be greater than that associated with a wrong address, as it might result in a financial loss to the insurance company.
Practical Problem -> Statistical Problem -> Statistical Solution -> Practical Solution
Transfer Function
The goal of Improve is to develop a solution using the transfer function Y = f(X1, X2, X3, ..., Xn). The transfer function: relates the vital Xs to the project Y; predicts the effect and direction of changes in Y due to changes in the Xs; helps in fitting a solution which maximizes the effect on Y using all or a few of the Xs.
Transfer Function
The goal of Improve is to develop a solution using the transfer function Y = f(X1, X2, X3, ..., Xn). Tools that give a mathematical transfer function: DOE; Regression; GLM (General Linear Model).
Interaction Matrix
Using the Interaction Matrix: in the following example, training has a high and positive impact on call quality, while the length of the call has a low but negative effect on call quality. In the absence of a transfer function it is a very handy tool for gaining insight into the development of a possible solution.
(Figure: an interaction matrix plotting each X, e.g. Subject Knowledge Training and Finance Background, by impact on Y: high/low and positive/negative.)
Tolerancing
Tolerance means the allowed variation in a variable
Once we have a transfer function Y = F(X), we would like to know, for a given allowed variation in Y, what variation one would expect for the Xs.
This process of identifying variation limits for Xs using LSL and USL for Y is called Statistical Tolerancing
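For a linear transfer function the mapping is direct: the spec limits on Y are pushed back through Y = aX + b to give limits on X. A minimal sketch, with a hypothetical function and limits:

```python
def x_tolerance(lsl_y, usl_y, a, b):
    """Map the spec limits on Y back through Y = a*X + b to limits on X."""
    lo, hi = sorted(((lsl_y - b) / a, (usl_y - b) / a))
    return lo, hi

# Example: Y = 2X + 1 with Y allowed between 5 and 9 -> X between 2 and 4
print(x_tolerance(5, 9, a=2, b=1))  # (2.0, 4.0)
```

The `sorted` call handles a negative slope, where the larger Y limit maps to the smaller X limit.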
Concept of Tolerancing
Assume given transfer function Y= aX + b
(Figure: the line Y = aX + b plotted with the USL and LSL for Y projected through the line onto the X axis, showing how the tolerance for Y maps to a tolerance for X.)
The concept can be extrapolated to a transfer function involving two Xs by imagining three dimensions (three axes).
More on Tolerancing
Set tolerance based on customer VOC, e.g. specification limits defined by the customer. When multiple Xs are involved, set tolerances for all of them simultaneously, especially when interactions between Xs are involved. Adjust tolerance for measurement variation.
Solution Identification
This step may be broken into three sub-steps: ideas generation, solution design, and piloting the solution.
Ideas Generation
By this step we have established: Vital Xs, prioritized. Transfer function: either we have a mathematical transfer function, or we know how the Xs are related to Y, i.e. what we should do to improve Y (increase or decrease an X) and whether it will make more impact on Y than any other X. Tolerancing: based on the transfer function, we know how much variation we can allow for each of the Xs.
While a transfer function can tell us exactly the optimum settings for the Xs, in its absence one still has a fair idea of what needs to be done with the Xs in order to influence Y positively.
Ideas Generation
The project team and process experts can participate together in a team effort to generate ideas for the solution. The following may be used for generating ideas: Brainstorming, a team tool for generating lots of ideas; Problem Analogy, using an appropriate analogy for the current problem and trying to generate a solution from it; Best Practices, looking for similar processes and making a list of best practices which might be useful.
Solution Design
Selecting the best Ideas
All the ideas and best practices should then be ranked on the following factors, in this order of priority: impact on Y; resource requirement for implementation.
Select the ideas which would have maximum impact on Y and minimum resource requirement. A solution could be: a single solution, or a set of solutions addressing different Xs or supplementing each other.
Running a Pilot
Prepare a plan to execute the pilot. Do a risk assessment of the pilot using FMEA. Discuss and establish the test population, resources, location, duration and timing. Create a data collection plan.
Pilot the solution and collect data. Analyze the data and establish whether the solution meets expectations using statistical tests. Identify the resources required for full-scale implementation of the solution.
Improve Summary
Vital Xs: select the few which make a big impact on Y; tools used are Pareto, FMEA, and fishbone analysis. Transfer function: what one can do in the absence of a mathematical transfer function.
Solution Design
Control Phase
DEFINE > MEASURE > ANALYZE > IMPROVE > CONTROL
Step C.1: MSA on Xs. Deliverables: acceptable gage. Activities: refer to step M.3.
Step C.2: Improved Process Capability. Deliverables: process capability, pre/post data comparison. Activities: refer to steps A.1 and A.3.
Step C.3: Establish Control Plan. Deliverables: control plan. Activities: mistake proofing, risk management, statistical process control. Tools: FMEA and control charts.
MSA on Xs
Deliverable for this step: Validate measurement system for Xs Refer to step M3
Control charts are not a proactive tool the way mistake proofing is. They are useful when Xs cannot be mistake proofed and controlled within tolerance.
Control charts are used as tools for statistical process control. They tell us when Xs go out of control and surface the special cause performance.
Variation
Common Cause Variation is the natural or true variation of the process and is inherent in the process.
Special Cause Variation is usually of larger magnitude compared to common cause variation. It is due to special causes and happens only once in a while.
Differentiating Variation
Tampering with the process: precaution should be taken while studying variation, because treating common cause as special cause, or special cause as common cause, creates problems for the process. We usually end up either creating more variation in the process or failing to rectify the cause of variation.
Control Charts
A control chart: Is a time ordered plot of the data Plots the variation of the data in expected range also called control limits Identifies when there is a special cause present
Control charts use +/- 3 sigma control limits. One should calculate the control limits when the process is stable and then freeze them; future control charts should use the same limits. This keeps an eye on special causes, as it should, and also reveals whether the capability of the process is degrading.
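For an individuals (I-MR) chart, the +/- 3 sigma limits are commonly estimated from the average moving range (sigma ≈ MR-bar / 1.128, so 3 sigma ≈ 2.66 × MR-bar). A minimal sketch with made-up data:

```python
import numpy as np

def imr_limits(values):
    """I-chart center line and +/- 3 sigma control limits, with sigma
    estimated from the average moving range (3/d2 = 2.66 for n = 2)."""
    values = np.asarray(values, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(values)))
    center = values.mean()
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

lcl, cl, ucl = imr_limits([10, 12, 11, 13, 12])
print(f"LCL = {lcl:.2f}, CL = {cl:.2f}, UCL = {ucl:.2f}")
```

Freezing these limits once the process is stable, and plotting all future data against them, is exactly the practice described above.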
For any control chart, you can find the Minitab rules under the Tests button. You can select the rules you want Minitab to use, but you should select all eight tests. Note: only the Minitab rules should be used in the exam to evaluate special causes of variation.
Minitab Tests
Test 1: One point or more points more than 3s from the center line.
Test 2: Nine points in a row on the same side of the center line.
(Figures: example charts with zones A, B, C between the center line and the UCL/LCL, illustrating Tests 1 and 2.)
Test 1 is positive if there is a shift in the process mean, if there is an increase in the process standard deviation, or if there is a single aberration in the process such as a mistake in calculation, an error in measurement, a process breakdown, etc. Test 2 signals a shift in the process mean.
Minitab Tests
Test 3: Six points in a row, either increasing or decreasing
Test 4: Fourteen points in a row, alternating up and down.
(Figures: example charts illustrating Tests 3 and 4.)
Test 3 signals a drift in the process mean; the causes can include an improvement in skill or any kind of deterioration in the process. Test 4 signals a systematic effect produced by two different populations; the causes could be two different operators, two different processes, etc.
Minitab Tests
Test 5: Two out of three points greater than 2s on the same side
Test 6: Four out of five points greater than 1s on the same side
In the case of charts for variables, the first four tests should be augmented by Tests 5 and 6 when earlier warning is desired.
Minitab Tests
Test 7: Fifteen points in a row within 1s from center line on either side.
Test 8: Eight points in a row greater than 1s from centre line on either side
Tests 7 and 8 indicate stratification (observations in a subgroup have multiple sources with different means). Test 7 is positive when the observations in the subgroup always have multiple sources. Test 8 is positive when the subgroups are taken from one source at a time.
Specification limits: specification limits are defined by the customer and define the desired level of performance. They are not based on any statistic but on what the customer expects. Specification limits are used to derive the defect rate of the process and hence the capability of the process.
(Figure: Process A and Process B each plotted against their Lower SL and Upper SL.)
Process B is within its control limits but has unacceptable variation when evaluated against the customer specification limits.
Process States
The four possibilities for any process:
The State of Chaos: non-conforming and non-predictable; the non-conformity is also unstable; random changes to the process would not fix it; need to eliminate the effect of assignable causes.
The Brink of Chaos: conforming yet unpredictable; process output is influenced by assignable causes.
The Threshold State: non-conforming and predictable; periodic instability; stable but not able to fully deliver on customer standards.
The Ideal State: conforming and predictable; process control limits fall within specification limits; the process stays inherently stable over time.
Attribute Control Chart
Uses attribute or ordinal data like Pass/Fail, Good/Bad, Go/No-Go
Could be many characteristics per chart
Variable or Attribute
Variable data: rational subgroups?
  Yes: X-bar and R for subgroup sizes < 8; X-bar and S for larger subgroups
  No: I-MR
Attribute data (defect or defective): constant lot size?
  Yes: c chart for defects; np chart for defectives
  No: u chart for defects; p chart for defectives
Variable control charts with rational subgroups:
S chart: a plot of sample standard deviations over time
X-bar chart: a plot of the sample means over time
R chart: a plot of the sample ranges over time
X-bar and R: plots the X-bar chart and R chart in the same chart window
X-bar and S: plots the X-bar chart and S chart in the same chart window
Data Collection
Data should be in time order, as used for run charts.
Use rational subgrouping as discussed in step M.3; use the team and subject matter experts for rational subgrouping.
X Bar R Chart
Used for data with subgroup sizes less than 10; for subgroup sizes of 10 or more use X-bar S, which plots subgroup standard deviations instead of ranges.
(Figure: an X-bar and R chart; the range chart shows UCL = 2.835, R-bar = 1.341 and LCL = 0.)
I-MR-R/S (Between/Within)
Three charts and three perspectives for the process
(Figure: I-MR-R/S chart: an individuals chart of the subgroup means (X-bar = 599.548) with control limits based on the moving range of the means, reflecting the stability of the process location; a moving-range chart of the subgroup means (UCL = 1.493); and a subgroup range chart (R-bar = 1.341, LCL = 0).)
I-MR Chart
Used for data with no sub-grouping
(Figure: I-MR chart of individual observations (X-bar = 0.11025) with the moving range chart below (UCL = 0.00930).)
C Chart
Charts defects when the subgroup size is constant; based on the Poisson distribution. C-bar is calculated as total defects / total number of units.
(Figure: C chart of the data with C-bar = 2.55, UCL = 7.341, LCL = 0.)
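The c-chart limits follow directly from the Poisson assumption: UCL/LCL = c-bar +/- 3*sqrt(c-bar), with the LCL floored at zero. A sketch with hypothetical defect counts:

```python
import math

def c_chart_limits(defect_counts):
    """C chart: Poisson-based limits c-bar +/- 3*sqrt(c-bar), LCL floored at 0."""
    c_bar = sum(defect_counts) / len(defect_counts)
    spread = 3 * math.sqrt(c_bar)
    return max(0.0, c_bar - spread), c_bar, round(c_bar + spread, 3)

# Hypothetical per-sample defect counts
print(c_chart_limits([2, 3, 2, 3]))  # (0.0, 2.5, 7.243)
```

For small c-bar the lower limit is negative and therefore clipped to 0, which is why the chart above shows LCL = 0.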
u Chart
Charts defects when the subgroup size is not constant; based on the Poisson distribution. U-bar is calculated as total defects / total number of units.
(Figure: U chart of defects with U-bar = 0.0478, UCL = 0.1134, LCL = 0; one point is flagged out of control.)
np Chart
Charts defectives when the subgroup size is constant; based on the binomial distribution.
(Figure: NP chart of rejects; one point is flagged out of control.)
p Chart
Charts defectives when the subgroup size is not constant; based on the binomial distribution.
(Figure: P chart of rejects with P-bar = 0.0478, UCL = 0.1118, LCL = 0; one point is flagged out of control.)
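Because the subgroup size varies on a p chart, each subgroup gets its own limits, p-bar +/- 3*sqrt(p-bar*(1 - p-bar)/n). A sketch with hypothetical counts:

```python
import math

def p_chart_limits(defectives, sample_sizes):
    """P chart: binomial-based limits; each subgroup gets its own
    limits because the subgroup size n varies."""
    p_bar = sum(defectives) / sum(sample_sizes)
    limits = []
    for n in sample_sizes:
        spread = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - spread), p_bar + spread))
    return p_bar, limits

p_bar, limits = p_chart_limits([5, 10, 6], [100, 120, 90])
print(f"p-bar = {p_bar:.4f}")
```

The varying limits give the p chart its characteristic stepped UCL/LCL when subgroup sizes differ.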
A process control system ensures ongoing process control, which also means that the process keeps delivering the improved capability. Factors which make a process control system effective: clarity of requirements before making the control plan; clear communication to the involved parties; good training; buy-in from stakeholders, since their level of involvement and ownership can help.
Risk Management
Identify risk which can deteriorate the performance of the process Evaluate each risk for probability of happening and impact on the process Make action plan to limit or reduce the risk
Risk Management
Identify risks in the following categories: Technology related Financial Decision making Business
Mistake Proofing
A technique to eliminate errors Also known as Poka Yoke (Japanese) Proactive approach to stop errors from happening
Examples: the fuel reserve indicator in vehicles; the shape of a SIM card allows only one way to place it in a cell phone.
Mistake Proofing
Helps to limit Xs within the tolerance so that the process performance does not go out of control Identifies the loopholes and helps implementing proactive steps for warning or avoidance of situation which can lead to Xs stepping out of tolerance
Mistake Proofing
How to mistake proof: understand the origin of defects; identify the sources of errors resulting in defects; identify the process steps where mistake proofing can be done; redesign process steps so that they are resistant to errors and the opportunities to make errors are reduced.
Errors
Sources of errors: Process Variation Procedural incorrectness Measurement accuracy Human errors
One can use fishbone analysis to brainstorm and identify the sources of errors.
Control Plan
The control plan for a project should include the following: CTQ; input and output variables; specifications and tolerances; control measures and tools; monitoring and sampling plan.
Control Plan
Date Created: ____    Date Revised: ____
Measure | Data Collection | Control Tool | Reaction Plan | Sample Size | Frequency
Control - Summary
Statistical process control: eight tests for control charts; variable and attribute control charts; control limits vs. specification limits
Risk management and FMEA
Mistake proofing
Control plan: elements of a control plan
Additional details