S0377-2217(15)00550-0
10.1016/j.ejor.2015.06.036
EOR 13043
To appear in: European Journal of Operational Research
Received date: 8 February 2015
Revised date: 2 May 2015
Accepted date: 11 June 2015
Please cite this article as: Tariq Aldowaisan , Mustapha Nourelfath , Jawad Hassan , Six Sigma
Performance for Non-Normal Processes, European Journal of Operational Research (2015), doi:
10.1016/j.ejor.2015.06.036
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service
to our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and
all legal disclaimers that apply to the journal pertain.
ACCEPTED MANUSCRIPT
Accepted Paper for Publication in European Journal of Operational Research, June 2015
Highlights
We analyze Six Sigma performance for non-normal processes.
We examine changes in failure rates for exponential, Gamma and Weibull processes.
Higher quality improvement effort may be required for non-normal processes.
Reporting the sigma level as an indication of the quality can be misleading.
Wrong Six Sigma projects can be selected when systematically assuming normality.
Abstract
Six Sigma is a widely used method to improve processes in various industry sectors. The
target failure rate for Six Sigma projects is 3.4 parts per million or 2 parts per billion. In this
paper, we show that when a process is exponential, attaining such performances may require a
larger reduction in variation (i.e., greater quality-improvement effort). Additionally, identifying
whether the process data are of non-normal distribution is important to more accurately estimate
the effort required to improve the process. A key finding of this study is that, for a low k level,
the amount of variation reduction required to improve an exponentially distributed process is less
than that of a normally distributed process. On the other hand, for a higher k level, the reverse
scenario is the case. This study also analyses processes following Gamma and Weibull
distributions, and the results further support our concern that simply reporting the sigma level as
an indication of the quality of a product or process can be misleading. Two optimization models
are developed to illustrate the effect of underestimating the quality-improvement effort on the
optimal solution to minimize cost. In conclusion, the classical and widely used assumption of a
normally distributed process may lead to implementation of quality-improvement strategies or
the selection of Six Sigma projects that are based on erroneous solutions.
Keywords: Quality management, Six Sigma, non-normal distributions, optimization, project
selection.
1 Introduction
Since its launch in the 1980s, Six Sigma has generated great expectations and hype in the
business world, driven largely by reports of significant business gains; e.g., GE reported 11%
revenue and 13% earnings increases from 1996 to 1998, AlliedSignal had savings of $1.5 Billion
in 1998, and Harry and Schroeder claimed 10% net income improvement, a 20% margin
improvement, and a 10 to 30% capital reduction for each sigma improvement (Harry and
Schroeder, 1999). These business gains were based on management and technical methods for
improving processes with a theoretical aim of a failure rate of 3.4 parts per million (ppm) or 2
parts per billion (ppb), depending on certain assumptions. Important assumptions in Six Sigma
are that the process is normal and its specifications are two-sided. However, for service or transaction systems, which have become increasingly
predominant in various applications, even in manufacturing ones, the normality assumption
comes into question. For example, in supply chain management and goods production systems,
fill rates and the probability to meet specified customer service levels are not governed by normal
distributions (Nourelfath, 2011). Additionally, in project management where a project will more
likely finish late, the normality assumption does not hold; in particular, this is the case when the
number of activities is too small to assume a normal distribution based on the central limit
theorem. Another example is the procurement process of oil and gas companies, where it was
observed that cycle time data more closely resemble a Gamma distribution rather than a normal
distribution.
In a survey of the relevant literature, English and Taylor (1993) provide a good
study on the robustness of the normality assumption when the distribution is not normal.
Montgomery and Runger (2011) studied the error associated with non-normality in the estimation
of defects in parts per million; the authors recommended applying transformational functions to
the underlying non-normal variable until a normal transformed variable is found. A major
drawback of this trial-and-error approach is the lack of physical meaning in the resulting
performance methodology. Using the exponential distribution case, we first analyse the amount
of variation reduction required to improve the process. The efforts required to achieve the
performance goal of Six Sigma are then compared between the normal and exponentially
distributed cases. Then, we generalize our study to processes using Gamma and Weibull
distributions. Finally, two optimization models are developed: the goal of the first model is to
find the optimal trade-off between quality-improvement program costs and costs incurred as a
result of poor quality, and the second model addresses the optimal selection of processimprovement strategies. These optimization models analyse the impact of a poor estimation of
quality-improvement effort on optimal solutions to minimize costs.
It is common in the professional and academic communities of quality to set 3.4 ppm as
the goal of Six Sigma without due consideration of the underlying process distribution. This
could lead to erroneous estimation of the efforts and costs to achieve that goal when the process
is not normal, as often is the case in service applications. The contribution of this research is that
it provides guidelines to help managers to more accurately estimate the efforts required to
achieve the performance goal of Six Sigma, and analyse the robustness of proposed quality-improvement strategies.
The remainder of this paper is organized as follows: Section 2 reviews the related literature;
Section 3 analyses the Six Sigma performance methodology in exponential, Gamma and Weibull
distribution processes; Section 4 presents two optimization models to illustrate the effect of
inaccurate estimations of probability distributions; and Section 5 provides our concluding
remarks.
2 Literature Review
There are two main bodies of literature that are related to our research. The first is the
literature of Six Sigma implementation in service systems that are more often characterized by
non-normal distributions. The second concerns approaches to the evaluation of Six Sigma
performance without assuming normal probability distributions. Next, we discuss each of these two bodies of literature.
Several studies have been conducted in regard to the implementation of the Six Sigma
methodology in large manufacturing companies; nevertheless, as confirmed by many literature
reviews, relatively few studies have reported the successful implementation of Six Sigma in
service systems that, arguably, represent the life-blood of the modern economy (Aboelmaged,
2010; Stone, 2012). Behara et al. (1995) presented a case study to illustrate how the concept of
zero defects, as measured by Six Sigma, can be applied to the measurement of customer
satisfaction and to examine the impact of customer expectations on the company's strategies for
improving satisfaction. The information presented is based on actual studies conducted for a
high-tech manufacturing company. Chakraborty and Chuan (2012) suggested that the
implementation and impact of Six Sigma can be affected by contextual factors such as service
types. The authors argued that management support and team member support are primary
factors for success. Exploratory empirical evidence was provided through many case studies of
organisations. They include a hospital, a public service organisation, a consultancy service and a
hotel. The major findings of the authors include an understanding of the suitability of Six
Sigma implementation in service organisations. Additionally, Chakraborty and Chuan (2013)
highlighted that top management commitment and involvement are indeed important and
critical factors for success, and the continuity of Six Sigma very much depends on these factors.
The authors investigated the implementation of Six Sigma in service organizations through
questionnaire surveys. They found that Six Sigma implementation in service organizations is
limited across the globe. Service organizations in the USA and Europe are front-runners in
implementation of Six Sigma, with Asian countries such as India and Singapore following
behind. Their findings also showed that organizations from the healthcare and banking sectors
have largely implemented the Six Sigma methodology, and other services, such as information
technology, telecommunication services and transport services, are gradually incorporating Six
Sigma. Antony et al. (2007) presented some of the most common challenges, difficulties,
common myths, and implementation issues in the application of Six Sigma in service industry
settings. They also discussed the benefits of Six Sigma in service organizations and presented the
results of a Six Sigma pilot survey in the UK. The results of their study showed that the majority
of service organizations in the UK have been engaged in a Six Sigma initiative for just over three
years. Management commitment and involvement, customer focus, linking Six Sigma to business
strategy, organizational infrastructure, project management skills, and understanding of the Six
Sigma methodology were identified as the most critical factors for the successful introduction,
development and deployment of Six Sigma. Chakraborty and Leyer (2013) developed a
framework that defined organizational conditions by which to implement Six Sigma in financial
service companies. They showed that it is important to link Six Sigma to the strategic as well as
the operational level. By analysing case studies of financial institutions in Thailand, Buavaraporn
and Tannock (2013) formulated relationships between specific business process-improvement
(BPI) initiatives and customer satisfaction. They developed a model based on service quality
principles to explain the outcomes of BPI adoption at the operational level. In the context of
Indian service companies, Talib et al. (2013) empirically investigated the relationship between
total quality management practices and quality performance.
Our analysis of the literature regarding Six Sigma performance without the normality
assumption reveals that this assumption is usually taken for granted without a proper inquiry
into whether this assumption has been fulfilled or not. Given that critical attributes may contain
data sets that are non-normally distributed, Setijono (2010) presented a method of estimating left-
side and right-side Sigma levels. In this method, to fulfil the assumption of normality, the
primary data were replicated by first generating random numbers that followed a normal
standard distribution and then re-calculating these random numbers with the mean, standard
deviation, and the skewness of the primary data. A simulation technique was then applied to
generate a larger amount of secondary data as the basis for estimating left-side and right-side
Sigma levels. This method was applied in a Swedish house-building construction project. The
calculated Sigma levels suggested that the developer's performance was still far below the Six Sigma
level of performance. Because most service quality data follow a non-normal distribution, Pan et
al. (2010) developed a new key performance index (KPI) and its interval estimation for
measuring the service quality from customers' perceptions. Based on the non-normal process
capability indices used in manufacturing industries, a new KPI suitable for measuring service
quality was developed. In addition, the confidence interval of the proposed KPI was established
using the bootstrapping method. The results showed that the new KPI allowed practicing
managers to evaluate the actual service quality level delivered and to prioritize the possible
improvement projects from customers' perspectives. Furthermore, compared with the traditional
method of sample size determination, it was shown that a substantial amount of cost savings
could be expected using the suggested sample sizes. Hsu et al. (2008) considered the problem of
determining the adjustments for process capability with mean shift when data follow a Gamma
distribution. Using the adjusted process capability formula in Hsu et al. (2008), engineers were
able to determine the actual process capability more accurately. A real-world semi-conductor
production plant was investigated and presented to illustrate the applicability of the proposed
approach. Pan and Wu (1997) developed new performance indicators for non-normal data by
breaking quality characteristics into bilateral specifications, unilateral specification with target
value, and unilateral specification without target value. They proposed formulas for performance
measurement and put forward a flowchart for calculating performance indices of normal and non-
normal data. There are also some approaches using Clements's method, Burr's method, and the
Box-Cox method to estimate performance indices of non-normal processes. Ahmad et al.
(2008) compared these methods and concluded that estimations by Burr's method yielded better
results that are closer to the real performance factors. Other existing approaches consist of
transforming data such that they satisfy normality conditions. For example, Chou et al. (1998)
and Amiri et al. (2012) considered data transformations to convert non-normal data into normal
data. The process capability index also can be evaluated using approximate heuristic approaches.
For example, in Abbasi (2009), an artificial neural network is proposed to estimate the process
capability index (PCI) for right skewed distributions without appeal to the probability distribution
function of the process. The proposed neural network estimated PCI using skewness, kurtosis and
upper specification limit as input variables. The performance of the proposed method was
validated by a simulation study for different non-normal distributions.
The above-mentioned papers confirm that Six Sigma theory and implementation have not
been sufficiently studied for service processes. Only a few papers have dealt with this important
issue. A challenge to applying Six Sigma methodology in service processes is the fact that, in
most cases, the underlying processes are non-normal. Our literature review also shows that the
majority of the existing studies are based on the normality assumption. Our work rather develops
an approach for Six Sigma performance evaluation without assuming normal probability
distributions. The contributions of this study are twofold. First, we evaluate the Six Sigma
performance using exponential, Gamma and Weibull distributions. Guidelines are provided to
help managers to more accurately estimate the efforts required to achieve the performance goal of
Six Sigma. Next, two optimization models are developed to analyse the effect of inaccurately
estimating the quality effort. To our knowledge, this is the first time that the Six Sigma
performance methodology has been evaluated to analyse the robustness of quality optimization
models under inaccurate probability distributions.
Step 1 Illustration of the Six Sigma failure rate goal of 3.4 ppm or 2 ppb:
This is based on the use of a simple example from the manufacturing industry.
Step 2 Analysis of the exponential distribution case:
The amount of variation reduction required to achieve the Six Sigma failure rate goal is analysed for an exponentially distributed process and compared with the normally distributed case.
Step 3 Generalization to Gamma and Weibull processes:
The results of Step 2 are extended by considering Gamma and Weibull processes.
Step 4 Development of two optimization models:
The first model is a step-loss quality model, and the second model addresses a problem that arises with Six Sigma project selection. These optimization models are presented to support our analysis of Six Sigma performance. They are proposed to analyse the robustness of quality-improvement strategies.
The next sections detail each of the four steps above.
The Six Sigma performance goals assume that the process follows a normal distribution and that the process specifications are two-sided. The following example from the manufacturing
industry sheds some light on these performance goals.
Consider a process manufacturing cylindrical shafts. A shaft produced with a diameter outside the specification limits (LSL and USL) is denoted as defective (Figure 1). In the traditional goal-post
or binary view of quality, a shaft produced with a diameter that falls anywhere within the
specification limits is considered non-defective. However, in the parabolic view of quality, the
process performance increases as the diameter of the manufactured shaft is closer to the target
(nominal) value (TV) (and away from the upper and lower specification limits).
[Figure 1: goal-post view of quality; diameters below the LSL (1.94) or above the USL (2.06) are defective; the target value TV is 2.00]
Assuming that the cylindrical shaft manufacturing process follows a normal distribution with
a mean of 2.00 cm (equal to the TV) and a standard deviation (σ) of 0.02 cm, we calculate kσ, the
difference between the LSL or USL and the TV in terms of the process variation σ, as follows:

USL - TV = 2.06 - 2.00 = 0.06 = 3σ,
TV - LSL = 2.00 - 1.94 = 0.06 = 3σ.
Therefore, the process is said to be a 3σ process (i.e., the amount of variation on either side of
the TV) with a calculated failure rate of 2,700 ppm using the normal distribution function. If the
engineer can improve the quality of the cylindrical shaft-making process, thereby reducing the
process variation σ to 0.01 cm, then (USL - LSL)/σ = 12. This results in a 6σ process with a failure rate of 2
ppb, as illustrated in Figure 2. For statistical references, see, for example, Montgomery and
Runger (2011).

[Figure 2: comparison of a 3σ process and a 6σ process]
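The kσ calculation and the two-sided normal failure rates above can be sketched in a few lines; the helper names below are ours, not the paper's:

```python
from math import erf, sqrt

def sigma_level(usl, tv, sigma):
    # k such that USL - TV = k * sigma for a centred process
    return (usl - tv) / sigma

def two_sided_fallout_ppm(k):
    # P(|Z| > k) for a standard normal Z, expressed in parts per million
    phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * phi(-k) * 1e6

k = sigma_level(2.06, 2.00, 0.02)        # 3.0: the 3-sigma shaft process
# two_sided_fallout_ppm(3) -> about 2,700 ppm
# two_sided_fallout_ppm(6) -> about 0.002 ppm (2 ppb)
```

With σ reduced to 0.01 cm, `sigma_level(2.06, 2.00, 0.01)` returns 6, matching the 6σ process described above.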
Because it is difficult in practice to maintain exact alignment between the
process mean and the target value, a common practice is to account for this deviation by
allowing for a shift of 1.5σ on either side of the target value (Bothe, 2002). This leads to an effective spread of 9σ,
a 4.5σ process with a failure rate of 3.4 ppm. Figure 3 shows the effect of a 1.5σ process shift to
the right. Note that such a shift has a greater consequence in the case of 3σ processes, resulting in
a failure rate of 66,810 ppm (Table 2).

[Figure 3: a 6σ process and the same process shifted 1.5σ to the right]
As previously mentioned, many transactional and service processes do not follow a normal
distribution. Moreover, their specification limits are not necessarily two-sided. This would
generally yield a different approach and, consequently, different failure rates. Such non-normality
is commonly accounted for in Six Sigma projects using the 1.5σ allowance. However, in
this section, we show that for certain applications, a seemingly ample allowance of 1.5σ is not
always adequate.
deviation, 2kσ, is one-sided. This concept also requires that the one-sided normal be redefined so
that TV is not equal to the mean of 15 days, and therefore the total failure rate is also on one side
of the TV.
For the exponential distribution, the standard deviation equals the mean, and the resulting 3σ
process has a failure rate of 2,479 ppm. This performance compares favourably against the 3σ
process failure rate of 2,700 ppm in the normally distributed case. Furthermore, by improving the process
average to 2.5 days, we obtain a 6σ process with a failure rate of 6.144 ppm, which, in this
scenario, compares much less favourably against the 6σ normal process claim of 0.002 ppm
(Table 1).
Table 1: Normal and Exponential kσ processes.

k    Mean (days)   Normal (ppm)   Exponential (ppm)
1    15            317,311        135,335
2    7.5           45,500         18,316
3    5             2,700          2,479
4    3.75          63.34          335.5
5    3             0.573          45.4
6    2.5           0.002          6.144
7    2.143         2.56E-6        0.832
8    1.875         1.33E-9        0.113
9    1.667         -              0.015
10   1.5           -              0.002
Table (1) also demonstrates that for the normal distribution 6σ goal of 2 ppb to be valid for an
exponential distribution, a 10σ process needs to be targeted, with a mean and standard deviation of
approximately 1.5 days.
Figure 4 shows the behaviour of kσ processes for the two distribution types on a logarithmic
scale. It is interesting to note that the exponential distribution has a lower failure rate than the
normal distribution up to approximately 3σ; then, the situation reverses with an increasingly
greater difference in failure rates.
[Figure 4: fallout rate versus k for the normal and exponential distributions (logarithmic scale)]
Now, if we assume a 1.5σ shift in the process mean, then the direction of the shift becomes an
important consideration. For the 6σ exponential case, a 1.5σ shift in the process mean to the left
is inconsequential because the mean would be below the TV of zero, and in all cases, any shift to
the left side should not be considered because it is in the direction of improvement. However, if
we were to allow the mean to shift 1.5σ to the right from its current value of 2.5 days, as shown in
Figure (5), this would produce a failure rate of 8,230 ppm (Table 2). By applying a similar
analysis to the 3σ process, the calculated failure rate is 90,718 ppm for the shifted-mean
exponential.
[Figure 5: a 6σ exponential process and the same process with its mean shifted 1.5σ to the right]
Table 2: Normal and Exponential kσ processes for the case of a 1.5σ shifted mean.

k       Mean (days)   Shifted Normal (ppm)   Shifted Exponential (ppm)
1       15            697,672                449,329
2       7.5           308,770                201,897
3       5             66,810                 90,718
4       3.75          6,210                  40,762
5       3             232.6                  18,316
6       2.5           3.4                    8,230
7       2.143         0.019                  3,698
8       1.875         4.02E-5                1,662
9       1.667         3.19E-8                747
10      1.5           -                      335.5
15.75   0.9523        -                      3.4
From Table (2), for the shifted 6σ normally distributed goal of 3.4 ppm to be valid for an
exponential distribution, a 15.75σ process must be targeted with a mean and standard deviation
of approximately 0.9523 days. Similar to Figure (4), Figure (6) shows that the exponential
distribution has a lower failure rate than the normal distribution up to somewhere between 2σ and
3σ, and then the situation reverses with an increasingly greater difference in failure rates.
Figure 6. Failure rate behaviour of normal and exponential distributions for the shifted case.
From the analysis of the exponential distribution example, it is clear that one must first
determine the distribution of the underlying data in order to estimate the efforts, as defined by the
amount of reduction in σ, required to improve performance. For example, in the un-shifted case,
exponentially distributed data from a process performing at less than 3σ would achieve
improvement faster than normally distributed data. However, if the process is performing at
higher than 3σ and the target is to achieve the goal 6σ failure rate of 2 ppb, then the exponential
process would require more than twice the variation reduction as that of a normal distribution.
The variation reduction required is even larger for the shifted case (more than four times). It is
important to emphasize that higher-sigma processes for a given distribution are generally more
difficult to improve than lower ones, which would further add to the total effort required to
achieve Six Sigma performance level goals.
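The exponential failure rates in Tables 1 and 2 admit a closed form consistent with the tabulated values: with USL - TV = 2kσ and mean equal to σ, the un-shifted fallout is exp(-2k), and a 1.5σ right shift raises the mean to 2.5σ, giving exp(-2k/2.5). The construction below is inferred from the tables, so treat it as a sketch:

```python
from math import exp

def exp_fallout_ppm(k, shifted=False):
    # One-sided exponential k-sigma process: USL - TV = 2*k*sigma, mean = sigma.
    # A 1.5-sigma shift to the right raises the mean from sigma to 2.5*sigma.
    mean_in_sigma = 2.5 if shifted else 1.0
    return exp(-2 * k / mean_in_sigma) * 1e6

# exp_fallout_ppm(3)               -> about 2,479 ppm   (Table 1)
# exp_fallout_ppm(6)               -> about 6.144 ppm   (Table 1)
# exp_fallout_ppm(6, shifted=True) -> about 8,230 ppm   (Table 2)
```

Solving exp(-2k) = 2e-9 gives k ≈ 10, and exp(-2k/2.5) = 3.4e-6 gives k ≈ 15.75, reproducing the targets discussed above.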
the value than the failure rate for the Gamma distribution. These results support our concern
that simply reporting the sigma level as an indication of the quality of a product or process can be
misleading. For example, for a quality characteristic that follows the Weibull distribution with
shape parameter 3, the failure rate is greater than 25% for a 6 Sigma level of performance. This
Weibull distribution performance is far worse than the 2 ppb failure rate for the normal
distribution.
4 Optimization Models
In this section, two optimization models are developed to analyse the effect of inaccurate
estimations of probability distributions on the optimal solutions. The first model is a step-loss
quality model, and the second addresses a problem in Six Sigma project selection.
The first model provides an illustration of the effect of underestimating the effort required to improve quality on the total cost. Consider
that many quality-improvement strategies are available. Each strategy is characterized by cost
and a potential quality level. For example, a 6 sigma quality-improvement strategy is more
expensive than that for 5 sigma. We need to select the most appropriate alternative. The objective
function, which consists of minimizing the total cost, is the sum of (1) the cost incurred in
deploying the selected quality-improvement strategy and (2) the cost incurred as a result of
defective items or services. On the one hand, selecting the best quality-improvement strategy is
more expensive but leads to fewer defective items or services. On the other hand, less expensive
improvement strategies lead to higher costs as a result of poor-quality items or services. The goal
of the optimization model is to determine the most favourable trade-off between quality-improvement costs and poor-quality costs.
We use a step-loss function based on a conformance quality definition of a quality characteristic Y:

L(Y) = 0   if LSL ≤ Y ≤ USL,
L(Y) = r   otherwise,    (1)
with L(Y) as the loss associated with quality characteristic Y, LSL and USL as the lower and
upper specification limits, and r as the rejection cost. Loss function (1) is consistent with the
idea that all values within the specification range are equally desirable (binary view of quality).
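Loss function (1) is simple enough to state directly in code; the shaft limits and rejection cost used below are illustrative:

```python
def step_loss(y, lsl, usl, r):
    # Loss function (1): zero loss inside [LSL, USL], rejection cost r outside.
    return 0.0 if lsl <= y <= usl else r

# step_loss(2.03, 1.94, 2.06, 10_000) -> 0.0     (within specification)
# step_loss(2.10, 1.94, 2.06, 10_000) -> 10000   (rejected)
```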
The following notations are used:
n: number of available quality-improvement strategies
ci: cost of implementing quality-improvement strategy i
FRi: failure rate obtained under quality-improvement strategy i
Q: average number of items produced or services delivered per year
r: unitary rejection cost
Xi: decision variable, equal to 1 if quality-improvement strategy i is implemented (0 otherwise)
Minimize   Σ_{i=1..n} X_i (c_i + r · FR_i · Q)    (2)

Subject to

   Σ_{i=1..n} X_i = 1    (3)

   X_i ∈ {0, 1},  i = 1, 2, ..., n    (4)
Constraint (3) specifies that only one quality-improvement strategy can be selected.
Constraint (4) specifies that the decision variable is binary. The objective function (2) is
composed of two terms. The first term, Σ_{i=1..n} X_i c_i, is the cost of the selected quality-improvement strategy: for each fraction of a sigma improvement, a quality effort needs to be implemented, and this corresponds to a cost. The second term, r · Q · Σ_{i=1..n} X_i FR_i, is the cost incurred
as a result of poor quality. For tangible products, this may correspond to the costs of annual
production, rework, repair or scrapping, etc. In service systems, this may correspond to any cost
incurred because of failure to fulfil the service on time. The failure rate FRi depends on the
selected quality level. For example, for a 6 sigma quality level, this is equal to 3.4 ppm for a
shifted normally distributed process and to 8,230 ppm for a shifted exponentially distributed one
(Table 2). In most cases, more effort will be required to improve a process from a 5 sigma to a 6 sigma level
compared to improving a process from a 3 sigma to 4 sigma level. The average number of
services per year is Q = 100,000. We assume that the unitary rejection cost is r = $10,000.
The results are presented in Table 4 and show that when we consider a normal distribution,
the optimal solution is found for a 6 Sigma level with a total cost of $403,400. However, if the
probability distribution is instead exponential, the optimal solution corresponds to a 9 Sigma
level with a total cost of $2,847,000. On the other hand, the actual total cost if we consider a 6
Sigma level is $8,630,000 for an exponential process (instead of $403,400 calculated under the
normal assumption).
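The selection in model (2)-(4) can be checked by direct enumeration. The sketch below uses the Table 3 costs and failure rates; the shifted-normal failure rate for the 10 sigma strategy is not given in the table and is set to 0 here as a negligible-value assumption:

```python
# Enumerate model (2)-(4): pick the one strategy minimizing c_i + r * FR_i * Q.
costs = [10_000, 25_000, 50_000, 100_000, 200_000,
         400_000, 700_000, 1_300_000, 2_100_000, 4_000_000]   # c_i ($), k = 1..10
fr_normal = [697_672, 308_770, 66_810, 6_210, 232.6,
             3.4, 0.019, 4.02e-5, 3.19e-8, 0.0]               # shifted normal, ppm
fr_exp = [449_329, 201_897, 90_718, 40_762, 18_316,
          8_230, 3_698, 1_662, 747, 335.5]                    # shifted exponential, ppm
Q = 100_000      # services per year
r = 10_000       # unitary rejection cost ($)

def best_strategy(fr_ppm):
    totals = [c + r * (fr * 1e-6) * Q for c, fr in zip(costs, fr_ppm)]
    k = min(range(len(totals)), key=totals.__getitem__)
    return k + 1, totals[k]   # sigma level and minimal total cost

# Normal assumption: 6 sigma at $403,400; exponential: 9 sigma at $2,847,000.
```

Evaluating the 6 sigma strategy under the exponential failure rate gives 400,000 + 8,230,000 = $8,630,000, the figure quoted above.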
Table 3: Costs and failure rates for Normal and Exponential kσ processes (1.5σ shifted mean).

k    Mean (days)   ci ($)      Shifted Normal FRi (ppm)   Shifted Exponential FRi (ppm)
1    15            10,000      697,672                    449,329
2    7.5           25,000      308,770                    201,897
3    5             50,000      66,810                     90,718
4    3.75          100,000     6,210                      40,762
5    3             200,000     232.6                      18,316
6    2.5           400,000     3.4                        8,230
7    2.143         700,000     0.019                      3,698
8    1.875         1,300,000   4.02E-5                    1,662
9    1.667         2,100,000   3.19E-8                    747
10   1.5           4,000,000   -                          335.5

Table 4: Total costs of the quality strategies under the shifted Normal and shifted Exponential assumptions.

Quality strategy (k)   Shifted Normal ($)   Shifted Exponential ($)
1                      697,682,000          449,339,000
2                      308,795,000          201,922,000
3                      66,860,000           90,768,000
4                      6,310,000            40,862,000
5                      432,600              18,516,000
6                      403,400              8,630,000
7                      700,000              4,398,000
8                      1,300,000            2,962,000
9                      2,100,000            2,847,000
10                     4,000,000            4,335,500
The Total Quality Cost of the objective function in Model (2)-(4) consists of:
Failure/Rejection costs (e.g. scrap, rework, warranty, investigation, correction, reputation, loss of
customers); and Appraisal and Prevention costs (e.g. inspection, audit, reviews, improvement
projects). It is true that not all costs are easy to quantify (e.g. reputation, image, loss of
customers). However, many organizations have managed to estimate the various components of
quality costs. Therefore, it is not unusual to select a strategy with higher failure cost and lower
prevention cost if it minimizes the Total Cost (i.e. the Failure/Rejection, plus Prevention and
Appraisal Costs). This remark was explicitly pointed out for example by Juran and Gryna (1993)
and Kubiak and Benbow (2009). As for zero defects, it is a goal that is not possible to achieve,
even in life-threatening operations (e.g. airplanes). It remains that for companies that are very
sensitive to service level, the failure cost will be relatively high (because the estimated costs
related to loss of customers, image and reputation are very high). In this case, the lowest failure
rate is preferred as long as the available quality budget allows this.
The second model analyses the effect of an inaccurate probability distribution on the selection process of Six Sigma projects. Because many of the most successful
manufacturing and oil and gas companies implement thousands of Six Sigma projects every year,
let us consider a set of processes with nj quality-improvement candidate projects for each process.
For each process j, project 0 corresponds to the case where no quality improvement is implemented (status quo).
We need to select the most suitable set of Six Sigma projects and the corresponding quality
levels. We consider that the objective consists of maximizing the total profit, as defined by the
selling price minus the total cost. The latter is the sum of (1) the cost incurred in implementing
the selected quality projects for the selected processes and (2) the cost incurred because of
defective items or services in all selected projects. On the one hand, selecting many processes and
the best quality-improvement projects is more expensive but leads to fewer defective items or
services. On the other hand, fewer selected processes and less expensive projects lead to higher
costs incurred as a result of poor quality. The goal of the optimization model is to select the
appropriate set of processes/projects and quality levels, while taking into account the trade-off
between quality-improvement and poor-quality costs. Indeed, as the objective is to maximize the
total profit, the selling price of the conforming items is also taken into account.
We use the following notation:

m: number of processes
nj: number of candidate quality-improvement projects for process j
cij: annual cost of quality project i for process j (in $)
FRij: failure rate of process j under quality project i
Qj: number of items or services of process j
rj: unit rejection cost of process j (in $)
uj: unit selling price of process j (in $)
B: budget available for quality-improvement projects (in $)
Xij: binary decision variable equal to 1 if project i is selected for process j, and 0 otherwise

Maximize

Profit = \sum_{j=1}^{m} \sum_{i=0}^{n_j} X_{ij} \left[ u_j Q_j (1 - FR_{ij}) - c_{ij} - r_j FR_{ij} Q_j \right]   (5)

Subject to

\sum_{i=0}^{n_j} X_{ij} = 1,   j = 1, ..., m   (6)

\sum_{j=1}^{m} \sum_{i=0}^{n_j} X_{ij} c_{ij} \le B   (7)

X_{ij} \in \{0, 1\},   i = 0, 1, 2, ..., n_j;   j = 1, ..., m   (8)
The objective function (5) is composed of three terms. The first term, \sum_{j=1}^{m} \sum_{i=0}^{n_j} X_{ij} u_j Q_j (1 - FR_{ij}),
is the total revenue generated when units are sold or services are fulfilled. The second one,
\sum_{j=1}^{m} \sum_{i=0}^{n_j} X_{ij} c_{ij}, is the annual cost of quality for the selected projects. The last term,
\sum_{j=1}^{m} \sum_{i=0}^{n_j} X_{ij} r_j FR_{ij} Q_j, is the total rejection cost
for all selected projects. Constraint (6) specifies that for each process, only one quality-improvement
project can be selected. Constraint (7) specifies the budget constraint. In fact, most
companies do not have unlimited funds to implement process-improvement projects. Therefore,
there is generally a limited budget allocated to all Six Sigma projects. Thus, constraint (7)
guarantees that the budget limit is not exceeded. The last constraint specifies that the decision
variables are binary.
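For small m and nj, Model (5)-(8) can be solved by straightforward enumeration. The sketch below is ours, not the paper's solution method; the dictionary layout and the single hypothetical process used in the usage line are illustrative assumptions:

```python
from itertools import product

def solve(processes, budget):
    """Enumerate all assignments of one project per process (constraint (6)),
    keep those within budget (constraint (7)), and return the assignment
    that maximizes the profit objective (5)."""
    best_choice, best_profit = None, float("-inf")
    options = [sorted(p["c"]) for p in processes]  # project indices per process
    for choice in product(*options):               # one project i per process j
        cost = sum(p["c"][i] for p, i in zip(processes, choice))
        if cost > budget:                          # budget constraint (7)
            continue
        profit = sum(
            p["u"] * p["Q"] * (1 - p["FR"][i])     # revenue from conforming units
            - p["c"][i]                            # project implementation cost
            - p["r"] * p["FR"][i] * p["Q"]         # rejection cost of defectives
            for p, i in zip(processes, choice)
        )
        if profit > best_profit:
            best_choice, best_profit = choice, profit
    return best_choice, best_profit

# Hypothetical single-process example; project 0 is the status quo.
proc = {"Q": 1000, "r": 5, "u": 10,
        "c": {0: 0, 1: 2000}, "FR": {0: 0.25, 1: 0.0625}}
print(solve([proc], budget=2500))  # -> ((1,), 7062.5)
```

Exhaustive search grows exponentially in the number of processes; for realistic instances a 0-1 integer programming solver would be used instead, but the enumeration makes the structure of (5)-(8) explicit.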
Consider that we must choose between 2 candidate processes or operations for improvement
(j = 1, 2) and that 2 potential quality-improvement projects are possible (i = 1, 2). For both
processes, the first project corresponds to a 3 Sigma quality level, and the second to a 6 Sigma
quality level. We assume that the current situation (i = 0) corresponds to a One Sigma quality
level. The failure rates of the normal and exponential distributions are grouped in Table 5. The
data of the two processes and their candidate projects are given in Table 6.
Table 5. Failure rates (in ppm)

Quality project i    Normal     Shifted Exponential
0 (One Sigma)        697,672    449,329
1 (Three Sigma)      66,810     90,718
2 (Six Sigma)        3.4        8,230
Table 6. Data of the two processes

Process (j)  Quality project (i)  No. of items or services (Qj)  Cost (cij) in $  Unit rejection cost (rj) in $  Unit selling price (uj) in $
1            0                    100,000                        0                12                             20
1            1                    100,000                        10,000           12                             20
1            2                    100,000                        200,000          12                             20
2            0                    120,000                        0                10                             18
2            1                    120,000                        5,000            10                             18
2            2                    120,000                        150,000          10                             18
The results are given in Tables 7 and 8 for normal and exponential distributions, respectively.
Table 7. Results for the normal distribution

Selected projects                                                                           Profit in $
Status quo                                                                                  -416,728
Process 1 with Project of Quality level 1                                                   1,592,030
Process 1 with Project of Quality level 2                                                   1,592,030
Process 2 with Project of Quality level 1                                                   1,697,968
Process 2 with Project of Quality level 2                                                   1,777,438
Process 1 with Project of Quality level 1, and Process 2 with Project of Quality level 1    3,706,726
Process 1 with Project of Quality level 1, and Process 2 with Project of Quality level 2    3,786,197
Process 1 with Project of Quality level 2, and Process 2 with Project of Quality level 1    Infeasible
Process 1 with Project of Quality level 2, and Process 2 with Project of Quality level 2    Infeasible
Table 8. Results for the exponential distribution

Selected projects                                                                           Profit in $
Status quo                                                                                  1,212,402
Process 1 with Project of Quality level 1                                                   2,339,957
Process 1 with Project of Quality level 2                                                   Infeasible
Process 2 with Project of Quality level 1                                                   2,407,335
Process 2 with Project of Quality level 2                                                   Infeasible
Process 1 with Project of Quality level 1, and Process 2 with Project of Quality level 1    3,534,890
Process 1 with Project of Quality level 1, and Process 2 with Project of Quality level 2    Infeasible
Process 1 with Project of Quality level 2, and Process 2 with Project of Quality level 1    Infeasible
Process 1 with Project of Quality level 2, and Process 2 with Project of Quality level 2    Infeasible
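As a consistency check, the status quo rows of Tables 7 and 8 follow directly from the One Sigma failure rates in Table 5 and the process data in Table 6. The sketch below assumes, as those rows imply, a zero cost c0j for the status quo project:

```python
# Data from Table 6 (Q_j, r_j, u_j); project 0 (status quo) is assumed
# to carry zero implementation cost, as the status quo profits imply.
procs = [
    {"Q": 100_000, "r": 12, "u": 20},
    {"Q": 120_000, "r": 10, "u": 18},
]
# One Sigma failure rates from Table 5, converted from ppm to fractions.
FR_normal, FR_exp = 697_672 / 1e6, 449_329 / 1e6

def status_quo_profit(fr):
    # Objective (5) with X_0j = 1 and c_0j = 0 for both processes:
    # revenue from conforming units minus rejection cost of defectives.
    return sum(p["u"] * p["Q"] * (1 - fr) - p["r"] * fr * p["Q"] for p in procs)

print(round(status_quo_profit(FR_normal)))  # -> -416728 (loss under normality)
print(round(status_quo_profit(FR_exp)))     # -> 1212402 (profit under exponentiality)
```

Both values match the status quo rows of Tables 7 and 8, which supports the reading of the data in Table 6.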
We note the following for the status quo situation (One Sigma level): a loss is observed under
the normal distribution assumption (- $416,728), whereas a profit is obtained under the
exponential assumption ($1,212,402). This is because the failure rate of an exponentially distributed process is less than that of a
normal process for quality levels below 3 Sigma. On the other hand, for the 3 and 6 Sigma levels,
the exponential failure rate is higher than the normal one.
Under the normal distribution assumption, the model selects Process 1 with Project of Quality
level 1, and Process 2 with Project of Quality level 2. When distributions are considered exponential, the model instead selects Process 1 with
Project of Quality level 1, and Process 2 with Project of Quality level 1.
The aforementioned difference is the quality level (i.e., project) selection for Process 2. The
optimal profit realized under the normal distribution is $3,786,197; this same solution is infeasible
for exponential processes under the fixed budget constraint.
The results above confirm that it is mandatory to accurately estimate the efforts required to
achieve the performance goal of Six Sigma. That is, once data are collected, special emphasis
must be put on identifying the right probability distribution, instead of merely assuming a normal
distribution, as is often the case in current practice.
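One simple way to avoid defaulting to normality is to compare candidate distributions on the collected data. The sketch below contrasts the maximized log-likelihoods of normal and exponential fits on synthetic service times; the data and the comparison itself are our illustration (in practice, formal goodness-of-fit tests such as Anderson-Darling would also be applied):

```python
import math, random

def normal_loglik(xs):
    # Gaussian log-likelihood evaluated at the MLE (sample mean, biased variance).
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def exponential_loglik(xs):
    # Exponential log-likelihood evaluated at the MLE rate 1 / sample mean.
    n = len(xs)
    mean = sum(xs) / n
    return -n * (math.log(mean) + 1)

random.seed(1)  # deterministic synthetic sample of 200 "cycle times"
cycle_times = [random.expovariate(1.0) for _ in range(200)]

# The exponential fit should dominate on exponentially generated data.
print(exponential_loglik(cycle_times) > normal_loglik(cycle_times))  # -> True
```

The higher-likelihood family is the better candidate; repeating the comparison with Gamma or Weibull fits would extend the screening to the distributions studied in this paper.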
5 Conclusions
In the existing literature, Six Sigma theory and implementation have not been sufficiently
studied for service processes. Only a relatively small number of papers have dealt with this
important issue. A challenge to applying Six Sigma methodology in service processes is the fact
that, in most cases, the underlying processes are non-normal. However, the majority of the
existing studies are based on the normality assumption. Unlike the prior literature, this paper has
developed an approach for Six Sigma performance evaluation without assuming normal
probability distributions. In comparison to the existing studies, the contributions of this study are
twofold. First, we have evaluated the Six Sigma performance using exponential, Gamma and
Weibull distributions. Next, two optimization models were developed to analyse the effect of
inaccurately estimating the quality effort. To the best of our knowledge, this is the first time that
the Six Sigma performance methodology has been evaluated to analyse the robustness of quality
optimization models under inaccurate probability distributions. Managerial insights were
provided throughout the paper using many illustrative numerical examples. Guidelines were then
provided to help managers to more accurately estimate the efforts required to achieve the
performance goal of Six Sigma. We demonstrated that, by using the exponential distribution
rather than the normal distribution, the variation reduction required to improve a process below a
given k sigma level is smaller than that required when the process must move beyond that level. Moreover, we
have shown that achieving the performance goal of Six Sigma when the process is exponential is
more demanding than for the normally distributed case. The consequences of non-normality
on the failure rate were analysed for the Gamma and Weibull distributions. These analyses
demonstrate that attaining the Six Sigma performance goals requires different effort levels
depending on the distribution type. Therefore, it is reasonable that the distribution type be identified
and specifically accounted for when an organization plans its next Six Sigma initiative. We have
shown that achieving the performance goal of Six Sigma in the exponential, Weibull and Gamma
cases requires a greater variation reduction than for the normally distributed case, although
below the Six Sigma level it is possible that less variation reduction is required. Finally,
using two optimization models, we have analysed the robustness of the cost-minimizing optimal
solution when an exponential process is assumed to be normal. The results indicated that an
incorrect estimation of the probability distribution, and thus of the quality effort, may lead to
erroneous solutions when selecting quality programs and Six Sigma projects.
The purpose of this article is not to advocate changing the term Six Sigma or to cast doubt on
its business value. Instead, we hope to highlight the effect of distribution types in an effort to
promote professionalism and accuracy in regard to the 3.4 ppm or 2 ppb target performance goals
made by Six Sigma practitioners. That is, instead of systematically assuming normal distributions
in Six Sigma projects, managers need to make significant efforts and maintain a sense of awareness to
identify the right probability distribution for each process.
This paper does not identify which distribution is better for service systems. We are currently
developing a theoretical model to explain why the normality assumption is highly questionable
for cycle times in service processes. In addition, no field study is provided to support our claims.
Future work will aim to apply the results of this study to the Commercial Department of a Gas company.
Acknowledgement

This work was supported under grant number XP 01/14.
References
Amiri, A., Bashiri, M., Mogouie, H., & Doroudyan, M.H. (2012). Non-normal multi-response
optimization by multivariate process capability index. Scientia Iranica, 19 (6), 1894-1905.
Antony, J., Antony, F.J., Kumar, M., & Cho, B.R. (2007). Six sigma in service organizations:
benefits, challenges and difficulties, common myths, empirical observations and success factors.
International Journal of Quality & Reliability Management, 24 (3), 294-311.
Behara, R.S., Fontenot, G.F., & Gresham, A. (1995). Customer satisfaction measurement and
analysis using Six Sigma. International Journal of Quality & Reliability Management, 12 (3), 9-18.
Bothe, D.R. (2002). Statistical reason for the 1.5σ shift. Quality Engineering, 14 (3), 479-487.
Buavaraporn, N., & Tannock, J. (2013). Business process improvement in services: case studies
of financial institutions in Thailand. International Journal of Quality & Reliability Management,
30 (3), 319-340.
Chakraborty, A., & Chuan, T.K. (2012). Case study analysis of Six Sigma implementation in
service organisations. Business Process Management Journal, 18 (6), 992-1019.
Chakraborty, A., & Chuan, T.K. (2013). An empirical analysis on Six Sigma implementation in
service organisations. International Journal of Lean Six Sigma, 4 (2), 141-170.
Chakraborty, A., & Leyer, M. (2013). Developing a Six Sigma framework: perspectives from
financial service companies. International Journal of Quality & Reliability Management, 30 (3),
256-279.
Chou Y.M., Polansky A.M., & Mason R.L. (1998). Transforming Non-Normal Data to Normality
in Statistical Process Control. Journal of Quality Technology, 30 (2), 133-141.
Clements, J. (1989). Process capability calculations for non-normal distributions. Quality
Progress, 22, 95-100.
English, J., & Taylor, G. (1993). Process capability analysis a robustness study. International
Journal of Production Research, 31, 1621-1635.
Farnum, N. (1996). Using Johnson curves to describe non-normal process data. Quality
Engineering, 9, 335-339.
Harry, M., & Schroeder, R. (1999). Six Sigma: The breakthrough management strategy
revolutionizing the world's top corporations. Doubleday, Random House.
Hsu, Y.C., Pearn, W.L., & Wu, P.C. (2008). Capability adjustment for gamma processes with
mean shift consideration in implementing Six Sigma program. European Journal of Operational
Research, 191, 517-529.
Juran, J.M., & Gryna, F.M. (1993). Quality Planning & Analysis. (3rd ed.). McGraw-Hill, Inc.
Kubiak, T.M., & Benbow, D.W. (2009). The Certified Six Sigma Black Belt, (2nd ed.). ASQ
Quality Press.
Montgomery, D., & Runger, G. (2011). Applied statistics and probability for engineers. (5th ed.).
John Wiley & Sons.
Nourelfath, M. (2011). Service level robustness in stochastic production planning under random
machine breakdowns. European Journal of Operational Research, 212, 81-88.
Pan, J.L., & Wu, S.L. (1997). Process capability analysis for non-normal relay test data.
Microelectronics Reliability, 37 (3), 421-428.
Pan, J.N., Kuo, T.C., & Bretholt, A. (2010). Developing a new key performance index for
measuring service quality. Industrial Management & Data Systems, 110 (6), 823-840.
Setijono, D. (2010). Normal approximation through data replication when estimating DisPMO,
DePMO, left-side and right-side Sigma levels from non-normal data. International Journal of
Quality & Reliability Management, 27 (3), 318-332.
Stone, K.B. (2012). Four decades of lean: a systematic literature review. International Journal of
Lean Six Sigma, 3 (2), 112-132.
Talib, F., Rahman, Z., & Qureshi, M.N. (2013). An empirical investigation of relationship
between total quality management practices and quality performance in Indian service
companies. International Journal of Quality & Reliability Management. 30 (3), 280-318.
APPENDIX
In this appendix, the failure rates for the Gamma and Weibull distributions are presented for
different sigma levels. Because the failure rates for both distributions are only affected by the
value of their shape parameter, the scale parameter is fixed at an arbitrary value of 2 in all the
calculations below. The probability density function, mean, and variance of the Gamma
distribution, with shape \alpha and scale \theta, are given by, respectively:

f(x; \alpha, \theta) = \frac{1}{\Gamma(\alpha)\,\theta^{\alpha}} \, x^{\alpha - 1} e^{-x/\theta}, \quad \mu = \alpha\theta, \quad \sigma^2 = \alpha\theta^2

whereas the probability density function, mean, and variance of the Weibull distribution, with
shape \beta and scale \lambda, are given by, respectively:

f(x; \lambda, \beta) = \frac{\beta}{\lambda} \left(\frac{x}{\lambda}\right)^{\beta - 1} e^{-(x/\lambda)^{\beta}}, \quad \mu = \lambda\,\Gamma(1 + 1/\beta), \quad \sigma^2 = \lambda^2 \left[ \Gamma(1 + 2/\beta) - \Gamma^2(1 + 1/\beta) \right]
Gamma distribution (scale θ = 2; shape α = 0.5, 1, 1.5, 2, 2.5, 3)

3 sigma level
Mean  Variance  Std. dev.  USL       Failure   Failure (ppm)
1     2         1.414214   8.485281  0.003580  3580.312011
2     4         2          12        0.002479  2478.752177
3     6         2.44949    14.69694  0.002095  2094.834923
4     8         2.828427   16.97056  0.001959  1958.571092
5     10        3.162278   18.97367  0.001944  1943.966576
6     12        3.464102   20.78461  0.002005  2005.423022

4 sigma level
Mean  Variance  Std. dev.  USL       Failure   Failure (ppm)
1     2         1.414214   11.31371  0.000769  769.3695568
2     4         2          16        0.000335  335.4626279
3     6         2.44949    19.59592  0.000206  205.8235242
4     8         2.828427   22.62742  0.000150  150.282253
5     10        3.162278   25.29822  0.000122  122.0212981
6     12        3.464102   27.71281  0.000106  106.414285

5 sigma level
Mean  Variance  Std. dev.  USL       Failure   Failure (ppm)
1     2         1.414214   14.14214  0.000170  169.5041988
2     4         2          20        4.54E-05  45.39992976
3     6         2.44949    24.4949   1.97E-05  19.68936532
4     8         2.828427   28.28427  1.09E-05  10.92284241
5     10        3.162278   31.62278  7.06E-06  7.055632298
6     12        3.464102   34.64102  5.06E-06  5.057500709

6 sigma level
Mean  Variance  Std. dev.  USL       Failure   Failure (ppm)
1     2         1.414214   16.97056  3.8E-05   37.96389309
2     4         2          24        6.14E-06  6.144212353
3     6         2.44949    29.39388  1.85E-06  1.850780182
4     8         2.828427   33.94113  7.66E-07  0.766196071
5     10        3.162278   37.94733  3.87E-07  0.386628432
6     12        3.464102   41.56922  2.24E-07  0.223636375

Weibull distribution (scale λ = 2; shape β = 0.5, 1, 1.5, 2, 2.5, 3)

3 sigma level
Mean      Variance  Std. dev.  USL       Failure      Failure (ppm)
4         80        8.944272   53.66563  2.37699E-13  2.37699E-07
2         4         2          12        0.002478752  2478.752177
1.805491  1.502761  1.225872   7.35523   0.061397259  61397.25947
1.772454  0.858407  0.926503   5.559017  0.234590382  234590.3817
1.774528  0.576587  0.759333   4.555999  0.472426159  472426.1588
1.785959  0.421332  0.649101   3.894603  0.690936954  690936.9542

4 sigma level
Mean      Variance  Std. dev.  USL       Failure      Failure (ppm)
4         80        8.944272   71.55418  0            0
2         4         2          16        0.000335463  335.4626279
1.805491  1.502761  1.225872   9.806973  0.020280255  20280.25452
1.772454  0.858407  0.926503   7.412022  0.115651910  115651.9102
1.774528  0.576587  0.759333   6.074665  0.299021372  299021.3716
1.785959  0.421332  0.649101   5.192804  0.519333147  519333.147

5 sigma level
Mean      Variance  Std. dev.  USL       Failure      Failure (ppm)
4         80        8.944272   89.44272  0            0
2         4         2          20        4.53999E-05  45.39992976
1.805491  1.502761  1.225872   12.25872  0.006547469  6547.468671
1.772454  0.858407  0.926503   9.265028  0.054805873  54805.87313
1.774528  0.576587  0.759333   7.593331  0.180118031  180118.0309
1.785959  0.421332  0.649101   6.491006  0.370488383  370488.3827

6 sigma level
Mean      Variance  Std. dev.  USL       Failure      Failure (ppm)
4         80        8.944272   107.3313  0            0
2         4         2          24        6.14421E-06  6.144212353
1.805491  1.502761  1.225872   14.71046  0.002081569  2081.568597
1.772454  0.858407  0.926503   11.11803  0.025269028  25269.02772
1.774528  0.576587  0.759333   9.111997  0.104679491  104679.4911
1.785959  0.421332  0.649101   7.789207  0.253956951  253956.9507
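The shape-parameter-one rows of the tables above, where both the Gamma and the Weibull reduce to an exponential distribution with scale 2, can be checked in closed form. The pattern USL = 2k × (std. dev.) for sigma level k is inferred from the tabulated values, not stated in the text:

```python
import math

SCALE = 2.0  # scale parameter fixed at 2, as in the appendix
# For shape parameter 1, both the Gamma and the Weibull distributions
# reduce to an exponential with mean and standard deviation equal to SCALE,
# whose tail probability beyond x is exp(-x / SCALE).
for k in (3, 4, 5, 6):
    usl = 2 * k * SCALE                    # USL pattern observed in the tables
    ppm = math.exp(-usl / SCALE) * 1e6     # failure rate in parts per million
    print(k, round(ppm, 2))                # -> 2478.75, 335.46, 45.4, 6.14
```

The four printed values reproduce the shape-1 rows of the 3, 4, 5 and 6 sigma tables for both distributions.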