To cite this article: Douglas C. Montgomery & Connie M. Borror (2017): Systems for modern
quality and business improvement, Quality Technology & Quantitative Management, DOI:
10.1080/16843703.2017.1304032
Introduction
The quality of goods and services produced by society has always been an important consideration in decisions made by consumers. However, our complete awareness of its importance and the introduction of formal methods for quality control and improvement have been an evolutionary development. Montgomery (2013) and Montgomery, Jennings, and Pfund (2011) present a timeline of some of the important milestones in this evolutionary process. We will briefly discuss some of the events on this timeline.
The development of standardized and interchangeable parts was a key idea in making units of manufactured product more similar to each other. This had a generally favourable impact on variability, leading to higher-quality products. Frederick W. Taylor introduced some principles of scientific management as mass production industries began to develop prior to 1900. Taylor pioneered dividing work into tasks so that the product could be manufactured and assembled more easily. His work led to substantial improvements in productivity. Also, because of standardized production and assembly methods, the quality of manufactured goods was positively impacted as well. However, along with the standardization of work methods came the concept of work standards – a standard time to accomplish the work, or a specified production rate. Frank and Lillian Gilbreth and others extended this concept to the study of motion and work design. Much of this had a positive impact on productivity, but it often did not sufficiently emphasize the quality aspect of work. Furthermore, if carried to extremes, work standards risk halting innovation and continuous improvement, which we recognize today as a vital aspect of all work activities.
At the start of the twentieth century, science and most of industry were based on determinism, a widespread belief that physical laws and economic models of real-world phenomena were perfect descriptors, and that all we needed to predict future events perfectly was enough data so that the physical constants in these models could be calculated accurately. Some successes in the nineteenth century supported this belief; for example, Newtonian mechanics was used to predict the existence of the planet Neptune. Variability and statistical thinking were not widespread, and in many fields they were unappreciated, notable exceptions being agriculture and genetics. Ernest Rutherford (1871–1937), winner of the 1908 Nobel Prize in Chemistry, said 'If your experiment needs statistics, you ought to have done a better experiment'. Albert Einstein (1879–1955), winner of the 1921 Nobel Prize in Physics, famously said that 'God doesn't play dice' in his discussion of quantum physics.
Walter A. Shewhart of the Bell Telephone Laboratories was one of the first scientists to recognize
the harmful aspects of variability on product and process quality. In 1924, Shewhart developed the
statistical control chart concept. Many people consider this the formal beginning of the quality control
and improvement field. We think that the control chart is one of the most important technological developments of the twentieth century. It has a simple theoretical basis; it is easy to apply in practice and has been widely implemented in software; it can be applied in an extremely diverse range of settings, spanning almost all manufacturing and service organizations; and it has proven successful in helping organizations achieve often dramatic changes in business performance. Shewhart's contributions go far beyond the mathematical development of the control chart. His concepts of chance and assignable causes of variation and rational subgroups define an operational strategy for implementing quality improvement via the control chart and other tools of statistical process control.
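Shewhart's operating logic can be sketched in a few lines of code (the process parameters and subgroup means below are ours, invented purely for illustration): subgroup averages are compared against limits placed three standard errors on either side of the process mean, and points outside the limits are treated as signals of assignable causes.

```python
# Minimal sketch of the Shewhart x-bar chart logic; the process parameters
# and subgroup means below are invented for illustration.
import math

def xbar_limits(mu, sigma, n):
    """Center line and 3-sigma limits for averages of subgroups of size n."""
    half_width = 3.0 * sigma / math.sqrt(n)
    return mu - half_width, mu, mu + half_width

def assignable_cause_signals(subgroup_means, lcl, ucl):
    """Indices of subgroup means falling outside the control limits."""
    return [i for i, m in enumerate(subgroup_means) if m < lcl or m > ucl]

mu, sigma, n = 10.0, 0.5, 5          # in-control mean, sd, subgroup size
lcl, cl, ucl = xbar_limits(mu, sigma, n)

# Five in-control subgroup means, then one drawn from a shifted process:
means = [10.1, 9.9, 10.2, 9.8, 10.0, 10.9]
print(round(lcl, 3), round(ucl, 3), assignable_cause_signals(means, lcl, ucl))
```

Only the last subgroup falls outside the limits, illustrating how the chart separates chance causes (the first five points) from an assignable cause (the shifted sixth point).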
Toward the end of the 1920s, Harold F. Dodge and Harry G. Romig, both of Bell Telephone
Laboratories, developed statistically based acceptance-sampling as an alternative to 100% inspection.
By the middle of the 1930s, statistical quality-control methods were in wide use at Western Electric,
the manufacturing arm of the Bell System. However, the value of statistical quality control was not
widely recognized by industry.
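The core calculation behind such a sampling plan is easy to reproduce (the plan parameters below are invented for illustration, not taken from Dodge and Romig): a lot is accepted if a random sample of n items contains at most c defectives, so the acceptance probability follows from the binomial distribution.

```python
# Illustrative single-sampling plan (n, c): accept the lot if a random sample
# of n items contains at most c defectives. Example values are invented.
from math import comb

def accept_prob(n, c, p):
    """P(accept lot) when the true fraction defective is p (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Points on the operating-characteristic curve of the plan n = 50, c = 1:
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"p = {p:.2f}  P(accept) = {accept_prob(50, 1, p):.3f}")
```

The resulting operating-characteristic curve shows how the plan discriminates: good lots (small p) are accepted with high probability, and the acceptance probability drops as lot quality deteriorates.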
The Second World War saw a greatly expanded use and acceptance of statistical quality-control
concepts in manufacturing industries. Wartime experience made it apparent that statistical techniques
were necessary to control and improve product quality. The American Society for Quality Control
(now the American Society for Quality) was formed in 1946. This organization promotes the use of
quality improvement techniques for all types of products and services. It offers a number of conferences, technical publications, and training programs in quality assurance.
The 1950s and 1960s saw the emergence of reliability engineering, the introduction of several
important textbooks on statistical quality control, and the viewpoint that quality is a way of managing the organization. In the 1950s, designed experiments for product and process improvement
were first introduced in the United States. The initial applications were in the chemical industry.
These methods were widely exploited in the chemical industry, and they are often cited as one of the
primary reasons that the U.S. chemical industry is one of the most competitive in the world and has
lost little business to foreign companies. The spread of these methods outside the chemical industry
was relatively slow until the late 1970s or early 1980s, when many Western companies discovered that
their Japanese competitors had been systematically using designed experiments since the 1960s for
process improvement, new process development, evaluation of new product designs, improvement
of reliability and field performance of products, and many other aspects of product design, including
selection of component and system tolerances. This discovery sparked further interest in statistically
designed experiments and resulted in extensive efforts to introduce the methodology in engineering
and development organizations in industry, as well as in academic engineering curricula.
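Part of the appeal of designed experiments is how cheaply they estimate factor effects. A minimal 2^2 factorial sketch (factor roles and yield values are invented for illustration) computes each effect as a contrast of the observed responses:

```python
# Effect estimation for a 2^2 factorial experiment (invented yield data).
# Runs are in standard order: (1), a, b, ab, with factors A and B at -/+ levels.
a_levels = [-1, +1, -1, +1]
b_levels = [-1, -1, +1, +1]
yields   = [60.0, 72.0, 54.0, 68.0]   # hypothetical process yield (%)

n_runs = len(yields)
# A main effect = average response at A = +1 minus average at A = -1
# (similarly for B); the AB interaction uses the product of the level codes.
effect_A  = sum(y * a for y, a in zip(yields, a_levels)) / (n_runs / 2)
effect_B  = sum(y * b for y, b in zip(yields, b_levels)) / (n_runs / 2)
effect_AB = sum(y * a * b for y, a, b in zip(yields, a_levels, b_levels)) / (n_runs / 2)
print(effect_A, effect_B, effect_AB)
```

With only four runs, the experimenter learns that factor A raises yield substantially, factor B lowers it, and the two factors barely interact; this economy of information is what made the methods so valuable in the chemical industry.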
Total quality management (TQM) emerged in the early 1980s as a strategy for implementing and managing quality improvement on an organization-wide basis. TQM involves changes in organizations and work culture, customer focus, supplier quality improvement, integration of the quality system with business goals, and many other activities to focus all elements of the organization around the quality improvement goal. Typically, organizations that have implemented a TQM approach to quality improvement have quality councils or high-level teams that deal with strategic quality initiatives, workforce-level teams that focus on routine production or business activities, and cross-functional teams that address specific quality improvement issues.
TQM has had only moderate success for a variety of reasons, but frequently because there is insufficient effort devoted to widespread utilization of the technical tools of variability reduction. Many
organizations saw the mission of TQM as one of training. Consequently, many TQM efforts engaged
in widespread training of the workforce in the philosophy of quality improvement and a few basic
methods. This training was usually placed in the hands of human resources departments, and much
of it was ineffective. The trainers often had no real idea about what methods should be taught, or how
the methods should be used, and success was usually measured by the percentage of the workforce
that had been ‘trained,’ not by whether any measurable impact on business results had been achieved.
Some general reasons for the lack of conspicuous success of TQM include (1) lack of top-down,
high-level management commitment and involvement; (2) inadequate use of statistical methods and
insufficient recognition of variability reduction as a prime objective; (3) general as opposed to specific
business-results-oriented objectives; and (4) too much emphasis on widespread training as opposed
to focused technical education.
Another reason for the erratic success of TQM is that many managers and executives regarded
it as just another ‘program’ to improve quality. During the 1950s and 1960s, programs such as Zero
Defects and Value Engineering abounded, but they had little real impact on quality and productivity
improvement. During the heyday of TQM in the 1980s, another popular programme was the 'quality is free' initiative, in which management worked on identifying the cost of quality. Indeed, identification of quality costs can be very useful, but the 'quality is free' advocates often had no idea about what to do to actually improve many types of complex industrial processes. In fact, the leaders of this initiative
had no knowledge about statistical methodology and completely failed to understand its role in quality
improvement. When TQM is wrapped around an ineffective programme such as this, disaster is often the
result. TQM and related activities cannot be the major focus of an effective quality management system.
The International Organization for Standardization, known as ISO, has developed a series of standards for quality systems. The first standards were issued in 1987. The current version is the ISO 9000 series. Many organizations have required their suppliers to become certified under ISO 9000, or one of the standards that are more industry-specific.
Much of the focus of ISO 9000 (and of the industry-specific standards) is on formal documentation of the quality system – that is, on quality assurance activities. Organizations usually must make extensive efforts to bring their documentation into line with the requirements of the standards; this is the Achilles' heel of ISO 9000 and other related or derivative standards. There is far too much effort devoted to documentation, paperwork, and bookkeeping and not nearly enough to actually reducing variability and improving processes and products. Furthermore, many of the third-party registrars, auditors, and consultants who work in this area are not sufficiently educated or experienced in the technical tools required for quality improvement or in how these tools should be deployed. They are all too often unaware of what constitutes modern engineering and statistical practice, and usually are familiar with only the most elementary techniques. Therefore, they concentrate largely on the
documentation, record keeping, and paperwork aspects of certification. There is also evidence that ISO
certification or certification under one of the other industry-specific standards does little to prevent
poor quality products from being designed, manufactured, and delivered to the customer.
For example, in 1999–2000, there were numerous incidents of rollover accidents involving Ford
Explorer vehicles equipped with Bridgestone/Firestone tires. There were nearly 300 deaths in the United
States alone attributed to these accidents, which led to a recall by Bridgestone/Firestone of approximately 6.5 million tires. Apparently, many of the tires involved in these incidents were manufactured
at the Bridgestone/Firestone plant in Decatur, Illinois. In an article on this story in Time magazine
(18 September 2000), there was a photograph (p. 38) of the sign at the entrance of the Decatur plant
which stated that the plant was 'QS 9000 Certified' and 'ISO 14001 Certified' (ISO 14001 is an environmental standard). Although the assignable causes underlying these incidents have not been fully
discovered, there are clear indicators that despite quality systems certification, Bridgestone/Firestone
experienced significant quality problems. ISO certification alone is not an effective quality management
system. The certification is no guarantee that good quality products are being designed, manufactured,
and delivered to the customer. An ISO certified organization can manufacture and sell concrete life
jackets just so long as the process is adequately documented, the materials are traceable, and there
is a process available for the survivors and the estates of the deceased to register complaints. Relying
on ISO certification as the basis of a quality management system is a strategic management mistake.
The Six Sigma initiative was developed by Motorola in the mid-1980s. The Six Sigma concept is illustrated in Figure 1, which shows a normal probability distribution as a model for a quality characteristic with the specification limits at three standard deviations on either side of the mean. The underlying idea is to reduce the variability in the process so that the specification limits are at least six standard deviations from the mean. Then, as shown in Figure 1(a), there will be only about 2 ppb defective. Equivalently, each unit is nondefective with probability 0.999999998, so a hypothetical product assembled from 100 such independent components is nondefective with probability 0.999999998^100 ≈ .9999998, or about .2 ppm of assembled products defective, a much better situation. Refer to Montgomery and Woodall (2008) for an overview of Six Sigma.
When the Six Sigma concept was initially developed, an assumption was made that when the process reached the Six Sigma quality level, the process mean was still subject to disturbances that could
cause it to shift by as much as 1.5 standard deviations off target. This situation is shown in Figure 1(b).
Under this scenario, a Six Sigma process would produce about 3.4 ppm defective.
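The 2 ppb and 3.4 ppm figures follow directly from the normal model and can be checked in a few lines:

```python
# Defect rates implied by the Six Sigma model (normal quality characteristic,
# specification limits at +/- 6 sigma from target), with and without the
# assumed 1.5-sigma shift of the process mean.
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def fraction_defective(spec_sigmas, mean_shift=0.0):
    """P(outside specs) when the mean sits mean_shift sigmas off target."""
    return (1.0 - phi(spec_sigmas - mean_shift)) + phi(-spec_sigmas - mean_shift)

centered_ppb = fraction_defective(6.0) * 1e9        # Figure 1(a): ~2 ppb
shifted_ppm  = fraction_defective(6.0, 1.5) * 1e6   # Figure 1(b): ~3.4 ppm
print(round(centered_ppb, 2), round(shifted_ppm, 2))
```

The centered process yields about 2 defective parts per billion, while allowing the mean to drift 1.5 standard deviations off target inflates this to about 3.4 ppm, the figure usually quoted for a Six Sigma process.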
There is an apparent inconsistency in this. We can only make predictions about process performance
when the process is stable – that is, when the mean (and standard deviation, too) is constant. If the
mean is drifting around, and ends up as much as 1.5 standard deviations off target, a prediction of
3.4 ppm defective may not be very reliable, because the mean might shift by more than the ‘allowed’
1.5 standard deviations. Process performance isn’t predictable unless the process behaviour is stable.
However, no process or system is ever truly stable, and even in the best of situations, disturbances
occur. These disturbances can result in the process mean shifting off-target, an increase in the process
standard deviation, or both. The concept of a Six Sigma process is one way to model this behaviour.
Like all models, it’s probably not exactly right, but it has proven to be a useful way to think about
process performance and improvement.
Motorola established Six Sigma as both an objective for the corporation and as a focal point for process and product quality improvement efforts. In recent years, Six Sigma has spread beyond Motorola
and has come to encompass much more. It has become a programme for improving corporate business
performance by both improving quality and paying attention to reducing costs. Companies involved
in a Six Sigma effort utilize specially trained individuals (Green Belts, Black Belts, and Master Black
Belts) to lead teams focused on projects that have both quality and business (economic) impacts for
the organization. The ‘belts’ have specialized training and education on statistical methods and quality
and process improvement tools that equip them to function as team leaders, facilitators, and problem
solvers. The project approach to improvement, adopted from Juran’s approach, is an integral part of a
six sigma deployment. Typical Six Sigma projects are four to six months in duration and are selected
for their potential impact on the business. For more information on projects in six sigma see Duarte,
Montgomery, Fowler, and Konopka (2012) and Montgomery (2004a,b).
Six Sigma uses a specific five-step problem-solving approach: Define, Measure, Analyze, Improve,
and Control (DMAIC). The DMAIC approach evolved directly from the Shewhart cycle of Plan-Do-Check-Act (PDCA). It is a very effective framework for process and overall business improvement.
Harry and Schroeder (2000) discuss a version of the DMAIC approach with three additional steps: Recognize, Define, Measure, Analyze, Improve, Control, Standardize, and Integrate, sometimes denoted as RdmaicSI. Although RdmaicSI is taught and integrated into some Six Sigma programs, DMAIC is the more common and more widely recognized approach of the two. Many different organizations have adopted Six Sigma and used the DMAIC approach for
quality and business improvement. For some examples and discussion of this see Antony, Bhuller,
Kumar, Mendibil, and Montgomery (2012), Borror, Beechy, Shunk, Gish, and Montgomery (2011),
and Kumar, Antony, Madu, Montgomery, and Park (2008). The statistical engineering tools that are
typically used in DMAIC are shown in Table 1.
In recent years, two other tool sets have become identified with Six Sigma: lean systems and design
for Six Sigma (DFSS). Many organizations regularly use one or both of these approaches as an integral
part of their Six Sigma implementation.
Design for Six Sigma is an approach for taking the variability reduction and process improvement
philosophy of Six Sigma upstream from manufacturing or production into the design process, where
new products (or services or service processes) are designed and developed. Broadly speaking, DFSS
is a structured and disciplined methodology for the efficient commercialization of technology that
results in new products, services, or processes. By a product, we mean anything that is sold to a consumer for use; by a service, we mean an activity that provides value or benefit to the consumer. DFSS
spans the entire development process from the identification of customer needs to the final launch
of the new product or service. Customer input is obtained through voice of the customer (VOC) activities designed to determine what the customer really wants, to set priorities based on actual customer wants, and to determine if the business can meet those needs at a competitive price that will enable it to make a profit. VOC data are usually obtained through customer interviews, direct interaction with and observation of the customer, focus groups, surveys, and analysis of customer satisfaction data. The usual objective of this is to develop a set of critical-to-quality requirements for the product or service. Traditionally, Six Sigma and DMAIC are deployed to achieve operational excellence,
while DFSS is focused on improving business results by increasing the sales revenue generated from
new products and services and finding new applications or opportunities for existing ones. In many
cases, an important gain from DFSS is the reduction of development lead time – that is, the cycle
time to commercialize new technology and get the resulting new products to market. DFSS is directly
focused on increasing value in the organization. Many of the tools that are used in operational Six
Sigma are also used in DFSS. The DMAIC process is also applicable, although some organizations
and practitioners have slightly different approaches (DMADV, or Define, Measure, Analyze, Design,
and Verify, is a popular variation).
DFSS makes specific the recognition that every design decision is a business decision, and that the
cost, manufacturability, and performance of the product are determined during design. Once a product
is designed and released to manufacturing, it is almost impossible for the manufacturing organization
to make it better. Furthermore, overall business improvement cannot be achieved by focusing on
reducing variability in manufacturing alone (operational Six Sigma), and DFSS is required to focus
on customer requirements while simultaneously keeping process capability in mind. Some business
writers have argued that applying six sigma in a research and development environment can stifle
innovation, but the key to success here is to utilize specific tools in DFSS such as statistically designed
experiments [see Montgomery (1999, 2008, 2011)].
Lean principles are designed to eliminate waste. By waste, we mean unnecessarily long cycle times,
or waiting times between value-added work activities. Waste can also include rework (doing something
over again to eliminate defects introduced the first time) or scrap. Lean applications frequently make use
of many tools of industrial engineering and operations research. One of the most important of these is
discrete-event simulation, in which a computer model of the system is built and used to investigate and
quantify the impact of changes to the system that improve its performance. Simulation models are often
very good predictors of the performance of a new or redesigned system. Both manufacturing and service
organizations can greatly benefit by using simulation models to study the performance of their processes.
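As a toy illustration of the discrete-event simulation idea (all parameter values are invented, not from the article), the waiting times in a single-server queue can be generated with the Lindley recursion W[n+1] = max(0, W[n] + S[n] − A[n+1]), where S is a service time and A an interarrival gap:

```python
# Toy single-server queue simulation via the Lindley recursion; parameter
# values are invented for illustration.
import random

def simulate_mean_wait(n_customers, arrival_rate, service_rate, seed=42):
    """Average time in queue for an M/M/1 system, by direct simulation."""
    rng = random.Random(seed)
    wait, total_wait = 0.0, 0.0
    for _ in range(n_customers):
        total_wait += wait
        service = rng.expovariate(service_rate)   # S[n]
        gap = rng.expovariate(arrival_rate)       # A[n+1]
        wait = max(0.0, wait + service - gap)     # Lindley recursion
    return total_wait / n_customers

# Speeding up service (e.g. removing non-value-added steps) slashes queueing
# delay; queueing theory predicts mean waits of about 9.0 and 2.06 here,
# and the simulated values will be close but not exact.
slow = simulate_mean_wait(50_000, arrival_rate=0.9, service_rate=1.0)
fast = simulate_mean_wait(50_000, arrival_rate=0.9, service_rate=1.25)
print(slow, fast)
```

Even this crude model shows the lean point quantitatively: a modest reduction in service time produces a disproportionately large reduction in waiting, which is exactly the kind of what-if question discrete-event simulation answers before any change is made to the real system.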
Ideally, Six Sigma/DMAIC, DFSS, and lean tools are used simultaneously and harmoniously in an
organization to achieve high levels of process performance and significant business improvement.
Figure 2 highlights many of the important complementary aspects of these three sets of tools. Six Sigma (often combined with DFSS and lean) has been much more successful than its predecessors, notably
TQM. The project-by-project approach, the analytical focus, and the emphasis on obtaining improvement in bottom-line business results have been instrumental in obtaining management commitment
to Six Sigma. Another major component in obtaining success is driving the proper deployment of
statistical methods into the right places in the organization. The DMAIC problem-solving framework
is an important part of this.
Concluding remarks
We view the current success of Six Sigma (often integrated with lean and design for Six Sigma) as a direct descendant of the philosophy of Walter Shewhart. As part of his legacy, we recognize that some of
the more recent thought leaders in the quality field such as Deming and Juran were deeply influenced
by Shewhart during their tenure at Bell Labs and Western Electric and in their subsequent professional
practice. While six sigma alone is not a quality management system, when combined with design for
six sigma it can nicely accommodate two of the three major components of such a system, quality
control and improvement, and planning for quality.
While there have been many significant developments in the technical tools of quality control and improvement since Shewhart's time, we suggest a few areas where additional research can have significant payoff. For additional discussion and suggestions refer to Steinberg et al. (2008) and Woodall
and Montgomery (1999, 2014). Because of the increasing emphasis on shifting quality and business
improvement upstream into research and development and product/process design, designed experiments will continue to be a key area for research. Computer models play an increasingly important role in research and development activities, so research in design of experiments for computer models is critical. Many computer models have a large number of input and output variables, which can present
a challenge for existing techniques. Outputs are often in the form of functional data for which there
are few completely satisfactory analysis techniques. Stochastic computer models often produce time
series output, which presents another challenge for analysis. Many of these issues apply to physical experiments as well as computer models. Real-time experimentation with web traffic is a promising area for research. While Shewhart control charts have a long history, modifications and extensions of the
basic principles continue to be an active area of research, including extensions to multivariate data,
efficient techniques for monitoring measures of variability, and effective utilization of nonparametric
methods. Perhaps the biggest challenge to researchers in the process monitoring and control area is the
increasing availability of big data. Developing techniques that can satisfactorily monitor large, highly dynamic data streams without excessive false alarms is critically important, since there are applications in many areas such as public health and biosurveillance, food safety, environmental monitoring,
and fraud detection in electronic business transactions. While there has been substantial progress in
monitoring systems for functional data [see Noorossana, Saghaei, and Amari (2011)] this is still an
area where important problems remain to be addressed. Image monitoring is an application that has
received little attention in the literature. In all of the applications involving large data streams, data
visualization is a largely untouched area that needs significant research.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes on contributors
Douglas C. Montgomery is a Regents Professor of Industrial Engineering & Statistics. His research interest is in industrial statistics. He is the author of 16 books and over 180 technical papers. He is a recipient of the Shewhart Medal, the Brumbaugh Award, the Hunter Award, the Shewell Award, and the Ellis R. Ott Award, among others. He is an honorary member of the American Society for Quality and a fellow of the American Statistical Association, the Royal Statistical Society, and the Institute of Industrial Engineers, among others. Montgomery has consulted with more than 200 businesses, including Motorola, AT&T, Boeing, IBM and The Coca-Cola Company, work that has helped promote quality technology by linking research to real-world applications.
Connie M. Borror (see http://magazine.amstat.org/blog/2016/07/01/cborror16/): We received the sad news that Prof. Borror passed away after finishing this article. She was the first female recipient of the Shewhart Medal, and we are using this occasion to remember her as well.
References
Anderson-Cook, C. M., Lu, L., Clark, G., DeHart, S. P., Hoerl, R., Jones, B., … Wilson, A. G. (2012a). Statistical
engineering – Forming the foundations. Quality Engineering, 24, 110–132.
Anderson-Cook, C. M., Lu, L., Clark, G., DeHart, S. P., Hoerl, R., Jones, B., … Wilson, A. G. (2012b). Statistical
engineering – Roles for statisticians and the path forward. Quality Engineering, 24, 110–132.
Antony, J., Bhuller, A. S., Kumar, M., Mendibil, K., & Montgomery, D. C. (2012). Application of Six Sigma DMAIC
methodology in a transactional environment. International Journal of Quality and Reliability Management, 29, 31–53.
Borror, C. M., Beechy, T., Shunk, D., Gish, M., & Montgomery, D. C. (2011). TASER’s roadmap to quality. The Quality
Management Forum, 37, 13–18.
Duarte, B., Montgomery, D. C., Fowler, J., & Konopka, J. (2012). Deploying LSS in a global enterprise – Project
identification. International Journal of Lean Six Sigma, 3, 187–205.
Harry, M., & Schroeder, R. (2000). Six Sigma: The breakthrough management strategy revolutionizing the world’s top
corporations. New York, NY: Random House.
Kumar, M., Antony, J., Madu, C. N., Montgomery, D. C., & Park, S. H. (2008). Common myths of Six Sigma demystified.
International Journal of Quality & Reliability Management, 25, 878–895.
Montgomery, D. C. (1999). Experimental design for product and process design and development (with commentary). Journal of the Royal Statistical Society Series D (The Statistician), 48, 159–177.
Montgomery, D. C. (2004a). Improving business performance: Project-by-project. Editorial in Quality and Reliability
Engineering International (Vol. 20, p. iii).
Montgomery, D. C. (2004b). Selecting the right improvement projects. Editorial in Quality and Reliability Engineering
International (Vol. 20, pp. iii–iv).
Montgomery, D. C. (2008). Does Six Sigma stifle innovation? Editorial in Quality and Reliability Engineering International
(Vol. 24, p. 249).
Montgomery, D. C. (2011). Innovation and quality technology. Editorial in Quality and Reliability Engineering
International (Vol. 27, pp. 733–734).
Montgomery, D. C. (2012). Design and analysis of experiments (8th ed.). Hoboken, NJ: Wiley.
Montgomery, D. C. (2013). Introduction to statistical quality control (7th ed.). Hoboken, NJ: Wiley.
Montgomery, D. C., & Woodall, W. H. (2008). An overview of Six Sigma. International Statistical Review, 76, 329–346.
Montgomery, D. C., Jennings, C. L., & Pfund, M. E. (2011). Managing, controlling and improving quality. Hoboken,
NJ: Wiley.
Noorossana, R., Saghaei, A., & Amari, A. (2011). Statistical analysis of profile monitoring. Hoboken, NJ: Wiley.
Steinberg, D. M., Bisgaard, S., Doganaksoy, N., Fisher, N., Gunter, B., Hahn, G., … Wu, C. F. J. (2008). The future of
industrial statistics: A panel discussion. Technometrics, 50, 103–127.
Woodall, W. H., & Montgomery, D. C. (1999). Research issues and ideas in statistical process control. Journal of Quality
Technology, 31, 376–386.
Woodall, W. H., & Montgomery, D. C. (2014). Some current trends in the theory and application of statistical process
monitoring. Journal of Quality Technology, 46, 78–94.