No Boundaries
Six Sigma for the New Millennium: A CSSBB Guidebook, Second Edition
Kim H. Pries
Lean for Service Organizations and Offices: A Holistic Approach for Achieving Operational
Excellence and Improvements
Debashis Sarkar
Lean ISO 9001: Adding Spark to your ISO 9001 QMS and Sustainability to Your Lean Efforts
Mike Micklewright
Focus
Reduce Waste
Contain Variability
ISBN-13: 978–0-87389-795-2
No part of this book may be reproduced in any form or by any means, electronic, mechanical,
photocopying, recording, or otherwise, without the prior written permission of the publisher.
ASQ Mission: The American Society for Quality advances individual, organizational, and community
excellence worldwide through learning, quality improvement, and knowledge exchange.
Attention Bookstores, Wholesalers, Schools, and Corporations: ASQ Quality Press books, videotapes,
audiotapes, and software are available at quantity discounts with bulk purchases for business, educational,
or instructional use. For information, please contact ASQ Quality Press at 800–248–1946, or write to ASQ
Quality Press, P.O. Box 3005, Milwaukee, WI 53201-3005.
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
PART 1
Figure 3.1 OE, I relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Figure 5.1 iTLS™® model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Figure 5.2 iTLS™® seven-step process.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Figure 5.3 iTLS™® seven-step flow, tools, and techniques. . . . . . . . . . . . . . . . . . . . . . 58
Figure 6.1 Lean and Six Sigma benefits/project.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Figure 6.2 Lean–Six Sigma and TLS benefits/project. . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Figure 6.3 Lean, Six Sigma, and iTLS™® financial return/project. . . . . . . . . . . . . . . . 64
Table 6.1 Lean, Six Sigma, and iTLS™® comparison . . . . . . . . . . . . . . . . . . . . . . . . . 65
Figure 6.4 Financial contribution by methodology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Figure 7.1 A–river flow operation network—laptop computer manufacturing. . . . . . . . 68
Figure 7.2 V–river flow operation network—a pick-pack-ship warehouse. . . . . . . . . . . 69
Figure 7.3 T–river flow operation network—automobile assembly.. . . . . . . . . . . . . . . . 71
Figure 7.4 I–network river—airline meal tray assembly operations. . . . . . . . . . . . . . . . 72
PART 2
Table 1.1 Product costs when most costs are variable . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Table 1.2 Product costs when most costs are not variable. . . . . . . . . . . . . . . . . . . . . . . 92
Table 1.3 Investment decisions when most costs are variable . . . . . . . . . . . . . . . . . . . . 93
Table 1.4 Investment decisions when most costs are not variable. . . . . . . . . . . . . . . . . 93
Table 1.5 Make vs. buy decisions when most costs are variable . . . . . . . . . . . . . . . . . . 94
Table 1.6 Make vs. buy decisions when most costs are not variable. . . . . . . . . . . . . . . 94
Table 1.7 Standard costs of clutches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Figure 1.1 Kanban system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Figure 9.6 Box plot of inventory before and after iTLS implementation.. . . . . . . . . . . . 304
Table 9.1 ANOVA indicating reduction significance. . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Figure 9.7 Inventory position after implementation of the iTLS. . . . . . . . . . . . . . . . . . . 305
Figure 9.8 Goals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Figure 9.9 Spaghetti flow of the current layout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Figure 9.10 Cause-and-effect analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Figure 9.11 FMEA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
Figure 9.12 New process flow after implementation of improvements. . . . . . . . . . . . . . . 312
Figure 9.13 Process time reduction monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Figure 9.14 iTLS model applied.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Figure 9.15 The DBR model: D = drum, B = buffer, R = rope. . . . . . . . . . . . . . . . . 317
Figure 9.16 Thinking tool applied for cause-and-effect determination. . . . . . . . . . . . . . . 320
Figure 9.17 Mobilize work teams.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Figure 9.18 Buffer management.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Figure 9.19 Application of SPC, Lean, and Six Sigma tools and techniques. . . . . . . . . . 322
Figure 9.20 Buffer performance and status dashboard in real time. . . . . . . . . . . . . . . . . . 323
Figure 9.21 Metallurgical plant expansion.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Figure 9.22a, b, c, d Examples of four plants’ performances applying iTLS. . . . . . . . . . 324
A portion of the material in Part I has been repeated in Part II to provide continuity
in the flow of ideas.
Throughout the book we often use both “we” and “I” in describing our views
and experiences. “I” is used when describing an experience unique to one of us,
although we don’t normally distinguish which one. “We” is used when referring
to our collective beliefs.
If your intention is to learn how to systematically improve quality, process
reliability, and throughput while creating a waste-less enterprise, then you should
read on. This book is for you!
During the last century, three waves washed over the shores of developed and
developing countries, creating both wealth and freedom. These waves were break-
throughs in how companies managed their businesses, and all have deep roots in
the automobile industry. Despite beginning in a single industry, these management
systems eventually influenced many industries and countries. The first system was
initiated early in the last century by Henry Ford at Ford Motor Company, the second
was led by Alfred Sloan at General Motors, and the third resulted from the efforts
of Taiichi Ohno at Toyota Motor Company. Although these breakthroughs resulted
in enormous increases in productivity, growth, and prosperity, they were not the
result of improvements in technology, at least not how technology is commonly
viewed. They were changes in how these three automobile companies managed
their businesses. Today we stand on the verge of a fourth system of management,
which promises similar benefits in productivity, growth, and prosperity.
HENRY FORD
So what did Henry Ford do that was so earthshaking? We often view him as
the inventor of the assembly line, a very efficient production process with negative
overtones of subjecting people to mind-numbing repetitive tasks. Although there
is truth in both viewpoints, they miss the magnitude of Ford’s accomplishment.
Ford’s goal was extraordinarily ambitious, to say the least. He wanted to pro-
duce a reliable, dependable automobile that the common man, including those
who produced it, could afford. In the early 1900s, only the wealthy could afford
an automobile. Such a purchase was far beyond the reach of the great majority,
causing most people to live their entire lives within a few miles of their birthplaces.
Ford’s management system changed all that. Between 1909 and 1927, he
produced and sold 17 million Model Ts while driving the price down from $970
to $290, and that was before adjusting for inflation. Ford claimed that ev-
ery time he reduced the price of the Model T by $1, he created another 1000 buy-
ers. In addition to developing a more efficient method of producing automobiles,
Ford devised a way to greatly increase demand for his product. At one point, he
more than doubled the going wage of his workforce to $5 a day while reducing
the workday from nine to eight hours. This wage increase allowed his workers
to purchase the product they produced by simply saving this wage increase for
about a year. This action not only made his workers customers, but it also forced
many other companies to pay a competitive wage in order to retain their best
workers, again increasing the number of customers for Ford’s Model T.
The effect in the United States was profound and quickly spread to Europe.
Economic activity exploded, and the ensuing wealth was much more widely dis-
tributed. In rural areas, which comprised most of the United States, families were
no longer tethered to the plow and the beasts that pulled it, which sometimes
were the farmers themselves.
We know a great deal more about the results of Ford’s management system
than we do about the system itself. We are going to call it a “river system.” The
outputs of his system were Model Ts being purchased by customers, often as
they rolled off his assembly lines. The inputs were raw materials like iron ore
for metal parts, silicates for glass, and textiles for fabrics. The flow eventually
became so seamless from raw materials to finished product that it represented
a smooth, fast-flowing river system. The content or materials flowed smoothly,
steadily, and directly throughout the entire system, like small creeks feeding
larger streams and then a larger river—a river system in which there were no
meandering flows, dams, or rapids. Ford focused on expanding the breadth of his
river system, shortening the length of the various tributaries and making it flow
faster and more smoothly.
Ford poured the company’s earnings into expanding and improving his river
system. At the onset, he basically produced engines and assembled cars. Using
the profits generated, he began to produce more of the components needed in
his cars. He eventually integrated his supply lines to the point that instead of
purchasing and assembling components he was buying basic raw materials such
as iron ore, sand, and textiles and converting them into components. He was so
single-minded in the pursuit of reinvesting in his river system that other investors
sued him in order to force distribution of some of the profits.
The rate at which Ford produced cars was dictated by the number and speed
of his assembly lines; they controlled the flow rate, because for nearly 18 years
demand always exceeded supply. It made for a very efficient management sys-
tem. Everyone from Henry Ford on down knew exactly how many of each part
needed to be received, produced, and shipped each day. The financial system
was equally straightforward. Productivity was simple to measure: the number
of cars produced divided by the total daily payroll (the number of employees
times the $5 daily wage).
While expanding the breadth and increasing the flow rate of his river system, he
simultaneously reduced its length. In his River Rouge plant, it took only 28 hours
for iron ore to be converted into steel for engines, body panels, and other parts
and roll off the assembly line as a finished car.
ALFRED SLOAN
General Motors’s strategy was totally different from Ford’s. It had 10 car
lines, while Ford had only two. Ford owned the low end of the market, the Model
T, and had a strong foothold in the high end with the Lincoln. GM’s 10 lines
covered all the market segments. Its strategy provided the greater variety of body
styles, features, and prices that the market was now demanding. However, man-
aging this diversity required a totally different management system.
If the divisions were truly independent entities, actual market prices would exist.
Unfortunately, because they were part of the same corporation, prices for the
internal transfer of products could be established only by management, not the
marketplace. An answer was needed that made sense to both the corporation and
its entities. It was a formidable challenge.
GM was fortunate to have gained the services of Donaldson Brown as a top
financial manager. DuPont had purchased a large portion of GM’s shares and had
dispatched Brown, a valued executive, to ensure a good return on its investment.
Brown was instrumental in developing decision-making tools that became the
heart of a new management system, one that enabled GM, and eventually other
companies, to effectively manage decentralized systems producing a variety of
products. It allowed GM to make good economic decisions in this new world of
organizational complexity and product diversity.
Brown devised a way to calculate the cost of each product, which became the
core of what we now call “cost accounting,” an approach that soon became the
basis for management decisions at General Motors and throughout much of
the industrial world. Today managers often act as if cost accounting were one of
the original Ten Commandments, because decisions based on its use are often
considered to be holy and beyond challenge.
The key to cost-accounting management was the assignment of the major
cost components—material, labor, and overhead—to individual products. Brown
believed that the “cost of a product” could be calculated by adding up these three
components. The costs of material and labor were easily attributed to products,
because at that time these costs varied directly with the volume of production.
When production increased, more material was purchased and more labor em-
ployed. Material and labor costs increased in direct proportion to the increased
production. When production volumes declined, the opposite happened. So
despite changes in volumes, the material and labor cost per product remained
essentially constant, changing only when material prices or wage rates changed.
Even in Sloan’s day overhead expenses did not directly vary with volume,
but because they represented a very small part of the overall cost of a product
they could safely be allocated to a product without introducing significant error.
During the early years of GM’s rise, material and labor accounted for 85–90% of
the cost of a product, whereas overhead expenses were only 10–15%.
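Brown's cost build-up is simple to state as arithmetic. The sketch below uses hypothetical figures and a flat percentage markup for overhead allocation, one common early scheme; nothing here comes from GM's actual books.

```python
def product_cost(material, labor, overhead_rate):
    """Cost-accounting 'cost of a product': direct material and direct
    labor, plus overhead allocated as a fraction of those direct costs."""
    direct = material + labor
    return direct + direct * overhead_rate

# Early-century structure: direct costs dominate and overhead is a
# thin 10-15% layer (all figures hypothetical).
cost = product_cost(material=60.0, labor=25.0, overhead_rate=0.12)
# direct cost 85.00, allocated overhead 10.20, product cost 95.20
```

With overhead this small, even a crude allocation introduces little error, which is exactly why the method worked so well in Sloan's day.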
Although it’s easy to see why material costs would vary directly with pro-
duction volumes, today it isn’t so obvious why labor expense should. One must
remember that in the early part of the last century most factory workers were paid
on piecework, not on the number of hours they worked. In addition, companies
could hire and fire workers at will, which they did, often on a daily basis, in order
to keep production volumes and labor costs tightly linked.
Knowing the cost of a product enabled companies to decentralize deci-
sion making and deal with an increasing variety of products when establishing
external selling prices and internal transfer prices. A division could establish
prices that yielded acceptable returns on the required investment. If sales fell far
short of expectations on this product, the return on investment would be much
less than expected. The division then needed to either reduce the costs of making
the product, find a way to increase sales, or discontinue the product. If not, ad-
ditional capital would be deployed to products and divisions that provided more
attractive returns. This approach worked for both automobiles and components
such as engines, transmissions, and the like.
Once the cost of a product was known, divisions could make a host of sound
management decisions without specific direction from the corporation. If they
could reduce the cost of making their products, profits and ROI would increase,
and more capital would be available to grow the business. Investing in more
efficient equipment had the potential of reducing both labor and material costs.
The existing material and labor costs for a particular operation or set of opera-
tions could be compared with the expected material and labor costs from using
a new piece of equipment. The resulting savings could be used to calculate a
return on the investment needed for the new equipment. If the return was above a
certain threshold the investment made sense; if not, then other alternatives should
be considered. As a result, investment decision making could be pushed lower
and lower in organizations. Corporate managers usually retained final signoff on
major investments, but their role shifted to checking the assumptions behind the
various requests and allocating capital to the most attractive opportunities.
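The equipment-replacement logic described above reduces to a short calculation. The figures and the 20% hurdle rate below are assumptions for illustration only:

```python
def simple_return(current_unit_cost, new_unit_cost, annual_volume, investment):
    """Annual savings from the new equipment, expressed as a simple
    return on the investment it requires."""
    annual_savings = (current_unit_cost - new_unit_cost) * annual_volume
    return annual_savings / investment

# Hypothetical proposal: a $120,000 machine saves $0.40 per unit
# on 100,000 units a year, roughly a 33% simple annual return.
roi = simple_return(2.10, 1.70, 100_000, 120_000)
approve = roi >= 0.20  # assumed corporate hurdle rate
```

Because the whole decision hangs on per-unit cost deltas, a division could run it locally; corporate review only had to check the assumed volumes and savings.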
A second avenue for increasing profits was to reduce material and labor
costs by “in-sourcing” production. A “make vs. buy” analysis could be done to
determine if costs would be reduced by producing items internally instead of
purchasing them. This analysis simply compared the vendor’s price to the cost of
internal production (material, labor, and overhead). If the savings were sufficient
to justify the required change, companies should make the item instead of buying
it. Many “make-buy” decisions could be made at a local level without corporate
authorization or awareness.
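A minimal sketch of that comparison, with hypothetical figures:

```python
def make_vs_buy(vendor_price, material, labor, overhead_rate):
    """Classic full-cost test: make the item if the internal cost
    (material + labor + allocated overhead) beats the vendor's price."""
    internal_cost = (material + labor) * (1 + overhead_rate)
    return "make" if internal_cost < vendor_price else "buy"

# Hypothetical part: vendor quotes $11.00; internal full cost is
# (6.00 + 3.00) * 1.12 = 10.08, so the rule says make it in-house.
decision = make_vs_buy(vendor_price=11.00, material=6.00,
                       labor=3.00, overhead_rate=0.12)
```

When direct costs were 85-90% of the total, this comparison was close enough to the truth to be trusted.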
A third way to increase profits and ROI was to increase the efficiency of
the workforce so that more products could be produced by the same number of
people. Time study and industrial engineering techniques were widely employed
to determine the most efficient methods and time standards for each operation.
Workers were measured and held accountable for meeting these standards. More
efficient methods and tighter labor standards resulted in lower costs, an admi-
rable goal. Unfortunately, this process created divisiveness between management
and workers. Workers were naturally reluctant to find or divulge ways to improve
an activity, because the result would be tighter standards for measuring their
performance without any direct benefit to them.
In essence, the ability to calculate product cost gave rise to a series of proce-
dures for establishing transfer and market prices, evaluating investment oppor-
tunities, deciding whether to make or buy items, and driving down labor costs.
that essentially made labor a fixed rather than variable cost. Workers not needed
for the current level of production were allowed to be idle or participate in make-
work programs (like painting the fire hydrants in their community) rather than
being furloughed. During these times, they were paid approximately 90% of
wages they would have earned if they were producing products.
Success in using cost-accounting techniques also caused material and labor
costs to decline as a percentage of a product’s cost. Investment in more efficient
equipment and making, rather than buying, more products reduced the propor-
tion of material and labor in a product’s cost. At the same time, overhead became
an increasingly larger portion of the cost of a product.
The automobile industry had shifted from selling a Model T, whose design
barely changed for 18 years, to annual product redesigns. The engineering, fi-
nancial, and marketing staffs needed to develop, support, and sell this increased
variety of products grew at a much faster pace than material and labor costs.
Fixed costs, which were a mere 10–15% of General Motors’s product costs in the
late 1920s, skyrocketed to 50–60% near the end of the century.
At that point, instead of 85–90% of a product’s cost varying with volume,
it was only 30–40%. This shift greatly undermined the soundness of the tech-
niques used for pricing, investing, and deciding whether to purchase or make
components, yet these cost-accounting techniques continued to be widely used
and began to produce results that had negative, rather than positive, economic
consequences. For specific examples, see Chapter 1, Part II.
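The distortion is easy to reproduce with the make-vs-buy comparison the text describes. The figures below are hypothetical; the mechanism, not the numbers, is the point.

```python
# Same style of part as a make-vs-buy candidate, but with a late-century
# cost structure: overhead allocated at 150% of direct cost.
vendor_price = 11.00
material, labor = 6.00, 3.00

full_cost = (material + labor) * 2.50  # 22.50, so the full-cost rule says "buy"

# If that overhead is fixed, however, buying avoids only the $9.00 of
# truly variable cost while adding the $11.00 vendor price; the $13.50
# of allocated overhead is still incurred. Total cost per unit rises:
change_per_unit = vendor_price - (material + labor)  # +2.00 per unit
```

The rule that once correctly said "make" now says "buy" a part that is cheaper to make, because the allocation treats fixed overhead as if it disappeared with the work.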
In addition, the use of economic order quantity (EOQ) thinking often resulted in large inventories that
needed to be scrapped or sold at reduced prices when overproduction and model
changes created obsolete parts and excess quantities of the prior years’ models.
The negative effects of dedicating assembly lines to single models highlighted
this problem. When actual demand exceeded the capacity of an assembly line,
there would be a shortage of cars consumers wanted to purchase, resulting in
lost sales. An even more common problem occurred when an assembly line
overproduced models that were not selling. Because labor efficiency was a very
important measure and workers could not be furloughed to save costs, plants
often produced cars well in advance of actual sales. Unfortunately, in order to
make room for the annual new model introduction, many of these cars needed
to be sold at a discount. Over time, consumers noticed this pattern, causing
many of them to defer purchasing a new car until dealers began offering year-
end discounts, further magnifying the problem.
The final and possibly most devastating impact of cost-accounting thinking
was how it valued inventories. It employed a value-added concept that assumed
that as raw materials are converted into finished products, the labor and overhead
associated with these activities should be added to the raw material cost to obtain
the cost of the product. These product costs were then used to value invento-
ries. When the level of finished goods increases, a portion of the increase occurs
because of the labor and overhead in the product cost. During that reporting
period, the labor and overhead costs that increase finished goods inventory are
excluded from the calculation of profits for that period.
A company’s profits can actually increase simply because the level of finished
goods increases. The opposite happens when the inventory is reduced. The labor
and overhead that had previously been capitalized are now counted as additional
expense, lowering the net profit for that period. When managers are rewarded
based on net profits, the temptation to inflate profits by increasing inventories
is often unavoidable. This distortion was not significant when directly variable
costs were a large portion of a product’s cost. However, when they became a
much smaller part, it created a serious problem.
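The inventory-valuation effect can be shown with a small absorption-costing sketch. All figures are hypothetical; the two periods below differ only in how many units were produced.

```python
def reported_profit(units_sold, units_made, price, material_per_unit,
                    labor_spent, overhead_spent):
    """Absorption-costing sketch: labor and overhead are spread across the
    units made, so costs attached to unsold units are capitalized into
    inventory instead of reducing this period's profit."""
    unit_cost = material_per_unit + (labor_spent + overhead_spent) / units_made
    return units_sold * price - units_sold * unit_cost

# Two periods with identical sales; the second simply builds 25 extra
# units of finished-goods inventory.
steady = reported_profit(units_sold=100, units_made=100, price=50.0,
                         material_per_unit=10.0, labor_spent=1000.0,
                         overhead_spent=2000.0)
built = reported_profit(units_sold=100, units_made=125, price=50.0,
                        material_per_unit=10.0, labor_spent=1250.0,
                        overhead_spent=2000.0)
# 'built' reports the higher profit purely because inventory grew.
```

Nothing about customer demand changed between the two scenarios, yet the second manager looks more profitable, which is precisely the temptation the text describes.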
The most devastating example I am aware of occurred as the result of a
highly successful experiment. This particular automobile company tested a dif-
ferent way of distributing cars to dealers. In one state, it shipped a portion of the
cars that dealers ordered to a central distribution point rather than directly to the
dealers. The distribution center had some capability to modify cars to fit the spe-
cific needs of the buyer, such as changing the seats, audio systems, and the like.
As a result, the dealers maintained smaller inventories on their lots, but with
enough variety that consumers could see and drive the various models. If the con-
sumer wanted to purchase a car with specific features not currently available on a
dealer’s lot, the dealer would check the central inventory to see whether the exact car
the customer wanted could be made available in a day or two. If so, a sale was made.
The experiment was a great success; inventories and shipping costs were
significantly reduced. Most important, sales rose because more consumers could
quickly get the exact car they wanted. Despite the success, however, the car com-
pany decided not to expand it to other models. So why didn’t the car company
make it standard practice for all models? The simple reason was that such a
system would significantly reduce the number of cars in the field, which meant a
one-time drop in sales to dealers, even though sales to consumers would increase.
This inventory reduction would have caused a temporary, but significant, drop in
the company’s profits. Fearful of Wall Street’s reaction, management decided not
to implement the system company wide.
Unfortunately, the inertia of a long-accepted and once-successful way of making
decisions tended to overwhelm the facts; old habits die slowly. A marvelous
example of this inertia occurred in the English Navy when it dominated the seas.
Scurvy was a debilitating disease and a common problem among sailors, pi-
rates, and others who spent extended periods at sea. In 1536, a French explorer,
Jacques Cartier, learned from natives along the St. Lawrence River that he could
save the lives of men dying from scurvy by using a citrus tonic. Yet not until
1795 did the British Royal Navy adopt a citrus-juice ration, which incidentally
is why British sailors are still referred to as "Limeys." This inertia persisted
for more than 250 years and resulted in an enormous amount of misery.
The change in the elements that comprised a product’s cost eventually be-
came so widespread that they made the use of cost-accounting management not
only obsolete, but actually destructive. The good economic decisions that re-
sulted when most of a product’s cost varied with volume became bad economic
decisions when much less of the cost varied with volume. In addition, the focus
on optimizing local decisions resulted in negative effects on the total system.
Ford’s idea of a smooth, fast-flowing river system dissipated as supply chains
became more clogged with inventories and the results of local-cost thinking.
Inertia has blocked change to a decision-making system more suitable to the
current environment and has devastated many companies. It’s similar to the rise
and fall of Ford’s river system. The seeds of its downfall grew as a result of its
success, and they still infect decision making in many companies today.
TAIICHI OHNO
Sloan’s management system, like Ford’s, was developed in the U.S. auto-
motive industry and eventually spread to other industries and countries, greatly
benefiting those who adopted it. The next wave of improvement, however, origi-
nated in Japan’s automobile industry. Taiichi Ohno’s management system, often
referred to as the Toyota Production System (TPS), had a huge impact there and
reverberated throughout other countries. Its acceptance in the United States and
Europe, however, has been far from universal, both because of inertia and a
lack of understanding as to why the Toyota Production System was so effective.
This misunderstanding was in part due to Ohno’s successful efforts to mislead
and confuse non-Japanese companies.
After the Second World War, the bicycle was the standard mode of transpor-
tation in Japan and only the wealthy few could afford an automobile. Ohno’s
goal was not so different from Ford’s. He wanted to help Japan become a modern
industrial nation by producing an automobile that could be purchased even by the
workers who produced it.
Why was Ohno’s Toyota Production System a leap forward from Ford’s? Es-
sentially they were both highly efficient “river systems” with the same types of
inputs and outputs. However, Ohno built a river system that worked for multiple
products with uncertain demand. Unlike Ford, Toyota produced several models
in a variety of colors and with a number of options.
Creating an efficient river system for such an environment was much more
difficult. Ford had developed a highly integrated supply system with dedicated
manufacturing processes that produced the same quantity of each item day in and
day out. Volume increased only when new assembly lines were added or when
the productivity of the existing lines increased. The assembly lines and most
machines were dedicated to make the same item, and machine changeovers were
rare, which greatly simplified synchronizing the flow of materials.
Ohno had no such luxury. While the demand for the Model T exceeded Ford’s
capacity for 18 years, demand for the various Toyota models was uncertain and
subject to frequent change. Forecasting sales of the various Toyota models was
compounded by the annual introduction of new models by competitors. In addi-
tion, Ohno had to produce different models that were assembled from a variety
of parts. Since he couldn't dedicate equipment, he had to change over equipment
frequently in order to produce the needed parts.
Ohno faced another obstacle Ford did not have to deal with: the widespread
acceptance of cost accounting for managing complex production organizations.
Ohno told me that cost-accounting thinking was the biggest obstacle he had to
overcome in developing his system. When he eventually surmounted this ob-
stacle, he gained a competitive edge, because many of his competitors still
have not overcome this hurdle.
Ohno could have dealt with these obstacles in the same way as other car
companies—by producing large batches of the various parts and models to avoid
changeovers and obtain local production efficiencies. The result would have been
dams (piles of inventories), rapids (shortages that required expediting), and me-
andering flows in his river system. It also would have required maintaining a
large supply of cars in the showrooms to buffer the assembly lines from changing
consumer tastes.
It was clear from my discussions with Ohno and from his writings that he re-
jected these options. He spoke clearly and forcefully about developing a smooth-
flowing river system that closely linked actual sales to the assembly of cars,
production, and receipt of components. He believed it was the only way that
Toyota and Japan could compete with entrenched, well-financed competitors in
the United States and Europe. He knew that Japan initially had some advantage
due to low labor costs, but that this competitive edge would eventually disappear
as Japan became a modern industrial nation. He believed he had to devise a more
efficient management system than his competitors in order to reach his goal. He
spent over 40 years developing and refining such a system. The results of his ef-
forts speak for themselves.
In addition to the three-headed hurdle of uncertain demand, machine change-
overs, and cost-accounting thinking, Ohno had to rely on vendors for many com-
ponents. As a result, he had little direct control over a large part of his river
system. In order to have a fast, smooth-flowing river system like Ford’s, the as-
sembly of cars, production of components, and receipt of materials needed to be
tightly linked to actual sales. This required flexible rather than dedicated assem-
bly lines—lines that could produce several models. It also meant that the machine
shops, which produced the major components such as engines and transmissions,
that most deeply or most frequently threatened depletion of the buffer could
be traced to their causes. The sources of these disruptions would be the prime
candidates for improvement. Essentially, the degree of buffer penetration
would provide a Pareto list of improvement opportunities. Using this ap-
proach, throughput is increased, less inventory is required, and the causes
of disruptions are prioritized. This laser-like approach enables a company to
much more rapidly develop a smooth, fast-flowing river system without the
loss of valuable throughput.
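The bookkeeping implied here, logging each buffer penetration by cause and ranking the totals, takes only a few lines. The cause names and the penetration measure below are assumptions for illustration:

```python
from collections import defaultdict

def pareto_of_causes(penetrations):
    """Rank disruption causes by the total buffer depth they consumed.
    Each record is (cause, fraction_of_buffer_consumed)."""
    totals = defaultdict(float)
    for cause, depth in penetrations:
        totals[cause] += depth
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical month of buffer-penetration records.
log = [("paint-line breakdown", 0.60), ("late castings", 0.30),
       ("paint-line breakdown", 0.45), ("operator absence", 0.10),
       ("late castings", 0.25)]
ranked = pareto_of_causes(log)
# ranked[0] names the prime improvement candidate.
```

Sorting by cumulative depth rather than frequency alone reflects the text's emphasis: the disruptions that most deeply or most often threaten the buffer come first.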
We knew that Ohno agreed when he exclaimed, “If I had thought about it that
way I could have developed my system in less than half the time.” He said that he
didn’t have a way to know which disruptions were most important, so he worked
on eliminating each disruption as it appeared.
The obvious question is whether there is a seed of destruction in Ohno’s sys-
tem that will cause its demise. We think the short answer is no. However, we do
believe that by building on the lessons of Ford, Sloan, and Ohno, a much superior
and more widely applicable management system can be constructed. The major
drawbacks to Ohno’s system are twofold. First, it takes a long time to embed it in
the DNA of a company and achieve a true competitive edge. Second, it has been
used primarily in higher volume, stable production environments. Unfortunately,
many companies do not have this ideal environment and certainly few can wait
many years to achieve similar benefits.
Lean, in contrast, is replete with tools for reducing the seven wastes (muda)
but lacks a global focusing mechanism for prioritizing when and where to attack
these wastes. Effort to reduce some wastes can itself be a waste.
Six Sigma’s strength lies in its statistical tools for reducing variations in pro-
cesses. Like Lean, it is short on a focusing mechanism for prioritizing actions for
reducing these variations and removing the most important wastes.
Because of the similarities, overlaps, and shortcomings of these three methodologies, it has not been a simple process to develop an iTLS system capable
of creating a fourth wave of prosperity, productivity, and growth. We believe
that the unique combination we have developed and tested has the potential to
be such a system. We call this unique combination iTLS™®. From this point
forward, we use the terms TLS and iTLS™® interchangeably.
So what is required to implement this new management system? It consists
of three major elements. The first is the development of an overall strategy to
manage our river systems. We refer to such a strategy as a Throughput Operating
Strategy (TOS), because the primary emphasis should be on continually growing
throughput (revenue) and profitability. Every organization produces and delivers
its products or services through a network of activities. When these networks are
mapped so that the flows of activities are displayed vertically, they take on one of
four shapes, or a combination of these four shapes. The fact that there appear to be only four shapes greatly simplifies the development of a Throughput Operating Strategy. These four shapes roughly resemble an A, a V, an I, or a T.
In an A network, a variety of inputs, or materials, typically converge to form
a single or small number of end items. Organizations that fabricate and assemble
products are typically A structures. Ford, Sloan, and Ohno all dealt with A-shaped
river systems, although they varied greatly in complexity.
A V-shaped river system may initially look like an upside down A, but it has
entirely different characteristics. Instead of the flow of items converging to form a
small number of end items, they diverge to form a large, sometimes a very large,
number of end items. Examples of V structures include oil refineries, semiprocess
industries (e.g., steel, aluminum, paper), and operations that convert animals into
a wide variety of food products. Many distribution systems, reverse logistics, and
repair operations also take on the shape of a V.
The T structure is probably the least common of the four river systems. It’s
essentially a network in which a number of items flow to an assembly point
where they can be combined in myriad ways to form a much larger number of
end items. An excellent example of a T network is an automotive company, where a few thousand parts can be combined to produce millions of unique automobiles, especially when color, fabric, audio systems, and so forth are considered.
are perceived bottlenecks to the flow. Even worse are bottlenecks that seem to
float or shift within the river system. Our experience is that very few, if any, orga-
nizations have real production bottlenecks that cannot quickly be broken with the
right focus and effort. We have found that a minimum of 25% more can be pro-
duced in every organization by applying proven tools to perceived bottlenecks.
Archimedes, the famous Greek mathematician and inventor, supposedly claimed, “If I had a long enough lever I could move the whole world.” Expanding the capacity of our
organizations by breaking bottlenecks usually means that the organization can
produce additional products at essentially the cost of the purchased materials—a
very, very long lever.
It is estimated that these three methodologies (TOC, Lean, and Six Sigma)
comprise more than 90% of current improvement efforts. Unfortunately, the
practitioners of these methodologies often spend considerable time touting their
methodologies and defending their approaches rather than trying to determine if
and how they could be combined into a much superior system. The good news is
that the environment is beginning to change.
My research and practice led me to conclude that a successful TLS methodology should use TOC to determine where to focus improvement efforts—to set the
priorities. Based on these priorities, Lean, with its array of tools for reducing
waste, is best used to identify and eliminate the causes of these wastes. Then, to
stabilize the processes and achieve the desired statistical control for sustainabil-
ity, we employ Six Sigma tools. The combination of Lean and Six Sigma focused
by TOC, which I call iTLS, provides a proven methodology for smoothing and
speeding the flow of work in our river systems.
At the first Continuous Productivity Improvement (CPI) Conference in 2006,
sponsored by Weber University, I presented the results of an extensive and rigor-
ously conducted 2.5-year test of a unique comparison of the iTLS, Lean, and Six
Sigma methodologies. The results of this experiment demonstrated that iTLS
yielded four times more benefits than projects using either Lean or Six Sigma.
Even more telling was the fact that iTLS projects were responsible for 80% of
the financial benefits even though they were used in less than 30% of the plants.
More than 211 practitioners in 21 plants conducted 105 projects, which demon-
strates the validity of the results.
I made a similar presentation to the American Production Inventory Control
Society (APICS) during the same year and in articles in APICS’s magazine de-
scribing the enormous effect on profitability, agility, and quality. Frankly, I was
overwhelmed by the interest and excitement created among the CPI practitioners.
In 2007, I was a keynote speaker at Goldratt’s TOC-ICO conference in Ne-
vada to introduce iTLS™® to TOC practitioners and report on the increased
benefits that result from the interaction effects when TOC, Lean, and Six Sigma
are combined in a logical sequence. iTLS™® was warmly embraced by the TOC
practitioners, including Eli Goldratt, its developer.
Since 2003, iTLS™® has been used by more than 4000 practitioners in more
than 70 plants in the United States, Canada, United Kingdom, Germany, France,
Finland, Israel, Mexico, Brazil, Ireland, Spain, Hungary, China, India, South Ko-
rea, and Singapore. As our iTLS process has become refined and better under-
stood, the results have improved by more than 50% over the initial test.
One day in the fall of 2009, I received a call from two Brazilian consultants,
Celso Calia and Fabiano Almeida, partners with Goldratt Associados, de Brasil,
who specialize in implementing continuous improvement processes in Brazil’s
heavy industries. Historically, they had focused on using TOC. Celso had read
my articles and other materials published by APICS and wanted to arrange a
meeting. I agreed and at a meeting in Dallas, Texas, he gave a PowerPoint pre-
sentation that showed how they had successfully implemented iTLS™® and
achieved very significant results. He explained that using TOC had allowed them
to make significant positive changes, but variability in the processes was killing
them, and they could not understand why. They seemed to be constantly chas-
ing ghosts and floating bottlenecks. With considerable reservations they decided,
at least temporarily, to shed their current paradigm and apply what they had
learned from my articles and presentations. They were pleasantly surprised that
they were able to not only achieve significant process improvements, but were
also able to systematically control process variability from the onset. Now they
wanted more . . . they wanted to learn more about the nuts and bolts of the iTLS
process and its details, and asked if I could help them. They were very proud of
their accomplishments and so was I . . . like a proud grandparent!
Having agreement on a 40,000-foot view of how these three methodologies
should be combined is a major step forward. However, given the scope of the
various tools and techniques and the fact that in some cases they overlap, de-
veloping a ground-level working process was not a simple task. It was similar
to trying to combine the best racing engine, the best transmission, and the best
suspension system in order to produce a superior race car.
In summary, creating a smooth, fast-flowing river system requires three ele-
ments. First, we need to understand the shape of the network(s) by which we
deliver our products or services and develop an appropriate TOS for manag-
ing the flows. Ideally, the TOS will encompass procurement, production, and
distribution so that we can more closely connect all these activities with the
marketplace.
Second, we need a robust process for prioritizing and removing the disruptions that impede rapid and smooth flow so that the time needed to turn purchased materials into products shrinks.
Third, we need a combination of courage and consensus to make the transi-
tion from managing with a local focus to a more global one. The broader and
deeper the management consensus, the less courage is needed and vice versa.
Today, we oscillate between local and global actions. Almost everyone who
has worked in an organization that produces and delivers products and services
has experienced the hockey-stick impact of the end of the month (sometimes
it’s the end of the quarter); during the first portion of the period we focus on lo-
cal performance measures (efficiencies, limiting overtime, long production runs,
etc.). As we near the end of the period, the emphasis turns sharply to the global
focus of meeting shipping goals. We then take the opposite actions in order to get
as much produced and shipped as possible. Once the new period starts, we im-
mediately revert to the old ways of operating. This oscillation of effort continues
month in and month out, considerably disrupting the flow of our river systems.
The Fourth Wave Management System, iTLS, provides a consistent method
for running organizations. Educating employees in the new TOS helps refine
it and engenders both understanding and acceptance, providing a solid consen-
sus for change. When measurements are closely aligned with a more global ap-
proach, they reinforce the desired behaviors.
The timetable for adoption of a Fourth Wave Management System depends
almost solely on overcoming inertia, because the changes needed are mostly
in policies and mind-sets, not physical changes. Hopefully, companies will
move much more quickly than the British did in using citrus drinks to eliminate
scurvy.
Great organizations with excellent performance still need to stay constantly on top of the Voice Of the Customer (VOC) and the Voice Of the Processes (VOP), particularly when it comes to the temptation to favor cost savings ahead of safety and quality.
The slightest slip in those dimensions can cause catastrophes that damage the organization’s reputation, goodwill, and market health. Past performance cannot guarantee present and future health without a commitment to the constant improvements that strengthen the fundamentals behind the organization’s successes.
Let’s take a moment to reflect on the devastating situation in 2010, when manufacturing defects forced Toyota to recall more than 8,000,000 of its vehicles. The recall included many models, among them the legendary Prius hybrid. Aside from losing nearly one billion dollars a month to production shutdowns, on top of the mammoth cost of recalling millions of vehicles, Toyota saw its pristine consumer confidence severely bruised. Its stock lost double digits in a matter of only a few weeks as the defects became public knowledge. Toyota, the icon of auto-making quality and technology, became a target for stand-up comedians! Its leading market position in the auto industry came under question and began sliding backwards. This became an opportunity for other automakers to penetrate Toyota’s market share, previously a protected fortress.
What do you think happened to Toyota? Why didn’t the famed Toyota Production System (TPS) protect this once-fine organization? It seems that Toyota lost focus on what was important, allowing variability in reliability to sneak into its multinational operations. The TPS has been the cornerstone of Toyota’s impressive quality and reliability record, but as discussed earlier, the Lean system has its limitations.
Auto industry failures were not limited to Toyota. The National Highway
Traffic Safety Administration reported 492 recalls for the same year, 2010, in-
volving more than 16.4 million vehicles. Among them were: GM with 1,300,000
vehicles, Nissan with 539,864 cars, Honda with 410,000 Odyssey minivans, and
Ford with 18,000 Fusion and Mercury hybrids.
We believe that our proposed application, iTLS™®, provides the needed
long-term protection for organizations’ profitability, reliability, and agility.
Frequently, well-performing organizations assume that the challenges for
achieving excellence have been met. These organizations often don’t invest in
the additional resources needed to sustain their competitive edge. That is why
many companies have up-and-down performances. When performance is poor, they spend energy and resources to improve things; then, as the metrics indicate that the necessary improvements have been achieved, the organization relaxes and risks relinquishing its sustainability efforts. Over time, performance plummets again, and the vicious cycle repeats.
We believe that by properly implementing iTLS, significant bottom-line ben-
efits will appear within a couple of months and that within one to two years many
companies will have more than doubled their profits. We know that making such
claims is extremely dangerous, not because they are not possible, but because of
the reaction of you, the reader. You may be inclined to immediately put down this
book and dismiss us. It’s a totally natural reaction, because these claims are so far
beyond most people’s personal experience and intuition that they assume there
is no chance of their being valid. Smart, dedicated people in many companies
have worked very hard and for a long time in order to squeeze out much smaller
gains. If our claims are true, it suggests that we have been either really stupid
or that some magic bullet has been invented. Of course, neither of these is true.
We simply ask you to read on to understand what’s involved in this Fourth Wave
Management System. If it makes sense to you, try it. If it works, extend it and
share it with others to help prevent future economic and industrial disasters like the one we are currently experiencing.
Improvement techniques have come and gone. Today’s favorites are Lean,
Six Sigma, and TOC. They all have reported notable successes, but, like earlier
techniques, some of these improvements have not reached the bottom line. In
addition, they are all facing some eroding support. The question often posed is
“Which one is the best method?” We believe that this is the wrong question and
that a better one is, “How can we best combine the strengths of these techniques?”
In order to answer this question, we need to better understand the strengths and
weaknesses of each methodology and how they can contribute to a smooth, fast-
flowing river system that produces tangible bottom-line improvements.
squeezing more out of it. Typically there are only three types of physical constraint—lack of capacity, lack of sales, or lack of materials, with the last being the rarest. In reality,
very few companies have real physical constraints. The policies and practices
that determine the behaviors of the organization often manifest themselves as
physical constraints. As a result, most companies first focus on the most obvious
constraint, capacity. Hence, the first two steps are:
1. Identify the constraint.
2. Exploit the constraint.
The third step is to align activities at the other resources so they are consistent
with how the constraint is functioning. If the constraint is producing at a certain
level, it doesn’t make sense for preceding or following operations to produce at
higher or lower levels. The need to synchronize other activities is the third step:
3. Subordinate everything else to the above decisions.
In order to continue to generate more throughput, a company will eventually
need more of the constraint. If the constraint is internal, the company might need
to add another shift or purchase additional constraint capacity. If the constraint is
external, the company needs to increase demand for its products. TOC refers to
this need in the fourth step:
4. Elevate the constraint.
If more and more capacity is added at the constraint or if sales increase, the con-
straint of the system may shift. Therefore, we have the fifth step:
5. Go back to step 1.
This step is accompanied with the warning “Do not let inertia become the con-
straint of the system.” Be aware; things change, and when they do the constraint
may move, requiring different actions.
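The five steps above can be sketched in code. This is only a caricature: the resource names, loads, and capacities are invented, and "exploit" is reduced to recovering a fixed fraction of wasted constraint time.

```python
# Hypothetical weekly load (hours of work routed to each resource) versus
# available capacity (hours); all figures are invented.
resources = {"stamping": (38, 40), "machining": (44, 40), "assembly": (35, 40)}

def identify_constraint(res):
    """Step 1: the resource with the highest load-to-capacity ratio."""
    return max(res, key=lambda r: res[r][0] / res[r][1])

constraint = identify_constraint(resources)   # "machining", loaded at 110%

# Step 2 (exploit): recover wasted constraint hours, e.g., by cutting setups.
load, cap = resources[constraint]
effective_cap = cap * 1.2      # assume 20% of constraint time is recoverable

# Step 3 (subordinate): release work everywhere at the constraint's pace.
pace = min(load, effective_cap)

# Step 4 (elevate): if demand still exceeds capacity, add shifts or machines.
if load > effective_cap:
    effective_cap += 8         # hypothetical extra weekend shift

# Step 5: go back to step 1 -- the constraint may have moved.
resources[constraint] = (load, effective_cap)
print(identify_constraint(resources))   # "stamping": the constraint shifted
```

Note how the final step re-runs the identification: once machining is exploited, stamping becomes the most heavily loaded resource, illustrating the warning about inertia.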
These five steps were developed in the early 1980s and have been widely em-
ployed with considerable success. We believe that three issues have limited even
more widespread use and additional benefits. First, while TOC clearly points to
where to focus, it is very short on providing tools for fixing the identified prob-
lems. As an example, TOC may highlight that a considerable amount of a con-
straint capacity is wasted in setups or that the constraint is causing considerable
scrap and rework. Unfortunately, TOC doesn’t provide tools on how to reduce
setup times (Lean does), nor does it provide tools for reducing the variations in
the output of the constraint (Six Sigma does).
A second obstacle is the suggested continual iteration of identifying the constraint, breaking it, resynchronizing all the other activities, and then repeating the process. Companies have found this degree of continual change destabilizing and often stop after the third step. Incidentally, the later development of a TOS and the establishment of a single control point for each of the four network shapes resolve this problem (see Chapter 7).
The third and largest obstacle to the wider use of TOC is the conflict it exposes
with prevailing cost-accounting measurement systems. These systems reward fewer setups rather than the more frequent setups needed for faster, smoother flow. Another common conflict involves local efficiency measurements. Cost-accounting measurements
reward high local efficiencies even when the products produced cannot be pro-
cessed by the constraint or are not needed by the market. The conflicts with cost
accounting are not limited to how internal operations are measured. TOC’s view of
a product’s “octane,” and therefore its desirability, is diametrically opposed to cost
accounting’s concept of product cost and product margin. Failure to resolve these
conflicts with local measurements limits the degree to which TOC can be used to
develop a fast-flowing river system geared to generating more throughput.
Thinking Processes
The five-step focusing process is very effective in dealing with physical con-
straints. When the real constraint is a policy or practice, it may not be obvious. In
these instances, TOC’s thinking processes can be very helpful:
• What to change—what core problem bedevils the organization?
• What to change to—what actions will provide a breakthrough solution
that both eliminates the core problem and results in other benefits?
• How to create the change—what actions are needed to create the
desired environment, and how can they best be executed?
The first two aspects of the TOC thinking processes are similar to Lean’s kaizen approach, except that they are generally applied to more global issues. These thinking
processes are essentially logic trees and are best used by people who have intu-
ition about the subject matter being analyzed. TOC provides six logic processes
to help answer these three questions.
• A current reality tree (CRT) is used to identify a core problem.
It begins by listing several undesirable effects (UDEs) about the
current situation and first looks for the cause(s) of each UDE. These
causes are then viewed as effects, and the users work to determine
the cause of these effects. The process is continued until a single
cause or core problem is defined that is ultimately responsible for
all the UDEs.
• A conflict diagram is often used to find a breakthrough solution to a
core problem, although in many cases a solution may be obvious. The
LEAN
Lean was developed by a group of Massachusetts Institute of Technology
professors to provide a process for implementing Ohno’s Toyota Production
System. It has been well publicized and used by a wide range of companies.
Lean as generally practiced today differs somewhat from our understanding of
Ohno’s original intent, which was to improve internal activities so that Toyota
could sell more. This focus on directly connecting internal improvements to
more sales is sometimes lost in Lean implementations. Essentially, Lean is an all-
encompassing process that requires involvement of all functions of a company.
This highly disciplined approach takes considerable time, effort, and persistence
to implement—remember Ohno’s 40-year effort and use of a gun. It has proven
to be most successful in high-volume discrete manufacturing environments.
Lean focuses on improving processes and consists of discipline, daily practice,
and tools. It strongly emphasizes developing and growing a culture through re-
petitive practice.
Lean efforts are heavily focused on reducing the following seven wastes
(muda):
1. Transport
2. Waiting
3. Overproduction
4. Defects
5. Inventory
6. Motion
7. Extra processing
The process of implementing Lean involves the following six steps:
1. Specify the value—Lean attempts to create a waste-less environment
by first categorizing each activity as either:
• Value added (retain)
• Business value added (may be eliminated later)
• Non-value added (eliminate now)
2. Identify the value stream—Map the flows needed to design, order,
and make each product beginning and ending with the customer.
3. Make the value stream flow without interruptions—Herein lies the
heart of Ohno’s river system, the rapid, smooth-flowing of production
to the customer. In order to achieve such a system, it is essential
that variations be greatly reduced and where possible eliminated.
Lean focuses on continually exposing, highlighting, and eliminating
such disruptions. Lean suggests balancing takt times (the time to
perform an operation) so that there is no interruption in the flow of
work. It recognizes reality and establishes takt times slightly below
the capacity of each operation so that there is a capability to catch
up when small disruptions occur. This extra or protective capacity is
analogous to TOC’s use of time buffers to protect constraints from
disruptions. Lean spreads this protective capacity evenly throughout
the system, while TOC focuses it at key operations.
4. Let the customer pull value from the producer—Again, Ohno’s
concept of producing to actual customer sales is evident. The
SIX SIGMA
Six Sigma is a rigorous and disciplined methodology that uses data and sta-
tistical analysis to measure and improve a company’s operational performance.
It focuses on identifying and eliminating “defects” in production and service-
related processes. On a statistical basis, 3.4 defects per million opportunities are
considered Six Sigma, a performance bar that is significantly above what many
industries have achieved.
In the early 1980s with Chairman Bob Galvin at the helm, Motorola decided
that the traditional quality levels (measuring defects in thousands of opportu-
nities) didn’t provide sufficient granularity. Motorola’s engineers decided to
measure defects per million opportunities. Bill Smith, one of the engineers, is
credited with coining the term Six Sigma.
Motorola developed this new standard and created a methodology to cause
the needed cultural change. Over time, Six Sigma has evolved from a metric to a
methodology to a management system.
for many of us), teachers assumed that there would be a normal variation in test
scores and graded accordingly. The result of “curving” test scores was:
A—A few very bright students
B—A small number who scored well
C—The majority
D—A small number who performed poorly
F—A few who failed
The assumption was that the normal curve explained the inherent variation in
every process or population. Anything that varied beyond three standard deviations was an anomaly or outlier, indicating a totally unexpected event.
When applied to manufacturing, the existence of such outliers meant that the
process had gotten out of control and needed to be adjusted.
The idea that processes should be kept within six rather than three standard
deviations dramatically raised the bar of when a process was out of control. The
result was an intense effort to improve processes so that only a very few defects
existed (those that varied more than six sigma). The change from a three-sigma
to a six-sigma perspective resulted in a dramatic improvement in the quality of
many products. We’ve often wondered how it would impact the quality of gradu-
ates if it were applied in the same fashion in our schools.
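The jump from three- to six-sigma expectations is easiest to see numerically. The short sketch below assumes the conventional 1.5-sigma long-term shift that Six Sigma practitioners build into the metric; with that convention, a six-sigma process yields the familiar 3.4 defects per million opportunities.

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a one-sided normal tail,
    allowing for the conventional 1.5-sigma long-term drift."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))   # P(Z > z)
    return tail * 1_000_000

print(f"3-sigma: {dpmo(3):,.0f} DPMO")   # roughly 66,800
print(f"6-sigma: {dpmo(6):,.1f} DPMO")   # roughly 3.4
```

A three-sigma process tolerates tens of thousands of defects per million opportunities; a six-sigma process tolerates only a handful, which is why the shift in perspective drove such intense improvement effort.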
Step 1—Define
The champion identifies and/or validates the improvement opportunity, de-
velops the business processes, defines critical customer requirements, and selects
the project team leaders and members. The deliverables from this phase include
team charters, including a mission statement and team objectives, action plans,
process maps, quick-win opportunities, critical customer requirements, and a
prepared team.
Step 2—Measure
The objectives of this phase are to identify the critical measures that will
determine the success of the project in meeting critical customer requirements.
In addition, the project team will begin developing a methodology to collect the
data needed to measure process performance. This methodology will be used to
establish baseline sigma levels for the processes.
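As a sketch of how a baseline sigma level might be backed out of collected defect data (the counts below are invented, and the customary 1.5-sigma shift is added to report a short-term sigma level):

```python
from statistics import NormalDist

def baseline_sigma(defects, units, opportunities_per_unit, shift=1.5):
    """Convert observed defect counts into a short-term sigma level."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    yield_fraction = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift

# e.g., 134 defects found in 1,000 orders, 20 defect opportunities each
print(round(baseline_sigma(134, 1000, 20), 2))
```

The resulting figure gives the team a single baseline number against which improvement in later DMAIC phases can be measured.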
The team also maps the existing processes to understand the process flow and
extenuating factors. It’s interesting to note that TOC draws diagrams of networks
and Lean uses value-stream mapping to depict flow. Some of the tools that may
be used in this stage to ensure that the measurement system is sufficiently accu-
rate are cause and effect diagram (C&E), quality function deployment diagram
(QFD), preliminary FMEA (failure modes and effects analysis), and measure-
ment system analysis (MSA). The deliverables for this phase are:
Step 3—Analyze
The objective of this phase is to stratify and analyze the opportunity in order
to identify the major causes of unacceptable variation and describe it in an eas-
ily understood problem statement. It is critical to pinpoint and validate the root
causes that when eliminated resolve the unacceptable variation. Consequently,
the real sources of the variation that result in defects causing customer dissat-
isfaction must also be determined. This is done by applying statistical tools
to narrow the possibilities to a few, collecting and analyzing data, and testing
hypotheses to determine the significant input variables. The result is a list of a
few input variables that may be causing the excessive variation. Some statistical
tools typically employed are:
• Histograms
• Box plots
• Multi-vari studies
• Correlation
• Regression
Step 4—Improve
In this step of the methodology the objectives are to:
• Identify, evaluate, and select the right improvement solutions.
• Develop a change management approach to assist in implementing the
recommended changes.
In this phase, a new process model that demonstrates that the recommended
changes will yield the desired result needs to be established. Tools and activities
supporting this phase include:
• Design of experiments (DOE) to develop a mathematical prediction
model
• Structured decision tools to select the input variables needed to
optimize process performance
The deliverables of this stage are:
• Process maps and documentation
• Solutions
• Change maps
• Implementation milestones
• Improvement impacts and benefits
• Storyboards
Step 5—Control
Objectives of this phase are to:
• Maintain the gains, understand the importance of planning and
executing against the plan, and determine the approach to be taken to
assure achievement of the targeted results.
• Understand how to disseminate the lessons learned and standardize
the approach to improving other opportunities/processes.
• Develop related plans.
Typical activities are:
• Developing a pilot solution and plan
• Verifying that a reduction in the identified root causes actually
resulted in the expected improvement
Summary/Evaluation
TOC ranks high in the primary measurements it proposes (T, I, OE) to evalu-
ate improvements. In addition to these measurements, the primary strength of
TOC is its laser-like focus on where to make improvements, i.e., where to de-
vote improvement energies. Typically, it requires first identifying either a core
problem or a constraint to additional throughput so that efforts are focused on
eliminating core problems and breaking constraints.
Because generating throughput (T) is the longest improvement lever, TOC implementers often focus heavily on how much of the constraint’s time is used to actually produce products or deliver services (often referred to as “blue light”).
This is a good indicator of the available capacity that is actually being used. In
addition, its use of time buffers prior to constraints protects throughput and helps
prioritize the downstream disruptions that need the most attention. Because of its
focusing capabilities, the ratio between bottom-line benefits and effort expended
tends to be high.
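The T, I, OE measurements translate directly into bottom-line indicators. A minimal sketch with invented figures, using the standard throughput-accounting definitions (T is revenue minus totally variable cost; net profit is T minus OE; ROI is net profit over I):

```python
def toc_metrics(revenue, totally_variable_cost, investment, operating_expense):
    """Throughput-accounting indicators as defined in TOC."""
    t = revenue - totally_variable_cost      # throughput (T)
    net_profit = t - operating_expense       # T - OE
    roi = net_profit / investment            # (T - OE) / I
    return t, net_profit, roi

# Hypothetical annual figures for a small plant:
t, profit, roi = toc_metrics(1_000_000, 400_000, 1_500_000, 450_000)
print(t, profit, roi)   # 600000 150000 0.1
```

Because every improvement project can be scored on how it moves T, I, and OE, these three measurements tie local actions directly to bottom-line results.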
TOC also offers several thinking-process tools that are useful in identifying what to change, what to change to, and how to create the change.
While TOC’s strengths are where we should focus efforts, its shortcoming
lies in an absence of robust tools to solve the specific problems it identifies. For
example, it may correctly find that considerable capacity is being lost at a con-
straint due to long setups and excessive scrap. However, it lacks specific tools for
resolving these problems. In addition, while its use of time buffers prior to the
constraint is helpful in identifying the major disruptions like an out-of-control
process, it again lacks specific tools for fixing these problems.
In summary, TOC measures improvements through T, I, and OE, which are
excellent indicators of bottom-line benefits. It also focuses internal efforts on Archi-
medes’s long-lever opportunities by stressing the value of exposing hidden capac-
ity within the system or resolving core problems that limit performance. Although
it stresses the great value of using exposed capacity to generate more revenue with
little operating expense, its suggestions of how to accomplish this goal are largely
anecdotal. Its greatest shortcoming is a paucity of proven tools to reduce the waste
and variations that slow and disrupt the flow in our river systems.
Lean offers an impressive and proven array of tools to reduce waste, but it
lacks a focusing mechanism to point at the most important wastes to eliminate.
Lacking such a mechanism, we, like Ohno, are often relegated to improving ev-
erything rather than focusing on the long levers of Archimedes.
Six Sigma brings a variety of excellent statistical tools to our CPI efforts. Its focus on reducing variations in activities and processes has contributed mightily to improvements in the quality of products and the reliability of processes.
The define step of the DMAIC process does assist in focusing efforts on higher
potential opportunities. However, it lacks TOC’s more rigorous measurements (T,
I, OE) and global approach to focusing on exposing and capitalizing on hidden
constraint capacity. In addition, because it is project-oriented, it often focuses on improving a part of the system rather than the overall system.
Some firms have already combined the Lean and Six Sigma approaches and
generated good results. In just six years, Pella more than doubled its sales in a
Robert E. Fox is a founder of The Goldratt Institute, The TOC Center, Inc.,
and Viable Vision LLC. He earned an MS in Industrial Administration from
Carnegie Mellon and a BS in Engineering from the University of Notre Dame.
He has extensive industrial and consulting experience and has served as Vice
President of Booz & Co. and President of Tyndale, Inc. He authored The Race
and The Theory of Constraints Journal. In honor of his 50 years of contribution to organizational improvement, the Fox Award was established to honor organizations and individuals who have demonstrated excellence. Stephen Covey and Peter Senge have been recipients of a lifetime Fox Award.